Zuplo's managed edge deployment has a 500MB request body size limit. For
applications that need to handle larger files, you can generate pre-signed S3
URLs that allow clients to upload directly to Amazon S3, bypassing the gateway
entirely.
Managed Dedicated
If you require larger request sizes, consider Zuplo's Managed Dedicated offering, which allows custom request size limits. Contact your Zuplo representative for more information.
This approach offers several benefits:
Upload files larger than 500MB
Reduce bandwidth costs and latency
Offload file transfer from your gateway
Maintain security through temporary, scoped upload permissions
Prerequisites
Before you begin, you need:
An AWS account with S3 access
An S3 bucket configured for your uploads
AWS credentials (Access Key ID and Secret Access Key) with S3 write
permissions
The AWS region where your bucket is located
Store your AWS credentials securely in Zuplo environment variables:
AWS_ACCESS_KEY_ID - Your AWS access key
AWS_SECRET_ACCESS_KEY - Your AWS secret key
AWS_REGION - Your S3 bucket region (for example, us-east-1)
AWS_S3_BUCKET - Your S3 bucket name
Installing Dependencies
If you're developing locally and want code completion in your project, install the AWS SDK packages for S3. These dependencies are already available in the Zuplo runtime.
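For example, assuming you use npm (the @aws-sdk/s3-presigned-post package is only needed for the pre-signed POST flow covered later):

npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner @aws-sdk/s3-presigned-post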
Create a new module in your Zuplo project that generates pre-signed URLs. This
handler accepts file metadata and returns a signed URL that clients can use to
upload directly to S3.
modules/s3-signed-url.ts
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { ZuploContext, ZuploRequest, environment } from "@zuplo/runtime";

interface UploadRequest {
  fileName: string;
  contentType: string;
  // Optional: add custom metadata fields
  metadata?: Record<string, string>;
}

interface UploadResponse {
  uploadUrl: string;
  key: string;
  expiresIn: number;
}

export default async function (
  request: ZuploRequest,
  context: ZuploContext,
): Promise<Response> {
  // Parse request body
  const body = (await request.json()) as UploadRequest;

  if (!body.fileName || !body.contentType) {
    return new Response(
      JSON.stringify({
        error: "fileName and contentType are required",
      }),
      {
        status: 400,
        headers: { "content-type": "application/json" },
      },
    );
  }

  // Configure S3 client
  const s3Client = new S3Client({
    region: environment.AWS_REGION,
    credentials: {
      accessKeyId: environment.AWS_ACCESS_KEY_ID!,
      secretAccessKey: environment.AWS_SECRET_ACCESS_KEY!,
    },
  });

  // Generate a unique key for the file
  // Consider adding user ID or other identifiers to organize uploads
  const timestamp = Date.now();
  const key = `uploads/${timestamp}-${body.fileName}`;

  // Create the put object command
  const command = new PutObjectCommand({
    Bucket: environment.AWS_S3_BUCKET,
    Key: key,
    ContentType: body.contentType,
    Metadata: body.metadata,
  });

  try {
    // Generate pre-signed URL that expires in 1 hour
    const expiresIn = 3600;
    const uploadUrl = await getSignedUrl(s3Client, command, { expiresIn });

    const response: UploadResponse = {
      uploadUrl,
      key,
      expiresIn,
    };

    return new Response(JSON.stringify(response), {
      status: 200,
      headers: { "content-type": "application/json" },
    });
  } catch (error) {
    context.log.error("Failed to generate signed URL", error);
    return new Response(
      JSON.stringify({
        error: "Failed to generate upload URL",
      }),
      {
        status: 500,
        headers: { "content-type": "application/json" },
      },
    );
  }
}
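Once the route is in place (see the next section), a client first requests a signed URL from the gateway and then sends the file directly to S3. A minimal sketch, assuming a hypothetical /upload-url route and a browser File object named file:

// 1. Request a pre-signed URL from the gateway
const res = await fetch("https://your-gateway.example.com/upload-url", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({ fileName: file.name, contentType: file.type }),
});
const { uploadUrl } = await res.json();

// 2. Upload the file directly to S3, bypassing the gateway
// The Content-Type header must match the contentType that was signed
await fetch(uploadUrl, {
  method: "PUT",
  headers: { "content-type": file.type },
  body: file,
});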
Configuring the Route
Add a route in your routes.oas.json file to expose this handler:
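A sketch of the relevant paths entry, assuming the module above and a hypothetical /upload-url path; adapt the path, summary, and policies to your API:

{
  "/upload-url": {
    "post": {
      "summary": "Generate a pre-signed S3 upload URL",
      "x-zuplo-route": {
        "corsPolicy": "none",
        "handler": {
          "export": "default",
          "module": "$import(./modules/s3-signed-url)"
        }
      }
    }
  }
}

URL Expiration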
Pre-signed URLs expire after the specified duration (one hour in the example above). Adjust the expiresIn parameter based on your needs:
// Shorter expiration for sensitive uploads
const expiresIn = 600; // 10 minutes

// Longer expiration for large files
const expiresIn = 7200; // 2 hours
File Organization
Consider organizing uploads by user or purpose to simplify management:
// Organize by user and date
const userId = request.user.sub; // From authentication
const date = new Date().toISOString().split("T")[0];
const key = `uploads/${userId}/${date}/${timestamp}-${body.fileName}`;
Content Type Validation
Validate file types before generating signed URLs:
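A minimal sketch, assuming an allowlist checked in the handler before the signed URL is generated (the list here is illustrative):

// Hypothetical allowlist; adjust for your use case
const ALLOWED_TYPES = ["image/jpeg", "image/png", "application/pdf"];

if (!ALLOWED_TYPES.includes(body.contentType)) {
  return new Response(
    JSON.stringify({ error: "Unsupported content type" }),
    { status: 415, headers: { "content-type": "application/json" } },
  );
}

File Size Limits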
While S3 can handle files up to 5TB, you may want to enforce size limits. Add
validation on the client side and consider implementing S3 bucket policies to
enforce maximum object sizes.
Advanced Features
Multipart Upload for Very Large Files
For files larger than 5GB, use multipart uploads. This requires generating
signed URLs for each part:
import {
  CreateMultipartUploadCommand,
  UploadPartCommand,
} from "@aws-sdk/client-s3";

// Create multipart upload
const multipartCommand = new CreateMultipartUploadCommand({
  Bucket: environment.AWS_S3_BUCKET,
  Key: key,
  ContentType: body.contentType,
});
const multipartUpload = await s3Client.send(multipartCommand);
const uploadId = multipartUpload.UploadId;

// Generate signed URLs for each part
// Client uploads each part separately, then completes the upload
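One way to fill in those last two steps, sketched for a single part (etag1 stands in for the ETag header the client receives when it uploads the part):

import {
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} from "@aws-sdk/client-s3";

// Sign a URL for one part (part numbers are 1-based)
const partUrl = await getSignedUrl(
  s3Client,
  new UploadPartCommand({
    Bucket: environment.AWS_S3_BUCKET,
    Key: key,
    UploadId: uploadId,
    PartNumber: 1,
  }),
  { expiresIn: 3600 },
);

// Once every part is uploaded, complete the upload with each part's ETag
await s3Client.send(
  new CompleteMultipartUploadCommand({
    Bucket: environment.AWS_S3_BUCKET,
    Key: key,
    UploadId: uploadId,
    MultipartUpload: { Parts: [{ ETag: etag1, PartNumber: 1 }] },
  }),
);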
Upload Notifications
Set up S3 event notifications to trigger actions when uploads complete:
Configure S3 bucket notifications to send events to SQS, SNS, or Lambda
Process uploaded files asynchronously
Update your database with file metadata
Run virus scanning or other validations
Pre-signed POST URLs
For browser uploads with additional security, use pre-signed POST URLs instead
of PUT:
import { createPresignedPost } from "@aws-sdk/s3-presigned-post";

const { url, fields } = await createPresignedPost(s3Client, {
  Bucket: environment.AWS_S3_BUCKET,
  Key: key,
  Conditions: [
    ["content-length-range", 0, 10485760], // 10MB max
    ["starts-with", "$Content-Type", "image/"],
  ],
  Expires: 3600,
});

// Client submits multipart/form-data with the fields
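On the client, the returned fields are appended to a FormData object before the file itself; a sketch, assuming a browser File named file:

const formData = new FormData();
Object.entries(fields).forEach(([name, value]) => {
  formData.append(name, value);
});
// Required to satisfy the starts-with condition on $Content-Type above
formData.append("Content-Type", file.type);
// The file entry must come last
formData.append("file", file);

await fetch(url, { method: "POST", body: formData });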
Troubleshooting
CORS Issues
If clients receive CORS errors when uploading to S3, configure CORS on your S3
bucket:
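For example, a CORS configuration (applied via the S3 console or the aws s3api put-bucket-cors command) that permits direct uploads from a single origin; replace the origin with your application's own:

[
  {
    "AllowedOrigins": ["https://app.example.com"],
    "AllowedMethods": ["PUT", "POST"],
    "AllowedHeaders": ["*"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3000
  }
]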