
Luis Valdés

Originally published at valdes.com.br

How to generate a presigned URL to upload images to S3

Cloud computing

In this post I want to share an implementation of how to create a presigned URL to upload files to an S3 bucket. The piece of technology we are going to use is an AWS library that provides the mechanism needed to upload a file to an S3 bucket by performing a POST request with a specific body and URL.

Requirements

  • git
  • Node.js 14 or later (my version is v18.18.0)
  • An AWS account and configured credentials
  • The cdk command installed
  • Docker
  • Basic knowledge of TypeScript
  • Basic knowledge of AWS Lambda

TL;DR

Clone the repo and follow the instructions to deploy the project. You can use the Gitpod configuration, which comes with Node.js, AWS CLI v2, Docker, and CDK preinstalled.

New structure of our GraphQL schema

We are ignoring the resources not related to image upload; in this case we have the query getImageUploadUrl, which returns a PresignedImageUrl.

input ImageInput {
  filename: String!
  contentType: String!
}

type Query {
  getImageUploadUrl(input: ImageInput!): PresignedImageUrl!
}

type PresignedField {
  name: String!
  value: String!
}

type PresignedImageUrl {
  id: ID!
  fields: [PresignedField!]!
  url: String!
}
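For reference, the same shapes can be written as TypeScript types on the client side. This is a hand-written sketch, not generated code, and the sample values are made up:

```typescript
// Illustrative TypeScript mirror of the GraphQL schema above.
type ImageInput = { filename: string; contentType: string };
type PresignedField = { name: string; value: string };
type PresignedImageUrl = { id: string; fields: PresignedField[]; url: string };

// A made-up example of what the query could return:
const example: PresignedImageUrl = {
  id: "01ARZ3NDEKTSV4RRFFQ69G5FAV",
  url: "https://example-assets-bucket.s3.amazonaws.com/",
  fields: [{ name: "Content-Type", value: "image/png" }],
};
```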

Create an S3 bucket for the images

In our current scenario we need to create a bucket with specific configurations: we need to allow objects to be public, and we also need to declare the CORS configuration. We set the owner of each file as the object writer, and the bucket is set up to allow files to be public by disabling blockPublicAcls.

const assetsBucket = new s3.Bucket(this, 'AssetsBucket',
 {
  objectOwnership: s3.ObjectOwnership.OBJECT_WRITER,
  blockPublicAccess: new s3.BlockPublicAccess({ blockPublicAcls: false }),
  cors: [
    {
      id: "corsRule",
      allowedMethods: [s3.HttpMethods.GET, s3.HttpMethods.POST, s3.HttpMethods.PUT],
      allowedHeaders: ['*'],
      allowedOrigins: ['*'],
      exposedHeaders: [
        "Access-Control-Allow-Origin"
      ]
    } as s3.CorsRule
  ]
 }
);

Images Table

As we did in our previous article, we are going to create a table with a global secondary index that will allow us to filter records by owner

    const cfnImagesTable = new dynamodb.CfnTable(this, 'CfnImagesTable', {
      keySchema: [
        {
          attributeName: 'id',
          keyType: 'HASH',
        },
        {
          attributeName: 'createdAt',
          keyType: 'RANGE',
        }
      ],
      attributeDefinitions: [
        {
          attributeName: 'id',
          attributeType: 'S',
        },
        {
          attributeName: 'owner',
          attributeType: 'S',
        },
        {
          attributeName: 'createdAt',
          attributeType: 'S',
        }
      ],
      billingMode: 'PAY_PER_REQUEST',
      globalSecondaryIndexes: [
        {
          indexName: 'byOwner',
          keySchema: [
            {
              attributeName: 'owner',
              keyType: 'HASH',
            },
            {
              attributeName: 'id',
              keyType: 'RANGE',
            }
          ],
          projection: {
            projectionType: 'ALL',
          },
        }
      ],
    });
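With the byOwner index in place, listing a user's images becomes a Query against that index. The parameters would look roughly like this (a plain-object sketch to pass to a DocumentClient QueryCommand; the table name and owner value are placeholders):

```typescript
// Sketch: query parameters for listing images by owner via the byOwner GSI.
// "owner" is a DynamoDB reserved-looking word, so an expression name alias is used.
const queryByOwnerParams = {
  TableName: "ImagesTable",            // placeholder table name
  IndexName: "byOwner",
  KeyConditionExpression: "#owner = :owner",
  ExpressionAttributeNames: { "#owner": "owner" },
  ExpressionAttributeValues: { ":owner": "some-username" },
};
```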

Create the get image upload url function

In order to configure our function, we will use the NodejsFunction construct. In our scenario we are adding the library ulid to generate the id of our records. Another thing to point out is that we are providing environment variables for our images table and our assets bucket name.

const getImageUploadUrl = new lambda_nodejs.NodejsFunction(this, "GetPresignedImageUrlLambdaFunction", {
  entry: path.join(__dirname, '../functions/createPresignedPost/index.ts'),
  bundling: {
    nodeModules: ['ulid'],
  },
  projectRoot: path.join(__dirname, '../functions/createPresignedPost'),
  depsLockFilePath: path.join(__dirname, '../functions/createPresignedPost/package-lock.json'),
  handler: 'getImageUploadUrl',
  runtime: lambda_.Runtime.NODEJS_20_X,
  environment: {
    IMAGES_TABLE: imagesTable.tableName,
    ASSETS_BUCKET: assetsBucket.bucketName,
  },
  role: lambdaRole,
  timeout: cdk.Duration.seconds(30)
})
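Note that lambdaRole is defined elsewhere in the stack. Since the presigned POST is signed with the function's credentials, that role needs s3:PutObject on the bucket, plus write access to the table. With L2 constructs this could be granted as below (a sketch; the article's imagesTable is a CfnTable, which has no grant helpers, so here I assume an s3.Bucket and a dynamodb.Table handle):

```typescript
// Sketch, assuming L2 constructs: grant the function's role
// s3:PutObject on the bucket and dynamodb write access to the table.
assetsBucket.grantPut(getImageUploadUrl);
imagesTable.grantWriteData(getImageUploadUrl);
```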

What does the source code of the function look like?

This article assumes you have a basic knowledge of TypeScript and AWS Lambda, so we can import all the needed modules and consider them self-explanatory.

We then declare the clients for S3 and DynamoDB, and we also create a type for the input arguments coming from the GraphQL query.

The content-length-range entry is a condition restricting the uploaded file to between 1,024 bytes and 10,485,760 bytes, i.e. a maximum file size of 10 megabytes.
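S3 enforces this condition server-side when the form is posted; as a plain illustration, the same rule expressed in TypeScript looks like this:

```typescript
// Mirrors the ["content-length-range", 1024, 10485760] POST policy condition.
const MIN_BYTES = 1_024;        // 1 KB lower bound
const MAX_BYTES = 10_485_760;   // 10 MB upper bound

function isAllowedSize(sizeInBytes: number): boolean {
  return sizeInBytes >= MIN_BYTES && sizeInBytes <= MAX_BYTES;
}

console.log(isAllowedSize(512));       // false: below the 1 KB minimum
console.log(isAllowedSize(5_000_000)); // true: within range
```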

We also insert a new record into the images table with the details of the key of the image and the url.

import { AppSyncResolverEvent, AppSyncIdentityCognito } from "aws-lambda";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { PutCommand, DynamoDBDocumentClient } from "@aws-sdk/lib-dynamodb";
import { PresignedPostOptions, createPresignedPost } from "@aws-sdk/s3-presigned-post";
import { S3Client } from "@aws-sdk/client-s3";
import { ulid } from 'ulid'
import * as path from 'path';
import { Conditions } from "@aws-sdk/s3-presigned-post/dist-types/types";

const s3Client = new S3Client({ region: "us-east-1" });
const client = new DynamoDBClient({});
const documentClient = DynamoDBDocumentClient.from(client);

type InputArguments = {
  input: {
    filename: string
    contentType: string
  }
}

export const getImageUploadUrl = async (event: AppSyncResolverEvent<InputArguments> ) => {

  const id = ulid()
  const createdAt = new Date().toJSON()
  const identity = event.identity as AppSyncIdentityCognito

  const extension = path.extname(event.arguments.input.filename)
  const Bucket = process.env.ASSETS_BUCKET || ""
  const Key = `uploaded-images/${id}${extension}`

  const conditions: Conditions[] = [
    ["starts-with", "$Content-Type", "image/"],
    ["content-length-range", 1024, 10485760],
  ]
  const Fields = {
    "Content-Type": event.arguments.input.contentType
  };
  const presignedPostOptions: PresignedPostOptions = {
    Bucket,
    Key,
    Conditions: conditions,
    Fields,
    Expires: 600
  }
  const { url, fields } = await createPresignedPost(s3Client, presignedPostOptions);

  const newFields = Object.keys(fields).map(fieldName => ({name: fieldName, value: fields[fieldName]}))

  const data = {
    id,
    owner: identity.username,
    url,
    fields: newFields,
    key: Key,
    status: "waiting_upload",
    createdAt
  }

  const command = new PutCommand({
    TableName: process.env.IMAGES_TABLE,
    Item: data
  });

  await documentClient.send(command)

  const result = {
    id,
    url,
    fields: newFields
  }

  return result
}
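To see that transformation in isolation, here is a fields record like the one createPresignedPost returns being mapped into the name/value array our schema expects (the sample values are made up):

```typescript
// Made-up sample of the fields record returned by createPresignedPost:
const fields: Record<string, string> = {
  "Content-Type": "image/png",
  key: "uploaded-images/example.png",
  "X-Amz-Signature": "abc123",
};

// Same transformation as in the handler above:
const newFields = Object.keys(fields).map((name) => ({ name, value: fields[name] }));
// newFields[0] → { name: "Content-Type", value: "image/png" }
```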

We are using the library @aws-sdk/s3-presigned-post; from it we import a function that returns a struct with url and fields. These values are going to be needed in a frontend application, for instance alongside an input tag:

<div>
  <label htmlFor='image'>Upload image</label>
  <input id="image"
        type="file"
        name="image"
        onChange={handleFileChange} />
</div>

A quick and dirty implementation of the handler; in a future post I'll share the details of the front-end application.

const handleFileChange = async (event: React.ChangeEvent<HTMLInputElement>) => {
  const selectedFile = event.target.files![0]
  const filename = selectedFile.name
  const contentType = selectedFile.type
  const response = await amplifyClient.graphql<GraphQLQuery<GetImageUploadUrlQuery>>({
    query: queries.getImageUploadUrl,
    variables: { input: {
      filename,
      contentType
    } },
  });
  const presignedImage = response.data?.getImageUploadUrl!
  const url = presignedImage.url;
  const formData = new FormData();
  presignedImage.fields.forEach((item) => {
    formData.append(item.name, item.value);
  });
  formData.append('file', selectedFile);
  try {
    await axios.post(url, formData, {
      headers: {'Content-Type': 'multipart/form-data'},
    });
  } catch (e) {
    console.log(e);
    alert("There was an error while uploading the image");
    return
  }
}
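As an alternative, if you'd rather not depend on axios, the built-in FormData and fetch (available in browsers and Node 18+) can do the same upload. This is a sketch that assumes the url and fields returned by the query above:

```typescript
type Field = { name: string; value: string };

// Build the multipart form: policy fields first, the file last
// (S3 ignores form fields that appear after the "file" field).
function buildUploadForm(fields: Field[], file: Blob): FormData {
  const formData = new FormData();
  fields.forEach((f) => formData.append(f.name, f.value));
  formData.append("file", file);
  return formData;
}

async function uploadWithFetch(url: string, fields: Field[], file: Blob): Promise<void> {
  const res = await fetch(url, { method: "POST", body: buildUploadForm(fields, file) });
  if (!res.ok) throw new Error(`Upload failed with status ${res.status}`);
}
```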

Create datasource and resolver

We first need to create a Lambda data source, using our GraphQL API and our get image upload url function as parameters. Once the data source exists we call the method createResolver; in this case it is a standard implementation of a Lambda data source resolver for a query.

const getImageUploadUrlDataSource = new appsync.LambdaDataSource(this, "GetImageUploadUrlDataSource", {
  api: graphqlApi,
  lambdaFunction: getImageUploadUrl
})

getImageUploadUrlDataSource.createResolver("GetImageUploadUrlResolver", {
  typeName: 'Query',
  fieldName: 'getImageUploadUrl',
  requestMappingTemplate: appsync.MappingTemplate.lambdaRequest(),
  responseMappingTemplate: appsync.MappingTemplate.lambdaResult()
})

Next Steps

  • Image moderation and image recognition
