
Camilo Reyes for AppSignal

Originally published at blog.appsignal.com

Optimize Your AWS Lambdas with TypeScript

Last time in this series, we looked at improving the developer experience. As your Lambda function gains more features and dependencies, you may notice that the bundle size grows quickly. This can negatively affect deployments and cold starts.

A bigger bundle means deployments take longer to upload to the AWS cloud. The JavaScript engine has more work to do before it can execute the Lambda function.

In this take, we will focus on optimizing our Lambda function. We'll explore techniques to reduce the bundle size and minimize startup costs.

Ready? Let’s go!

The Current Situation

To verify the current bundle size, log in to AWS, click on the pizza-api Lambda function, and then check the Code tab. With all the work we've done so far, the bundle size sits at around 40 megabytes. Any real-world solution can easily grow to hundreds of megabytes.

First, let’s measure our current situation by deploying the latest code to the AWS cloud. We can use a tool in PowerShell called Measure-Command to figure out how long this takes. Feel free to use a similar tool if you are on Linux/macOS, or simply use a stopwatch.

> Measure-Command { npm run update }

For Linux/macOS, you can use the following command:

> time npm run update

From my local machine, and pointing to the us-east-1 region with a decent internet connection, this update command runs for almost three minutes. That's likely because this is how long it takes to upload 40 megabytes, given my upload speed.

Claudia runs npm pack under the hood and creates a zip file that can be uploaded to the AWS cloud. Even with a compressed bundle, this still takes a few minutes to complete.

Next, fire a curl request to the GET pizza endpoint. Be sure to quit PowerShell or any benchmarking tool before proceeding, since we won’t need a timer on this command.

curl -X GET -i -H "Accept: application/json" -H "Content-Type: application/json" https://API_GATEWAY_API.execute-api.us-east-1.amazonaws.com/latest/pizzas/Pepperoni-Pizza

If you have been following along, you should already have a pepperoni pizza stored from the previous parts. Now, log in to AWS and check the logs via CloudWatch under pizza-api. Look for an entry like the one below:

Duration: 553.07 ms Billed Duration: 554 ms Memory Size: 128 MB Max Memory Used: 79 MB Init Duration: 401.82 ms

Keep this in mind for future reference: the init duration came in at roughly 400 ms. This is the cost you pay every time the Lambda function cold starts in a fresh execution environment.

Since this is the serverless cloud, there are no real guarantees on how long the execution environment stays warm, so you can incur this cost at any time.
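To make the Init Duration above more concrete: everything at module scope runs once per cold start, before the handler body executes. Here is a minimal sketch (the handler and client names are hypothetical, not from the pizza-api code):

import { DynamoDBClient } from "@aws-sdk/client-dynamodb";

// Module scope: runs during the Init Duration, once per cold start
const client = new DynamoDBClient({});

// Handler body: runs on every invocation, warm or cold
export const handler = async () => {
  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
};

The less code the runtime has to load and evaluate at module scope, the smaller that init number gets.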

Claudia Optimizations

Claudia comes with an arsenal of command line flags to tackle these issues. We’ll focus on the following flags:

  • no-optional-dependencies - with this, optional dependencies in package.json won't be uploaded to the Lambda function
  • runtime - controls the Node.js runtime version used to execute the function; check AWS for the list of available runtimes
  • arch - specifies the CPU architecture used to execute the code
  • memory - sets the memory allocated to the function; the value must be a multiple of 64 MB and cannot be less than 128 MB

If your goal is to improve cold starts, we recommend setting the runtime to the latest version of Node available (nodejs16.x at the time of writing). The default architecture is x86_64, so it is fine to stick with this setting. Memory-hungry apps can also take longer to start up, so set the memory to the minimum of 128 MB.

Since we are also concerned about the bundle size, set the no-optional-dependencies flag in the command line tool. This helps shed a lot of weight during uploads.

With these command line flags in mind, open the package.json file, and change the update command.

{
  "update": "claudia update --cache-api-config apiConfig --no-optional-dependencies --runtime nodejs16.x --arch x86_64 --memory 128"
}

An identical set of flags can also go on the create command, but we’ll leave this as an exercise for you.

TypeScript Optimization of AWS Lambdas

Next, we’ll tackle a couple of optimizations within the code itself.

Pop open the tsconfig.json file and make the following change:

{
  "target": "esnext"
}

By default, the compiler targets an older version of JavaScript with no native support for async/await. This adds bloat to the output code because the transpiler builds a state machine that returns a Promise, declares a generator, and replaces each await with a yield. If you take a peek in the dist folder, you may notice that the output JS files declare an __awaiter helper at the top of the file. This happens in every single file that uses async/await.

By setting the target to ESNext, we tell the TypeScript compiler to emit async/await as-is and let the JavaScript engine handle it natively.

Because we have taken control of the runtime setting on AWS, it is safe to assume Node 16.x supports async/await. The TypeScript compiler works less, and the output loses weight.
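To make this concrete, here is a rough sketch of the difference for a trivial async function (bakePizza is only an illustration, and the exact helper output varies with the target and compiler version):

// Source TypeScript
export async function bakePizza(): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, 100));
  return "done";
}

// With an older target, tsc wraps the body in an __awaiter helper plus a generator,
// roughly: __awaiter(this, void 0, void 0, function* () { yield ...; return "done"; }),
// and that helper is repeated in every file that uses async/await.
// With "target": "esnext", the async function above is emitted essentially as-is.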

API Payload Optimization

The API response itself carries extra baggage from DynamoDB. Take a look at your response body and notice how every field has a prefix.

{
  "ingredients": { "SS": ["cheese", "pepperoni", "tomato sauce"] },
  "url": { "S": "Pepperoni-Pizza" },
  "name": { "S": "Pepperoni Pizza" }
}

Keys like SS and S are DynamoDB attribute types. They tell the database to allow only the declared type for that field. For example, S marks a string field, and SS marks a set of strings.

Unmarshalling strips this extra weight and slims down the payload in the API response.
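As a quick sketch, unmarshall from @aws-sdk/util-dynamodb turns the typed item above into a plain object. Note that an SS field comes back as a JavaScript Set:

import { unmarshall } from "@aws-sdk/util-dynamodb";

const plain = unmarshall({
  name: { S: "Pepperoni Pizza" },
  url: { S: "Pepperoni-Pizza" },
  ingredients: { SS: ["cheese", "pepperoni", "tomato sauce"] },
});
// => { name: "Pepperoni Pizza", url: "Pepperoni-Pizza",
//      ingredients: Set(3) { "cheese", "pepperoni", "tomato sauce" } }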

Open PizzaDb and make the following changes:

import { unmarshall } from "@aws-sdk/util-dynamodb";

// Inside tastePizza, replace the return with this code
const pizza = unmarshall(pizzaItem); // strips the DynamoDB type descriptors
const { ingredients } = pizza; // the SS field comes back as a Set

return { ...pizza, ingredients: [...ingredients] }; // spread the Set into an array for JSON serialization

Then, add one more dependency to this project:

npm i @aws-sdk/util-dynamodb --save-dev

The spread operator converts a set type to an array. This is necessary because JSON.stringify does not support sets. You can verify this new behavior by running the unit tests. We’ll leave fixing the test as an exercise for you.
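If you want to see why the spread is needed, here is a small snippet you can paste into a Node REPL:

const ingredients = new Set(["cheese", "pepperoni", "tomato sauce"]);

JSON.stringify({ ingredients });
// => '{"ingredients":{}}' (the set's contents are silently dropped)

JSON.stringify({ ingredients: [...ingredients] });
// => '{"ingredients":["cheese","pepperoni","tomato sauce"]}'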

Optimizing AWS Lambdas with Webpack

Time for the pièce de résistance. Of all the techniques used so far, Webpack makes the biggest improvement to the bundle size.

Install the following NPM dev packages:

npm i webpack webpack-cli --save-dev

Every bit counts: the goal here is to tree-shake the dependency graph so that only the code we actually use ends up in the final bundle file.

To accomplish this goal, create a webpack.config.js file in the root folder. Webpack automatically knows to read this configuration from this location.

const path = require("path");

module.exports = {
  // Entry point: the JavaScript that tsc already emitted into dist/
  entry: [path.join(__dirname, "dist/api.js")],
  output: {
    path: path.join(__dirname, "pub"),
    filename: "bundle.js",
    // Expose the exported API object in CommonJS form so Claudia can pick it up
    libraryTarget: "commonjs",
  },
  // Server-side bundle: keep Node built-ins out of the output
  target: "node",
  // Production mode enables minification and tree shaking
  mode: "production",
};

There are a couple of caveats. The target: "node" setting tells Webpack this is server-side code, while libraryTarget: "commonjs" exposes the API object on the bundle's exports so Claudia can pick it up and deploy our Lambda function. We also must specify the entry point of our app and the output filename.

Because the code gets compiled by TypeScript, we can reference the transpiled output in the dist folder directly without introducing another transpiler like Babel.

Now, open the package.json file and change the scripts. Thanks to npm's pre-hooks, every update will now type-check and bundle the code first:

{
  "preupdate": "npm run bundle",
  "prebundle": "npm run type-check",
  "bundle": "webpack"
}

Finally, change the claudia.json file — set the module property to pub/bundle.

Unfortunately, Claudia does not update the Lambda configuration on AWS with this new module setting. So, go to the pizza-api Lambda function on AWS, edit the runtime settings, and set the Handler to pub/bundle.proxyRouter. This is the new entry point of our Lambda function.
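For context, proxyRouter is a property that claudia-api-builder exposes on the API instance our entry module exports, which is why the handler points at that property on the bundle. A rough sketch of the entry point (your api.ts from the earlier parts may differ in detail):

// api.ts (sketch): the ApiBuilder instance exposes a proxyRouter Lambda handler
import ApiBuilder = require("claudia-api-builder");

const api = new ApiBuilder();

// ...routes such as api.get("/pizzas/{name}", ...) are registered here...

export = api; // the api object, including proxyRouter, ends up on the bundle's exports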

Because Webpack now inlines every runtime dependency into bundle.js, NPM dependencies no longer need to be uploaded. In package.json, simply rename dependencies to optionalDependencies; combined with the no-optional-dependencies flag from earlier, they will be left out of the deployment package.

The NPM Ignore File

Claudia uses npm pack to create the zipped bundle, and one effective way to explicitly tell it which files to exclude is via .npmignore.

Create the .npmignore file:

@types/
test/
dist/
roles/
node_modules/
claudia.json
tsconfig.json
webpack.config.js

This excludes all extraneous folders and files from the bundle and drastically reduces the size before it uploads to the AWS cloud.

The Optimized Situation

Like before, let’s measure our deployment times:

> Measure-Command { npm run update }

For Linux/macOS, use the following command:

> time npm run update

This time, the deployment completes in under one minute, roughly three times faster than before.

Next, fire a request to the same GET endpoint.

curl -X GET -i -H "Accept: application/json" -H "Content-Type: application/json" https://API_GATEWAY_API.execute-api.us-east-1.amazonaws.com/latest/pizzas/Pepperoni-Pizza

On AWS, check the logs via CloudWatch under pizza-api.

Duration: 548.26 ms Billed Duration: 549 ms Memory Size: 128 MB Max Memory Used: 66 MB Init Duration: 242.30 ms

Note the startup cost and memory used. The init duration drops from roughly 400 ms to about 240 ms, cutting the cold start almost in half. The memory used is also lower, likely because the VM no longer loads megabytes of unused code.

It turns out that the same optimizations we once applied in the browser can also be applied to the AWS serverless cloud, because bundle size and cold starts are critical to JavaScript performance in both environments.

The two are similar in that bundled JavaScript code must spin up, execute, and tear down with little overhead, inside a sandboxed environment.

Next Up: Secure Your AWS Lambdas

Optimizing our Lambda function is all about two things: reducing the bundle size and improving cold starts. What’s nice is that all the optimization techniques you are already familiar with on the browser can also be employed on the serverless cloud.

In the fourth and final part of this series, we’ll look at securing our Lambda function via Cognito.

Until next time!

P.S. If you liked this post, subscribe to our JavaScript Sorcery list for a monthly deep dive into more magical JavaScript tips and tricks.

P.P.S. If you need an APM for your Node.js app, go and check out the AppSignal APM for Node.js.
