I remember a time when I had to remotely connect to a server using FTP to deliver a web application's source code. Those were the "Wild West" days of my career, when almost all of the code I wrote was untyped and the manual deployment process could bring down the production website with one wrong keystroke. 😱
Fast forward to the present, I have complete confidence in the code I write in Node.js and TypeScript, strongly typed domain objects modelled with Prisma, and one keystroke is all I need to deploy a feature to AWS Lambdas, all without a dedicated server. What a time to be alive!
In this post, I will talk about one of my favorite setups to go from almost zero to deploying Prisma-integrated AWS Lambdas with shared Lambda layers.
🔍 Overview
- Prerequisites
- How it works
- Setting up the project
- Setting up Prisma
- Creating a Lambda
- Creating Lambda layers
- Deployment
- General notes
- Summary
🗝️ Prerequisites
To make full use of this guide, make sure you have the following items set up:
- AWS account (free tier)
- Serverless account (free tier)
- One AWS RDS database
- One AWS IAM user
🛠️ How it works
AWS Lambda and Lambda layers
An AWS Lambda is a computing service that allows you to run code without managing a server. A Lambda is normally small and has a size limit of 50MB (zipped). It may seem like a lot, but if we were to bundle a Lambda with all the dependencies in `node_modules`, it would easily go over the limit.
A good practice is to keep only the main business logic in the Lambda function. Keeping a Lambda small also means it takes less time to deploy.
All imports should be treated as externals, i.e. as if they come from `node_modules`. These externals usually come from Lambda layers. I like to split my layers into 3 (you can use up to 5 layers):

- Normal runtime dependencies layer: This layer contains the runtime dependencies installed from package registries. These are typically declared in the `dependencies` field of `package.json`.
- Prisma Client layer: I like to keep Prisma-related dependencies as their own layer because the way they are generated is fairly different from other packages.
- Libs layer: This layer includes custom utilities that can be shared between Lambdas and other apps.
We will explore how to create these layers for Node.js Lambdas later in this post.
Prisma binary
Prisma is an ORM where a TypeScript client is generated based on a schema so consuming apps have type-safety when querying databases. It also creates binaries to run in different environments. We will need to generate different binaries to run in the dev environment and in a Lambda.
Serverless Framework
Serverless Framework is a service that does a lot of heavy lifting when it comes to deploying to AWS.
🏡 Setting up the project
Let's start with a minimal setup. It should have the following structure:
prisma/
scripts/
src/
-- lambdas/
-- libs/
package.json
Now, install the following:
$ yarn add -D prisma @prisma/client @types/node ts-node typescript
Create a `tsconfig.json` that looks like this:
{
  "compilerOptions": {
    "lib": ["es2016", "esnext.asynciterable"],
    "baseUrl": "./src",
    "outDir": "./build",
    "paths": {
      "@libs/*": ["libs/*"]
    },
    "target": "ESNext",
    "module": "commonjs",
    "sourceMap": true,
    "strict": true,
    "skipLibCheck": true,
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,
    "noUnusedLocals": true,
    "noImplicitAny": true
  },
  "include": ["./src"],
  "exclude": ["node_modules"]
}
The important setting here is `"@libs/*": ["libs/*"]`. This is used so we can use `@libs` as the alias to import modules from `src/libs` locally.
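To make the alias concrete, here is a minimal sketch of how an import looks from inside `src/lambdas` (a preview of the lib and handler we will create later in this post):

// Any file under src/ can use the alias instead of a relative path
import { createPrismaClient } from "@libs/createPrismaClient"; // instead of "../../libs/createPrismaClient"

const prisma = createPrismaClient();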
🥞 Setting up Prisma
Let's create a minimal Prisma schema:
// ./prisma/schema.prisma

datasource db {
  provider = "mysql"
  url      = env("PRISMA_DATABASE_URL")
}

generator client {
  provider      = "prisma-client-js"
  binaryTargets = [env("PRISMA_BINARY_TARGET")]
}

model User {
  id   Int    @id @default(autoincrement())
  uuid String @unique
}
Notes:
- There will be a `User` table when migrations are run.
- `env("PRISMA_DATABASE_URL")` is used to provide different database URLs depending on the environment.
- `env("PRISMA_BINARY_TARGET")` is used to generate and run different binaries depending on where we run the code.
The default value of `binaryTargets` is `native`. This means a Prisma Client will be generated based on the current operating system where the command is run. We will use this in dev. You can learn about the native binary target here.
We want to set this value to `rhel-openssl-1.0.x` when we generate the Prisma Client to be used by the Lambda. You can find all of the available `binaryTargets` options here.
Docker is my first choice when it comes to separating environments, but for this example we will use a `.env` file for simplicity:
// ./.env
# Database URL could be different depending on your setup:
PRISMA_DATABASE_URL=mysql://root:root@localhost/lambda_test?schema=public
# `native` is used in dev so it will generate
# the binary based on your current OS.
# `rhel-openssl-1.0.x` should be used for AWS lambda.
PRISMA_BINARY_TARGET=native
Running the following command should create a new database locally:
$ yarn prisma migrate dev
The rest of this guide assumes you have run the same migration on the prod RDS database. For more information, please refer to the Prisma migrate documentation. For this guide's purpose, you can export your dev database structure and content and import them into your RDS database.
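If you prefer to run the migrations against RDS directly instead of importing a dump, one option (assuming your Prisma version includes `prisma migrate deploy` and your machine is allowed to reach the RDS instance) is to point the database URL at prod for a single command. The host and credentials below are placeholders:

$ PRISMA_DATABASE_URL="mysql://USER:PASSWORD@YOUR-RDS-HOST:3306/lambda_test" yarn prisma migrate deploy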
Now, we can create a new lib to initialise a Prisma Client:
// ./src/libs/createPrismaClient.ts
import { PrismaClient } from "@prisma/client";

export const createPrismaClient = (): PrismaClient => {
  const prisma = new PrismaClient();
  return prisma;
};
We could directly do this in the Lambda but extracting this into a lib has many advantages:
- Consistent logic when creating Prisma Clients e.g. we might want to add a middleware to all Clients (see the sketch after this list).
- Can be used by different services e.g. we can use this function in a Node.js web app, Lambdas, or tests.
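On the first point, here is a minimal sketch of what such a shared middleware could look like, using Prisma's `$use` API to log slow queries. The 1000ms threshold and the logging itself are illustrative choices, not part of the setup above:

// ./src/libs/createPrismaClient.ts (an illustrative variation, not the version used in this guide)
import { PrismaClient } from "@prisma/client";

export const createPrismaClient = (): PrismaClient => {
  const prisma = new PrismaClient();

  // Middleware runs around every query issued by clients created through this lib
  prisma.$use(async (params, next) => {
    const start = Date.now();
    const result = await next(params);
    const duration = Date.now() - start;
    if (duration > 1000) {
      console.warn(`Slow query: ${params.model}.${params.action} took ${duration}ms`);
    }
    return result;
  });

  return prisma;
};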
👷 Creating a Lambda
Let's create a Lambda that, when invoked, creates a new user with a UUID in the database. Note that Prisma is able to create UUIDs for records natively, but doing it ourselves will demonstrate how the runtime dependencies layer works.
First, let's install the packages to generate UUID:
$ yarn add uuid
$ yarn add -D @types/uuid
And here's the code for the Lambda:
// ./src/lambdas/insertUser/handler.ts
import { createPrismaClient } from "@libs/createPrismaClient";
import { v4 as uuidv4 } from "uuid";

const handler = async (): Promise<void> => {
  const prisma = createPrismaClient();

  try {
    await prisma.user.create({
      data: { uuid: uuidv4() },
    });
  } catch (e) {
    console.error(e);
  }

  prisma.$disconnect();
};

export default handler;
Note that we should always run `prisma.$disconnect()` at the end of every Lambda to ensure we do not hold connections to the database.
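If you want to be stricter about this, a variation is to await the disconnect in a `finally` block so it completes even when the query throws. This is only a sketch of an alternative shape, not a change to the handler above:

// An alternative shape for ./src/lambdas/insertUser/handler.ts
import { createPrismaClient } from "@libs/createPrismaClient";
import { v4 as uuidv4 } from "uuid";

const handler = async (): Promise<void> => {
  const prisma = createPrismaClient();
  try {
    await prisma.user.create({ data: { uuid: uuidv4() } });
  } catch (e) {
    console.error(e);
  } finally {
    // Awaiting makes sure the connection is released before the Lambda execution is frozen
    await prisma.$disconnect();
  }
};

export default handler;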
We will use TypeScript to compile this code:
$ yarn tsc
This will be compiled into the `build` folder:
// ./build/lambdas/insertUser/handler.js
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
const prismaClient_1 = require("@libs/createPrismaClient");
const uuid_1 = require("uuid");
const handler = async () => {
    const prisma = prismaClient_1.createPrismaClient();
    try {
        await prisma.user.create({
            data: { uuid: uuid_1.v4() },
        });
    }
    catch (e) {
        console.error(e);
    }
    prisma.$disconnect();
};
exports.default = handler;
//# sourceMappingURL=handler.js.map
This file can be uploaded to AWS as a Lambda function. However, it imports `@libs/createPrismaClient` and `uuid` as if they come from `node_modules`. We must create the Lambda layers that hold these dependencies.
🧅 Creating Lambda layers
As mentioned above, there are 3 Lambda layers we want to create:
- Runtime dependencies layer
- Prisma layer
- Libs layer
A Lambda layer is a zip file that looks like this:
layer.zip
-- nodejs
   -- node_modules
      -- lib1
      -- lib2
      ...
You can read more about Lambda layers here. We will need to create one such file for each layer:
// Runtime dependencies layer
lambda-layers-node_modules.zip
-- nodejs
   -- node_modules
      -- uuid

// Prisma layer
lambda-layers-prisma-client.zip
-- nodejs
   -- node_modules
      -- .prisma
      -- @prisma

// Libs layer
lambda-layers-libs.zip
-- nodejs
   -- node_modules
      -- @libs
         -- createPrismaClient.js
Let's create 3 separate scripts that can prepare these layers for us.
Runtime dependencies layer
This script should be in `./scripts/ci/prepare-node-modules-lambda-layer.sh`:
#!/bin/bash

function prepare_node_modules_lambda_layer() {
  echo "Cleaning up workspace ..."
  rm -rf lambda-layers-node_modules

  echo "Creating layer ..."
  mkdir -p lambda-layers-node_modules/nodejs

  echo "Prepare server node_modules lambda layer ..."
  cp -r node_modules lambda-layers-node_modules/nodejs

  echo "Compressing ..."
  pushd lambda-layers-node_modules && tar -zcf /tmp/nodejs.tar.gz . && mv /tmp/nodejs.tar.gz ./nodejs.tar.gz

  echo "Remove unzipped files ..."
  rm -rf nodejs

  echo "Stats:"
  ls -lh nodejs.tar.gz

  popd
}

prepare_node_modules_lambda_layer
Note: this script archives everything in `node_modules`, so if you generated a Prisma Client here, it would be included in this layer. The script is intended to be used in CI/CD, in a step where we install only the production `package.json` dependencies.
Prisma layer
This script should be in `./scripts/ci/prepare-prisma-client-lambda-layer.sh`:
#!/bin/bash

function prepare_prisma_client_lambda_layer() {
  echo "Cleaning up workspace ..."
  rm -rf lambda-layers-prisma-client

  echo "Creating layer ..."
  mkdir -p lambda-layers-prisma-client/nodejs/node_modules/.prisma
  mkdir -p lambda-layers-prisma-client/nodejs/node_modules/@prisma

  echo "Prepare Prisma Client lambda layer ..."
  cp -r node_modules/.prisma/client lambda-layers-prisma-client/nodejs/node_modules/.prisma
  cp -r node_modules/@prisma lambda-layers-prisma-client/nodejs/node_modules

  echo "Remove Prisma CLI..."
  rm -rf lambda-layers-prisma-client/nodejs/node_modules/@prisma/cli

  echo "Compressing ..."
  pushd lambda-layers-prisma-client && tar -zcf /tmp/nodejs.tar.gz . && mv /tmp/nodejs.tar.gz ./nodejs.tar.gz

  echo "Remove unzipped files ..."
  rm -rf nodejs

  echo "Stats:"
  ls -lh nodejs.tar.gz

  popd
}

prepare_prisma_client_lambda_layer
When this script is run, it will create a Lambda layer with the `.prisma` and `@prisma` directories:
- `@prisma` is where the generators and wiring happen.
- `.prisma` contains the generated TypeScript interfaces.

It also removes `@prisma/cli` to keep the layer small, since we won't be running Prisma commands in the Lambda. In later versions of Prisma (>=2.16), this package is no longer needed, so you can omit this line.
This should be run in CI/CD after the Prisma packages have been installed and the Prisma Client generated.
Libs layer
This script should be in `./scripts/ci/prepare-libs-lambda-layer.sh`:
#!/bin/bash

function prepare_libs_lambda_layer() {
  echo "Cleaning up ..."
  rm -rf lambda-layers-libs

  echo "Creating layer ..."
  mkdir -p lambda-layers-libs/nodejs/node_modules/@libs
  mv build/libs build/@libs

  echo "Prepare libs lambda layer ..."
  cp -r build/@libs lambda-layers-libs/nodejs/node_modules

  echo "Compressing ..."
  pushd lambda-layers-libs && tar -zcf /tmp/nodejs.tar.gz . && mv /tmp/nodejs.tar.gz ./nodejs.tar.gz

  echo "Remove unzipped files ..."
  rm -rf nodejs

  echo "Stats:"
  ls -lh nodejs.tar.gz

  popd
}

prepare_libs_lambda_layer
This script should be run after we have compiled the libs using `yarn tsc`. Note that the libs are compiled into `build/libs`, but we rename that directory to `@libs` so it ends up as `node_modules/@libs` in the layer, matching the module import path that we use in the Lambda.
☁️ Deployment
At this point, we could manually upload the Lambda and its layers' zips to AWS but it would take forever to do it this way. I got you fam, don't worry. 😉
We'll set up some CI/CD goodness in this section.
Serverless
This is where Serverless Framework comes in: we can deploy everything with one command. Create a `serverless.yml` like this:
# ./serverless.yml
service: prisma-aws-lambda-deployment

provider:
  name: aws
  runtime: nodejs12.x
  stage: prod
  region: ${env:AWS_REGION}
  # vpc:
  #   securityGroupIds:
  #     - # FILLME
  #   subnetIds:
  #     - # FILLME
  #     - # FILLME

layers:
  TopicPrismaAwsNodeModules:
    path: lambda-layers-node_modules
  TopicPrismaAwsLibs:
    path: lambda-layers-libs
  TopicPrismaAwsPrismaClient:
    path: lambda-layers-prisma-client

functions:
  insertUserCron:
    handler: insertUser/handler.default
    memorySize: 512
    timeout: 290 # 4 minutes 50 seconds
    events:
      - schedule: rate(5 minutes)
    environment:
      NODE_ENV: production
      PRISMA_DATABASE_URL: ${env:PRISMA_DATABASE_URL}
      PRISMA_BINARY_TARGET: ${env:PRISMA_BINARY_TARGET}
    layers:
      - { Ref: TopicPrismaAwsNodeModulesLambdaLayer }
      - { Ref: TopicPrismaAwsLibsLambdaLayer }
      - { Ref: TopicPrismaAwsPrismaClientLambdaLayer }
In this file, we basically deploy the `insertUser` Lambda with all the layers attached, as a cron job that runs every 5 minutes. Remember to set your VPC settings so your Lambda can send requests to the RDS instance!
Other ways to invoke a Lambda function:
- Send a request to the Lambda URL
- Execute the AWS CLI invoke command (example below)
- Use the test feature in the AWS Lambda UI
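For the AWS CLI option, the command looks roughly like this. The function name below assumes Serverless Framework's default `<service>-<stage>-<function>` naming for this config; double-check the actual name in the AWS console:

$ aws lambda invoke --function-name prisma-aws-lambda-deployment-prod-insertUserCron /tmp/insert-user-output.json
$ cat /tmp/insert-user-output.json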
Also, this file is intended to be run from the `build/` directory. You will see an error if you try to run it from the project root. Don't worry, it'll make sense in CI/CD.
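If you want to mimic the pipeline locally just to understand the layout (purely illustrative; the real flow lives in the GitHub Actions workflow in the next section, and it assumes your shell has the AWS credentials and env vars from serverless.yml set), the rough sequence would be:

# Illustrative only: mirrors the CI steps described in the next section
yarn --frozen-lockfile
PRISMA_BINARY_TARGET=rhel-openssl-1.0.x yarn prisma generate
yarn tsc
./scripts/ci/prepare-node-modules-lambda-layer.sh   # in CI this runs against prod-only node_modules
./scripts/ci/prepare-prisma-client-lambda-layer.sh
./scripts/ci/prepare-libs-lambda-layer.sh

# Put everything where serverless.yml expects it, then deploy from build/lambdas
mv lambda-layers-node_modules lambda-layers-prisma-client lambda-layers-libs build/lambdas/
for layer in build/lambdas/lambda-layers-*; do
  tar -C "$layer" -xf "$layer/nodejs.tar.gz" && rm "$layer/nodejs.tar.gz"
done
cp serverless.yml build/lambdas/
cd build/lambdas && npx serverless deploy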
GitHub Actions
In the previous sections, I have been alluding to deploying in CI/CD. We will be using GitHub Actions, but you can do this with any other provider:
- Build runtime dependencies layer
- Build Prisma layer
- Build libs layer
- Build Lambda(s)
- Once all the previous steps are done, download the artifacts and deploy
Here's the GitHub action file for it:
# ./.github/workflows/deploy-lambdas.yml
name: Deploy lambdas

on:
  push:
    branches:
      - "master"

jobs:
  build-node_modules-lambda-layer:
    name: Bld. node_modules layer
    runs-on: ubuntu-18.04
    steps:
      - name: Check out repository
        uses: actions/checkout@v2
      - name: Set up Node.js 12.x
        uses: actions/setup-node@v1
        with:
          node-version: 12.x
      # Only install PROD packages i.e. no `@types/*` packages or dev-related packages
      - name: Install PROD packages
        run: yarn --production
      - name: Prepare lambda layer
        run: ./scripts/ci/prepare-node-modules-lambda-layer.sh
      - uses: actions/upload-artifact@v2
        with:
          name: lambda-layers-node_modules
          path: ./lambda-layers-node_modules

  build-prisma-client-lambda-layer:
    name: Bld. @prisma/client layer
    runs-on: ubuntu-18.04
    steps:
      - name: Check out repository
        uses: actions/checkout@v2
      - name: Set up Node.js 12.x
        uses: actions/setup-node@v1
        with:
          node-version: 12.x
      - name: Install ALL packages
        run: yarn --frozen-lockfile
      # Generate Prisma Client and binary that can run in a lambda environment
      - name: Prepare prisma client
        run: PRISMA_BINARY_TARGET=rhel-openssl-1.0.x yarn prisma generate
      - name: Prepare "@prisma/client" lambda layer
        run: ./scripts/ci/prepare-prisma-client-lambda-layer.sh
      - uses: actions/upload-artifact@v2
        with:
          name: lambda-layers-prisma-client
          path: ./lambda-layers-prisma-client

  build-libs-lambda-layers:
    name: Bld. @libs layer
    runs-on: ubuntu-18.04
    steps:
      - name: Check out repository
        uses: actions/checkout@v2
      - name: Set up Node.js 12.x
        uses: actions/setup-node@v1
        with:
          node-version: 12.x
      - name: Install ALL packages
        run: yarn --frozen-lockfile
      - name: Prepare prisma client
        run: PRISMA_BINARY_TARGET=rhel-openssl-1.0.x yarn prisma generate
      - name: Build assets
        run: yarn tsc
      - name: Prepare "@libs/*" lambda layer
        run: ./scripts/ci/prepare-libs-lambda-layer.sh
      - uses: actions/upload-artifact@v2
        with:
          name: lambda-layers-libs
          path: ./lambda-layers-libs

  build-lambdas:
    name: Build lambdas
    runs-on: ubuntu-18.04
    steps:
      - name: Check out repository
        uses: actions/checkout@v2
      - name: Set up Node.js 12.x
        uses: actions/setup-node@v1
        with:
          node-version: 12.x
      - name: Install ALL packages
        run: yarn --frozen-lockfile
      - name: Prepare prisma client
        run: PRISMA_BINARY_TARGET=rhel-openssl-1.0.x yarn prisma generate
      - name: Build lambdas
        run: yarn tsc
      - uses: actions/upload-artifact@v2
        with:
          name: build-lambdas
          path: ./build/lambdas

  deploy-lambdas:
    name: Deploy lambdas
    needs:
      [
        build-node_modules-lambda-layer,
        build-prisma-client-lambda-layer,
        build-libs-lambda-layers,
        build-lambdas,
      ]
    runs-on: ubuntu-18.04
    steps:
      - name: Check out repository
        uses: actions/checkout@v2
      - name: Set up Node.js 12.x
        uses: actions/setup-node@v1
        with:
          node-version: 12.x
      - uses: actions/download-artifact@v2
        with:
          name: build-lambdas
          path: ./build/lambdas
      - uses: actions/download-artifact@v2
        with:
          name: lambda-layers-node_modules
          path: ./build/lambdas/lambda-layers-node_modules
      - uses: actions/download-artifact@v2
        with:
          name: lambda-layers-libs
          path: ./build/lambdas/lambda-layers-libs
      - uses: actions/download-artifact@v2
        with:
          name: lambda-layers-prisma-client
          path: ./build/lambdas/lambda-layers-prisma-client
      - name: Unzip layers
        run: |
          tar -C ./build/lambdas/lambda-layers-node_modules -xf ./build/lambdas/lambda-layers-node_modules/nodejs.tar.gz
          rm -rf ./build/lambdas/lambda-layers-node_modules/nodejs.tar.gz
          tar -C ./build/lambdas/lambda-layers-libs -xf ./build/lambdas/lambda-layers-libs/nodejs.tar.gz
          rm -rf ./build/lambdas/lambda-layers-libs/nodejs.tar.gz
          tar -C ./build/lambdas/lambda-layers-prisma-client -xf ./build/lambdas/lambda-layers-prisma-client/nodejs.tar.gz
          rm -rf ./build/lambdas/lambda-layers-prisma-client/nodejs.tar.gz
      - name: Move serverless.yml
        run: mv serverless.yml ./build/lambdas/serverless.yml
      - name: Deploy lambdas and layers
        uses: aaronpanch/action-serverless@master
        with:
          args: deploy --debug
        env:
          SERVICE_ROOT: ./build/lambdas
          SERVERLESS_ACCESS_KEY: ${{ secrets.SERVERLESS_ACCESS_KEY }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID_CI }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY_CI }}
          AWS_REGION: ${{ secrets.AWS_REGION }}
          PRISMA_DATABASE_URL: ${{ secrets.PRISMA_DATABASE_URL }}
          PRISMA_BINARY_TARGET: rhel-openssl-1.0.x
Notes:
- `rhel-openssl-1.0.x` is the AWS Lambda binary target for Prisma Client.
- `serverless.yml` is moved into `./build/lambdas/serverless.yml` to make sure only the built Lambda folder gets packaged.
- Serverless zips layers automatically, so we need to unzip the layers before running the `serverless deploy` command.
- `secrets.*` values can be set using GitHub Actions secrets.
- `SERVERLESS_ACCESS_KEY` can be created through the Serverless dashboard.
- `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are created when you create an IAM user in AWS.
You will see something like this in the "Actions" tab of your GitHub repo if everything went according to plan:
And... that's it! If the gods of AWS are on your side, you'll start seeing new users being inserted into your `User` table with Prisma! 🎉
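If you want to double-check from your machine, a throwaway script like the one below works too. It is only a sketch: the file name is hypothetical, it assumes your machine is allowed to reach the RDS instance, and it should be run with `yarn ts-node scripts/check-users.ts` with `PRISMA_DATABASE_URL` pointed at prod:

// ./scripts/check-users.ts (a hypothetical helper, not part of the setup above)
import { createPrismaClient } from "../src/libs/createPrismaClient";

const main = async (): Promise<void> => {
  const prisma = createPrismaClient();
  // The cron runs every 5 minutes, so the newest rows should have recent timestamps/ids
  const users = await prisma.user.findMany({ orderBy: { id: "desc" }, take: 5 });
  console.log(users);
  await prisma.$disconnect();
};

main().catch(console.error);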
📚 General notes
- Make sure to secure your VPC, database, IAM, etc. When I wrote this post, I made everything public to make it easy to test. My database was hacked with a ransom note saying I should pay 1 Bitcoin to recover it! Who would have thought `root` and `password123` as credentials are an open invitation to evil hackers?! 🤭
- You can find your Lambda logs in AWS CloudWatch in your chosen region.
- If you want to add a new Lambda using this setup, you can create it in the `src/lambdas` directory and update `serverless.yml`.
- Always take care of your keys and secrets and do not store them in plain text in your app or config files.
- Always append `?connection_limit=1` to the Prisma database URL if you are planning to use it in a Lambda, to avoid exhausting the connection pool (see the example below). Read more about the recommended connection limit in the Prisma documentation.
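For example, the prod connection string used by the Lambda could look something like this (host and credentials are placeholders):

PRISMA_DATABASE_URL=mysql://USER:PASSWORD@YOUR-RDS-HOST:3306/lambda_test?schema=public&connection_limit=1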
✌️ Summary
I hope you enjoyed reading this post! If you have tips & tricks regarding any of the topics discussed or general feedback, don't be shy to send me a holler at @eddeee888
The full repo with working CI/CD can be found here: https://github.com/eddeee888/topic-prisma-aws-lambda-deployment
Top comments (9)
I'm having the following error
Run yarn prisma:generate:prod
yarn prisma:generate:prod
shell: /bin/bash -e {0}
yarn run v1.22.10
$ PRISMA_BINARY_TARGET=rhel-openssl-1.0.x prisma generate
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
Error: Unexpected token r in JSON at position 0
error Command failed with exit code 1.
Please help me
I solved it by typing the string directly into the schema.prisma file rather than using an environment variable. I have a Docker localstack environment set up so I will never need my native binary for it, so it works for me.

datasource db {
  provider = "sqlserver"
  url      = env("PRISMA_DATABASE_URL")
}

generator client {
  provider      = "prisma-client-js"
  binaryTargets = ["rhel-openssl-1.0.x"]
}
Hello @macbird !
I have also encountered this error between Prisma versions 2.23 to 2.25.
If you are on Prisma version 2.26, you can do something like this:
Hope it works for you!
Here's the related issue on Prisma GitHub: github.com/prisma/prisma/issues/8112
How do you test Lambda layers locally, since localstack in the free version doesn't support layers? Is there any other plugin you use?
If your Lambda is not bound to an HTTP event, you can use a command like the following:
npx serverless invoke local -s dev --function hello --path ./samples/hello.sample.json
where hello.sample.json is a JSON file with the payload.
Have you tried serverless-offline? There is a sort of "workaround" for layers locally, using Docker. Here is the link and a small preview of the docs:
Hope this helps you!
Regards!
Hey Eddy, will using Lambda layers allow me to see my Lambda code on AWS? Because when I have Prisma zipped with my code, AWS says the deployment is too big.
Thanks
Hi @mistvan! Sorry for the late response, I just came across this article while researching how to deploy Lambda layers with Serverless Framework.
The answer to your question is YES, it will. As the size of the zipped code decreases, you will be able to see the code of the Lambda in the AWS console.
I'm having the same scenario as you. Hopefully I will be able to solve it soon :D
Best regards!