This tutorial shows how to deploy a Node.js application to the AWS cloud using the AWS Cloud Development Kit (CDK). Our application will use a Postgres database, but the code from this tutorial can serve as a basis for deploying any database alongside your application.
I will not cover the AWS CDK basics, since there are plenty of good resources that explain everything from scratch and show how to bootstrap an AWS CDK project.
If you need to check the basics, here are some good sources:
What is AWS CDK (Cloud Development Kit) and why it's awesome
AWS CDK repo
Here is what we are going to do:
- Create secrets using AWS Secrets Manager and read them from our custom stack
- Create an RDS stack with the database definition
- Create an ElasticBeanstalk stack for application deployment
- Create a VPC stack and connect everything
Note: This tutorial is inspired by two other posts. Without them, it would have taken me much longer to figure everything out:
I tell you a secret: Provide Database credentials to an ECS Fargate task in AWS CDK
Complete AWS Elastic Beanstalk Application through CDK (TypeScript)
So without further ado, let's get started!
Create secrets in AWS Secrets Manager
Go to your AWS Console, search for the Secrets Manager service, and create two secrets to store your username and password for the database connection. AWS suggests following its naming conventions, so let's use prod/service/db/user as the name for the user secret and prod/service/db/password as the name for the password secret.
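If you prefer the command line, the same secrets can be created with the AWS CLI; a quick sketch (the secret values are placeholders, replace them with your own):

```shell
# Create the username secret; the command prints the secret's ARN in its JSON output
aws secretsmanager create-secret \
  --name prod/service/db/user \
  --secret-string "mydbuser"

# Create the password secret the same way
aws secretsmanager create-secret \
  --name prod/service/db/password \
  --secret-string "mydbpassword"
```

Each call returns a JSON document whose ARN field is exactly the value you will paste into the CredentialsStack.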
Once you create those secrets, note the ARNs you get back; they will be required to set up our connection.
Create a stack for keeping credentials
Let's create a file called lib/credentials-stack.ts in which we will read the credentials that were saved in Secrets Manager.
import * as cdk from "@aws-cdk/core";
import { ISecret, Secret } from "@aws-cdk/aws-secretsmanager";

export interface Credentials {
  username: ISecret;
  password: ISecret;
}

export class CredentialsStack extends cdk.Stack {
  readonly credentials: Credentials;

  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const secretUsername = Secret.fromSecretCompleteArn(
      this,
      "BackendPersistenceUsername",
      // Pass your username secret ARN
      ""
    );

    const secretPassword = Secret.fromSecretCompleteArn(
      this,
      "BackendPersistencePassword",
      // Pass your password secret ARN
      ""
    );

    this.credentials = {
      username: secretUsername,
      password: secretPassword,
    };
  }
}
We have made a new stack that reads the secrets required for connecting to our database and keeps them in the credentials property attached to the stack. Later on, we will be able to pass those credentials to other stacks.
Create an RDS stack with a Postgres database
Now we need to create a stack that will hold the definition of our Postgres database. For that, let's create a file called lib/rds-stack.ts.
import * as cdk from "@aws-cdk/core";
import * as ec2 from "@aws-cdk/aws-ec2";
import * as rds from "@aws-cdk/aws-rds";
import { Credentials } from "./credentials-stack";

export interface RdsStackProps extends cdk.StackProps {
  credentials: Credentials;
  vpc: ec2.Vpc;
}

export class RdsStack extends cdk.Stack {
  readonly postgreSQLinstance: rds.DatabaseInstance;

  constructor(scope: cdk.Construct, id: string, props: RdsStackProps) {
    super(scope, id, props);

    const username = props.credentials.username.secretValue.toString();
    const password = props.credentials.password.secretValue;

    this.postgreSQLinstance = new rds.DatabaseInstance(this, "Postgres", {
      engine: rds.DatabaseInstanceEngine.postgres({
        version: rds.PostgresEngineVersion.VER_12_4,
      }),
      instanceType: ec2.InstanceType.of(
        ec2.InstanceClass.T2,
        ec2.InstanceSize.MICRO
      ),
      vpc: props.vpc,
      vpcPlacement: {
        subnetType: ec2.SubnetType.PUBLIC,
      },
      storageType: rds.StorageType.GP2,
      deletionProtection: false,
      databaseName: username,
      port: 5432,
      credentials: rds.Credentials.fromPassword(username, password),
    });

    this.postgreSQLinstance.connections.allowDefaultPortFromAnyIpv4();
    this.postgreSQLinstance.connections.allowDefaultPortInternally();
  }
}
Since any database in AWS must always be created in the scope of some VPC, we have defined an interface for our stack's props and specified that a vpc must be passed when instantiating this stack. We also need to pass the credentials, which we keep in the credentials-stack.
The Postgres instance we have defined uses a basic t2.micro instance type and is placed in a public subnet, so our database will be reachable from the internet (convenient for a tutorial, but not recommended for production). Please note that we explicitly allow connections by invoking the allowDefaultPortFromAnyIpv4 and allowDefaultPortInternally methods on the instance.
Create a deployment with ElasticBeanstalk
We can then create a stack responsible for copying our application files to S3 and deploying them to the ElasticBeanstalk service. Let's create a file called lib/ebs-stack.ts and paste in the code presented below.
import * as cdk from "@aws-cdk/core";
import * as EB from "@aws-cdk/aws-elasticbeanstalk";
import * as S3Assets from "@aws-cdk/aws-s3-assets";
import { Credentials } from "./credentials-stack";

export interface EbsStackProps extends cdk.StackProps {
  dbCredentials: Credentials;
  dbHost: string;
  dbPort: string;
  dbName: string;
}

export class EbsStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props: EbsStackProps) {
    super(scope, id, props);

    const username = props.dbCredentials.username.secretValue.toString();
    const password = props.dbCredentials.password.secretValue;

    // Here you can specify any other ENV variables which your application requires
    const environmentVariables: Record<string, any> = {
      POSTGRES_USER: username,
      POSTGRES_PASSWORD: password,
      POSTGRES_DB: props.dbName,
      DB_HOST: props.dbHost,
      DB_PORT: props.dbPort,
      DB_SCHEMA: username,
    };

    const environmentOptions = Object.keys(environmentVariables).map(
      (variable) => {
        return {
          namespace: "aws:elasticbeanstalk:application:environment",
          optionName: variable,
          value: environmentVariables[variable],
        };
      }
    );

    const applicationName = "Server";

    const assets = new S3Assets.Asset(this, `${applicationName}-assets`, {
      // Change path to your application's dist files
      // In my case I've created a monorepo, so the path was like ../server/dist
      path: "path/to/your/application/dist",
      exclude: ["node_modules"],
    });

    const application = new EB.CfnApplication(this, `${applicationName}-app`, {
      applicationName,
    });

    const appVersionProps = new EB.CfnApplicationVersion(
      this,
      `${applicationName}-version`,
      {
        applicationName,
        sourceBundle: {
          s3Bucket: assets.s3BucketName,
          s3Key: assets.s3ObjectKey,
        },
      }
    );

    const options: EB.CfnEnvironment.OptionSettingProperty[] = [
      {
        namespace: "aws:autoscaling:launchconfiguration",
        optionName: "IamInstanceProfile",
        value: "aws-elasticbeanstalk-ec2-role",
      },
      {
        namespace: "aws:ec2:instances",
        optionName: "InstanceTypes",
        value: "t3.small",
      },
    ];

    new EB.CfnEnvironment(this, `${applicationName}-environment`, {
      environmentName: "develop",
      applicationName: application.applicationName || applicationName,
      solutionStackName: "64bit Amazon Linux 2 v5.2.3 running Node.js 12",
      optionSettings: [...options, ...environmentOptions],
      versionLabel: appVersionProps.ref,
    });

    appVersionProps.addDependsOn(application);
  }
}
The first step is to create an S3 asset containing the source files for our application. The asset is uploaded before the CloudFormation template is deployed, so the bundle is available to ElasticBeanstalk. Then the environment for the application is created and the application is assigned to it. We also create an application version pointing at the uploaded source bundle, and use addDependsOn to ensure the application exists before the version is created.
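To make the environment-variable mapping concrete, here is the same transformation as a standalone sketch, stripped of CDK types (OptionSetting is a hand-rolled stand-in for EB.CfnEnvironment.OptionSettingProperty):

```typescript
// Stand-in for EB.CfnEnvironment.OptionSettingProperty
interface OptionSetting {
  namespace: string;
  optionName: string;
  value: string;
}

// Converts a map of ENV variables into ElasticBeanstalk option settings,
// mirroring the environmentOptions mapping in EbsStack above.
function toOptionSettings(env: Record<string, string>): OptionSetting[] {
  return Object.keys(env).map((variable) => ({
    namespace: "aws:elasticbeanstalk:application:environment",
    optionName: variable,
    value: env[variable],
  }));
}

// Each key/value pair becomes one option setting under the
// application:environment namespace.
console.log(toOptionSettings({ DB_HOST: "localhost", DB_PORT: "5432" }));
```

ElasticBeanstalk exposes everything in the aws:elasticbeanstalk:application:environment namespace to the application as ordinary environment variables, which is why this mapping is all that is needed to wire up configuration.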
Create a VPC stack and connect all the stacks
A VPC is like a private network within which our services can communicate with each other. Any database in AWS must always be created in the scope of some VPC, so let's define a stack for that. Create a file called lib/vpc-stack.ts. This one will be pretty short:
import * as cdk from "@aws-cdk/core";
import * as ec2 from "@aws-cdk/aws-ec2";

export class VpcStack extends cdk.Stack {
  readonly vpc: ec2.Vpc;

  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    this.vpc = new ec2.Vpc(this, "VPC");
  }
}
We have created a new default VPC instance and assigned it to the vpc property on VpcStack.
Now that we have all of the parts ready, we can connect them by creating an executable stack in bin/infrastructure-stack.ts:
#!/usr/bin/env node
import * as cdk from "@aws-cdk/core";
import { EbsStackProps, EbsStack } from "../lib/ebs-stack";
import { CredentialsStack } from "../lib/credentials-stack";
import { RdsStack } from "../lib/rds-stack";
import { VpcStack } from "../lib/vpc-stack";

const app = new cdk.App();

const vpcStack = new VpcStack(app, "VpcStack");
const vpc = vpcStack.vpc;

const credentialsStack = new CredentialsStack(app, "CredentialsStack");

const rdsStack = new RdsStack(app, "RdsStack", {
  credentials: credentialsStack.credentials,
  vpc,
});

const dbInstance = rdsStack.postgreSQLinstance;

const ebsEnvironment: EbsStackProps = {
  dbCredentials: credentialsStack.credentials,
  dbName: credentialsStack.credentials.username.secretValue.toString(),
  dbHost: dbInstance.instanceEndpoint.hostname.toString(),
  dbPort: "5432",
};

new EbsStack(app, "EbsStack", ebsEnvironment);
We import all of our custom stacks and create instances of VpcStack and CredentialsStack. Then we create a new database instance using the RdsStack; do not forget to pass the VPC and credentials as props. Finally, we create an EbsStack instance and pass it every environment variable required for the database connection.
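On the application side, the Node.js service can assemble its database connection settings from the variables the stack injects. A minimal sketch (the variable names match the EbsStack above; buildDatabaseUrl is a hypothetical helper, and the fallback values are only for local runs):

```typescript
// Builds a Postgres connection URL from the ENV variables set by EbsStack.
// The fallbacks are placeholders for running the app locally.
function buildDatabaseUrl(env: Record<string, string | undefined>): string {
  const user = env.POSTGRES_USER ?? "postgres";
  const password = env.POSTGRES_PASSWORD ?? "postgres";
  const host = env.DB_HOST ?? "localhost";
  const port = env.DB_PORT ?? "5432";
  const db = env.POSTGRES_DB ?? "postgres";
  // Encode the password in case it contains URL-reserved characters
  return `postgresql://${user}:${encodeURIComponent(password)}@${host}:${port}/${db}`;
}

// In the deployed application this would read from process.env
console.log(buildDatabaseUrl(process.env as Record<string, string | undefined>));
```

Most Postgres clients (pg, TypeORM, Prisma, etc.) accept either such a connection URL or the individual variables directly.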
With some luck, running yarn build && cdk deploy --all will package your application and deploy it via CloudFormation. In the AWS Console, you can then verify that the ElasticBeanstalk and RDS services were created and are running correctly.
Thanks for reading, and feel free to contact me!