
Deploy a Node app to AWS ECS with Dynamic Port mapping

Note: There are a couple of pre-requisites required for this to work.

  1. AWS CLI to push your Docker image to the AWS repository. Install it and set up your credentials using the aws configure command.
  2. Docker Community Edition for building your app image.
  3. I have used Node, so node and npm are required, but you can use any backend of your choice, like Python or Go, and build your Docker image accordingly.
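
If you want to double-check that everything is in place before starting, a few quick commands will do (any reasonably recent versions should work):

aws --version
aws configure        # sets up your access key, secret key and default region
docker --version
node --version
npm --version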

I personally love Docker. It's a beautiful way to deploy your app to production. And the best part is that you can test your production app in the same environment on your local machine as well!

This picture sums it all up :)

The birth of Docker

Today I will show you how to deploy your Node app bundled in a Docker image via AWS ECS (Elastic Container Service).

Note: I recommend that you try this on a paid AWS account that you are currently using in production or in your work environment. But if you are on the free tier, please just read along with this tutorial, because creating these services will cost you money!!!

Now that I have warned you, let's login into the AWS console and select ECS.

Select ECS from the AWS service list

This will take you to the following page. Do watch the introductory video, it's awesome!

The AWS ECS home page

We are now interested in the list on the left. First of all, we need to create a repository. A repository in AWS is similar to the one in Docker Hub where we have all sorts of images like MongoDB, Node, Python etc. with their specific versions. But here, we will build a custom Docker image of our Node app.

Click on Repositories and it will take you to the ECR (Elastic Container Registry) page where you can store all your custom Docker images.

Click on Create repository at the top right and you will then get this page.

Create a repository in ECR

In the input, add a name of your choice and then click on Create repository. Now you have a repository of your own and you can push your Docker image containing your app to this repository. I have created a repository and named it node-simple.

My node app repository on ECR

Notice the URI field. That's an important field and we will require it when we push our Docker image to ECR from our local machine.

Click on the repository and it will take you to the images list. Here you will be able to view the app image that we will push shortly.

Now let's move on to creating our simple Node app.

Create a new folder, open that folder in your terminal and then run npm init -y to create a package.json file. Then create a file named index.js and add the following contents to it.

const express = require('express')

const PORT = process.env.PORT || 3000

const app = express()

app.get('/', (request, response) => {
  return response.json({
    data: {
      message: `API is functional`,
    },
  })
})

app.listen(PORT, () => console.log(`App running on port ${PORT}`))

We have spun up a simple Express server with a GET / route that returns some JSON.

Now run npm i express to install the express package.

Lastly, add a start script in the scripts field of your package.json file.

"scripts": {
  "start": "node index.js"
}

Now, run npm start in your terminal and the app will run on http://localhost:3000/ by default if you have not specified a PORT in your environment. You will see the JSON message API is functional returned in the browser.
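
You can also hit the route from another terminal to confirm the response (assuming the default port 3000):

curl http://localhost:3000/
# {"data":{"message":"API is functional"}}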

Let's move on to creating our Dockerfile. This is essential for building our image and pushing it to ECR. Create a file named Dockerfile in our folder and add the following content.

FROM mhart/alpine-node:10.16.3

WORKDIR /app

COPY package*.json ./

RUN npm ci

COPY index.js .

CMD ["npm", "start"]

We are using alpine-node for a smaller image size. After setting our working directory to /app in the Docker image, we are copying our package.json as well as package-lock.json files for deterministic builds. Then we run the npm ci command to ensure the same package versions are installed as in our lockfile. We then copy the index.js file over to our image and lastly, we add our start command as the main command to be run in our image.
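
Before pushing anything to AWS, it's worth building and running the image locally to make sure it behaves the same way (the node-simple tag here is just an example; any tag works locally):

docker build -t node-simple .
docker run --rm -e PORT=80 -p 3000:80 node-simple
# in another terminal:
curl http://localhost:3000/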

Go back to the AWS console and click on the repository you have created. You will find a button on the right named View push commands.

Your node app repository

Click that and you will get a list of commands to be run on your machine to push the image to AWS ECR in the following manner.

Commands for pushing your app to ECR

Copy the commands and run them one by one in your node app folder. I'm in the us-west-2 region, but you can use any region that supports ECS (which is most of them, btw).

These commands, when run in order:

  1. Log you in to AWS with the credentials you have provided.
  2. Build your app into a Docker image.
  3. Tag the image with respect to the repository you have created.
  4. Push the image to your repository.
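
For reference, the push commands look roughly like this with AWS CLI v2 (the console shows the exact commands for your account ID, region and CLI version, so prefer those):

aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.us-west-2.amazonaws.com
docker build -t node-simple .
docker tag node-simple:latest <aws_account_id>.dkr.ecr.us-west-2.amazonaws.com/node-simple:latest
docker push <aws_account_id>.dkr.ecr.us-west-2.amazonaws.com/node-simple:latest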

After successfully completing the above steps, you will be able to see your Docker image in your repository like this.

The successfully pushed Docker image in your repository

That covers building and pushing your image. Now let's move on to creating a cluster for our app.

Select Clusters under Amazon ECS and you will be redirected to the clusters list where we don't have any clusters right now. Let's click on the Create Cluster button and then select the EC2 Linux + Networking template and click on Next step.

In this section, give a name to your cluster and in the Instance Configuration section, select the following values.

Instance configuration for your cluster

Note: You need to select a Key Pair if you want to SSH into your instances. It's useful for debugging purposes.

Leave the other options as they are. This will create a VPC for you and assign an IAM role to your EC2 instances as well, so that ECS can connect to your instances and run your Docker images.

You will see something like this. I have named my cluster node-simple.

Cluster creation in progress

After it's completed, click on View Cluster and it will take you to your cluster page, where its status will be shown as Active.

You can go to EC2 from your AWS services and you will be able to see that two t2.micro instances have been created. You can SSH into them as well with the public IP of those instances.
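
If you prefer the CLI, you can also confirm that the instances have registered with the cluster (assuming the cluster is named node-simple and you are in us-west-2):

aws ecs list-container-instances --cluster node-simple --region us-west-2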

EC2 instances created by our ECS cluster

Go back to ECS, and on the left, you will see something called Task Definitions. Click that and you will be taken to a page where you can create a task definition for your cluster.

Task definitions page under Amazon ECS

In simple terms, a task definition is a connection between your ECS cluster and the Docker image residing in ECR. Currently we do not have any task definition so let's create one.

Click on Create new Task Definition and you will be given two options, Fargate and EC2. Select EC2 and proceed to the Next step.

Enter a name for your task definition, leave everything as default until you come to this section.

The Elastic Inference section in Task definition creation

This section helps you specify all the necessary values that your Docker image requires. Click on Add Container and you will see something like this.

Adding a container to your Task Definition

Give a name to your container and in the Image field, copy the URI of the Docker image that you had pushed to ECR and paste it here.

In the port mappings field, add 80 as the Container port and 0 as the Host port. Now you must be wondering: why are we passing 0 as the Host port?

It's because we need our EC2 instance to assign dynamic host ports that map to port 80 of our Docker container, so that multiple containers can run on the same EC2 instance. A host port of 0 means a random port between 32768 and 65535 will be assigned on the EC2 instance. These are also known as ephemeral ports.
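
You can see the same behaviour with plain Docker on your machine: publishing only the container port lets Docker pick a random host port for you (this assumes you built the image locally as node-simple; the container name is just for illustration):

docker run -d --name port-demo -e PORT=80 -p 80 node-simple
docker port port-demo 80     # prints something like 0.0.0.0:32768
docker rm -f port-demo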

Also, we have specified port 80 for our Docker container, so we have to tell our Node server to run on 80 somehow. How can we achieve that... You're right, using environment variables!

Scroll below and you will find the Environment section. Add your environment variable in the following manner.

Specify the PORT 80 in the environment section

Node will read this PORT using the process.env.PORT variable we have specified in our code.

Leave everything as is and click on Add. You will see your container added along with the ECR image URI that you have passed. Leave the rest of the fields as they are and click on Create. You will be redirected to the task definition page and you will see the task definition along with its version and all the options we had provided in the previous section.
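
If you want to double-check what got registered, the CLI can show the task definition, including the 80-to-0 port mapping and the PORT environment variable (assuming you named the task definition node-simple):

aws ecs describe-task-definition --task-definition node-simple --region us-west-2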

Now let's add a load balancer that will balance the traffic between our two EC2 instances.

Go to the EC2 service and select Load Balancers from the left section under LOAD BALANCING. It will take you to the Load balancers listing. Right now, we don't have any. So let's create one.

Click on Create Load Balancer and you will get an option to select the load balancer type. Select the Application Load Balancer (ALB), as it is highly advanced and supports dynamic port mapping on our EC2 instances.

After clicking on Create, you will be presented with the load balancer configuration. Give your ALB a name, and leave everything as it is except the VPC. Select the VPC the ECS cluster created for you instead of the default one, else the ALB will not work properly. Check all the Availability Zones, as our instances will be spun up across all of them for high availability.

Configure the basic settings of your Load Balancer

Click Next. You will get a warning that we are using an insecure listener, i.e. port 80. In production, use an SSL certificate and configure your ALB to listen on 443 (HTTPS) as well. For now, let's ignore this warning and click Next.

Here, you have to configure a Security Group (SG) for your ALB. Let's create a new SG and open the HTTP port 80 to the world as the users will be using the ALB route for accessing our Node API. Add the HTTP rule for our ALB.

Open Port 80 of the Load Balancer for our users

Click Next. This is an important part. Here, we need to create a target group to specify the health check route and the PORT the ALB will be routing traffic on to our EC2 instances.

Create a Target Group for our Load Balancer

Leave everything as is and click Next. You will be taken to the Register Targets page to register our instances in the Target Group we created on the previous page.

Do not register any targets here, as that will be done automatically in the final step when we are creating our service.

Click Next, review the parameters that you have added and then click on Create. This will create the load balancer and give it a DNS which we can call our Node API from.

The created load balancer with its DNS endpoint

Next, we need the EC2 instances to communicate with the ALB so that it can perform health checks and route the traffic to our EC2 instances. For this, we need to add a rule in our EC2 security group.

Click on Security Groups in the left menu under NETWORK & SECURITY. You will find two security groups. One for the EC2 instances and one for the Load Balancer. Click on the EC2 security group which was created by our cluster.

The EC2 and Load balancer security groups

A menu will open below. Select the Inbound tab and click on Edit. This will open a dialog box for editing our security rules. We will delete the rule in place and add our own. Select Custom TCP rule from the dropdown and, in the port range, add 32768-65535. In the Source field, type sg and you will get a dropdown of the security groups present. Select the load balancer SG and add a description of your choice.

The rule will look something like this.

The inbound rule for our EC2 instances
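
The same rule can also be added from the CLI if you prefer (the security group IDs below are placeholders for your EC2 SG and your ALB SG):

aws ec2 authorize-security-group-ingress --group-id <ec2-sg-id> --protocol tcp --port 32768-65535 --source-group <alb-sg-id>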

Note: Also add the SSH port 22 rule if you want to SSH into the EC2 instance.

Click on Save. This completes the Load Balancer setup and takes us into the final part. Creating a service.

Go back to ECS, select your cluster, and you will see that the very first tab open is the Services tab. Click on Create.

Select EC2 as the launch type and give your service a name. You will notice that the task definition is selected automatically. Set the Number of Tasks to 2. This will launch two containers of our Node app image across our EC2 instances. Leave the rest of the values as is and click on Next step.

This step is where we configure our Load Balancer. Select Application Load Balancer, as that is the type we have created. You will notice that our LB is automatically selected in the Load Balancer Name. Below that, you will find the container to load balance on.

Container to be added for load balancing

You will see that our container name and the port mapping is already selected. Click on Add to load balancer. A new section will be opened.

In the Production listener port, select 80:HTTP from the dropdown. And in the Target group name, select the target group that we had created while creating the load balancer.

On selecting this, it will load all the values that we had added in the target group while creating our ALB.

In the final section, uncheck the Enable service discovery integration as it's not needed. Click on Next step.

You will be taken to the auto scaling configuration. Do not set up auto scaling now; let that be an experiment for you after you complete this :)

Click on Next step and you will be taken to the Review page for the service that will spin up your Node app image on the EC2 instances.

Finally, click on Create Service. This will create your service and run the task definition that we have created. After it's completed, click on View Service. You will see two tasks in the PENDING state.

The created service spins off two tasks

After some time when you refresh, the status will change to RUNNING. Click on the Events tab. You will get a log of the service adding the tasks to our EC2 instances.

The service logs after spinning the tasks

Once you get something like this, where the service has reached a ready state, you're good to go!

Check the Target Groups in the LOAD BALANCING section of the EC2 service. You will see that the service we have created has automatically registered two targets in our ALB target group and they are healthy.

The EC2 instances registered in the target group of our ALB

Check out the ports, they have been randomly assigned, so that's our Dynamic port mapping in action!
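
You can also list those ephemeral ports from the CLI (the target group ARN is a placeholder; grab the real one from the Target Groups page or from aws elbv2 describe-target-groups):

aws elbv2 describe-target-health --target-group-arn <target-group-arn>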

Last but not least, copy the DNS name of your ALB and paste it in your browser. You will see that your Node app is running and you get the API is functional message. Yay!!!
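
Or, from the terminal (replace the hostname with your ALB's DNS name):

curl http://<your-alb-dns-name>/
# {"data":{"message":"API is functional"}}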

This is how we can deploy our application as a Docker Image via AWS ECS.

Thank you for reading.
