Raphael Jambalos

Automate Deployments with AWS CodePipeline

In the past two posts, we built a Rails app in Docker and deployed it to Amazon ECS. In this post, we use AWS CodePipeline to automate building the Docker image and deploying our application to our ECS service.

If you want to follow along without having to create the Rails application from the first post, you can clone the latest version of the repository from my GitHub page.

1 | Concepts

Why even bother with CI/CD Pipelines? 🀷🏻

As a project gets bigger, more and more time is eaten up by a manual deployment process. Imagine your developers having to SSH into 5-10 EC2 instances running your app just to run git pull origin develop and restart the application. This process can take 5-20 minutes of a developer's time per deploy. And if they make a mistake, they have to repeat the process all over again.

Having a CI/CD pipeline allows you to have a reliable and consistent deployment process. Your developers no longer have to worry about making mistakes in deployment because they no longer have to do it manually. They just push to master and go. 🟒

CI is Continuous Integration

CI is a coding philosophy that encourages teams to merge small changes frequently. Every time these changes are merged, an automated build and a suite of tests run to prove this change is compatible with the existing code.

CD is Continuous Delivery

CD is an extension to CI that allows new changes to be deployed rapidly. Every time you merge to master, the CI runs. The app is then deployed to a staging environment for testing. A manual approval process stands in the way of this change being deployed straight to production.

CD is also Continuous Deployment 🚚

Once your team reaches a level of confidence in your test suite, you can have changes deployed straight to production.

What we will do in this post

For this post, we will create a 3-stage pipeline:

  • Source: Our code will be stored in GitHub. When developers push to master, the rest of the CI/CD pipeline is triggered.
  • Build: In this stage, we simply build our Docker image and push that image to ECR.
  • Deploy: We will deploy to the "web" ECS service. To keep things simple, we won't try to deploy to the "sidekiq" ECS service that we made in the previous post.

[Diagram: the three-stage pipeline - Source, Build, Deploy]

2 | buildspec.yml changes

To create a CI/CD pipeline, we have to create a buildspec.yml file in the root directory of our project. This file serves as a set of instructions for CodeBuild on which commands to use for the build process. We can summarize what our buildspec.yml does in the simple steps below:

  • Authenticate with AWS ECR
  • Create shared folders
  • Build and tag the Docker image
  • Push the Docker image to ECR
  • Create an artifact named "imagedefinitions.json" that specifies the name of the container to update inside the ECS service. If you've been following this blog series, it should be named "web". If you want to look for the name of the container in your own ECS service, the image below should look familiar:

[Screenshot: the container name under the task definition's container definitions]

If you want to learn more about CodeBuild, you may visit my earlier post on how CodeBuild works.



version: 0.2 

phases: 
  install:
    runtime-versions:
        docker: 18
    commands:
      - nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://127.0.0.1:2375 --storage-driver=overlay2&
      - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"
  pre_build:
    commands:
      - echo Logging in to Amazon ECR....
      - aws --version
      - $(aws ecr get-login --no-include-email --region $CI_REGION)

      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - echo "The commit hash is $COMMIT_HASH"
      - IMAGE_TAG=${COMMIT_HASH:=latest}

      - echo "Creating folders for pid files"
      - mkdir shared
      - mkdir shared/pids
      - mkdir shared/sockets

  build: 
    commands: 
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $REPO_URL:latest .
      - docker tag $REPO_URL:latest $REPO_URL:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPO_URL:latest
      - docker push $REPO_URL:$IMAGE_TAG
      - echo Writing image definitions file...
      - printf '[{"name":"web","imageUri":"%s"}]' $REPO_URL:$IMAGE_TAG > imagedefinitions.json
artifacts:
    files: imagedefinitions.json


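A quick note on the ECR login line: aws ecr get-login only exists in AWS CLI v1 and was removed in v2. If the build image you pick ships CLI v2, a rough equivalent (reusing the same $CI_REGION and $REPO_URL variables) would be:

# AWS CLI v2 replacement for the v1-only "get-login" command.
# ${REPO_URL%%/*} strips the repository path, leaving only the registry host.
aws ecr get-login-password --region $CI_REGION | \
  docker login --username AWS --password-stdin ${REPO_URL%%/*}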

3 | CodePipeline Setup

Earlier in this blog post series, we created two ECS services: one for web, and another for Sidekiq. In this post, we create a CI/CD pipeline for the web service only. In the next post, I will modify this CI/CD pipeline to also update the Sidekiq service.

(3.1) In the services menu, search for CodePipeline. We must create the pipeline in the same region as the ECS cluster.

[Screenshot: searching for CodePipeline in the services menu]

(3.2) Then, click "Create Pipeline."

[Screenshot: the "Create pipeline" button]

(3.3) Name the pipeline "ruby-docker-app-pipeline". Here, CodePipeline defaults to creating a new service role.

[Screenshot: pipeline name and service role settings]

(3.4) For the source stage, choose the source provider where your code is stored. For me (and for you as well, if you've been following the series), that's GitHub. Then, click "Connect to GitHub."

I think CodeCommit offers the best integration with CodePipeline and other AWS services. But it hasn't quite caught up with GitHub / Bitbucket in terms of ease of use.

[Screenshot: choosing GitHub as the source provider]

(3.5) A pop-up screen then appears, asking you to authorize access to your GitHub account.

[Screenshot: the GitHub authorization pop-up]

(3.6) Then, specify a branch. A push to this branch triggers CodePipeline (via GitHub webhooks) to start the CI/CD process. We usually dedicate the master branch to the CI/CD process because we want the code in the master branch deployed to our production environment. But there are also times when we create CI/CD pipelines for other branches.

Then, click Next.

[Screenshot: repository and branch selection]

(3.7) In the build stage, we use CodeBuild to build and push our Docker image. To do this, we create a new build project by clicking "Create Project."

[Screenshot: the build stage with the "Create project" link]

(3.8) Name the project "ruby-docker-app-demo," and add any description you like.

[Screenshot: CodeBuild project name and description]

For the environment, choose Ubuntu as the OS and the latest aws/codebuild/standard:4.0 image. Make sure to enable the privileged flag so we can build Docker images.

We aren't very particular about our environment because we don't run any Ruby-based code during the build. But real-world CI/CD pipelines usually run unit tests in the build stage, so you would have to pick the environment image that ships the version of the programming language you want to run the tests with. You can find the list of available images here.

We also create a new service role that CodeBuild uses when it runs our build process.

Then click, "Next."

[Screenshot: CodeBuild environment settings with the privileged flag enabled]

Then, we are redirected back to the CodePipeline page. Here you see that a CodeBuild project has been added. After that, click Next.

[Screenshot: the build stage with the new CodeBuild project attached]

(3.9) Now, we are on the deploy stage. We specify the cluster name and the service name of the ECS Service we want to deploy to.

[Screenshot: the ECS cluster and service names in the deploy stage]

(3.10) Then, review the configurations. Once you are satisfied, hit "Create."

[Screenshot: the review page]

(3.11) Once the pipeline has been created, it automatically tries to run your CI/CD process. It shows you its progress in the visual representation of your pipeline.

[Screenshot: the pipeline's first run in progress]

(3.12) After a few minutes, the build stage fails because we haven't added our environment variables, and we haven't updated our CodeBuild service role.

To solve this, go to your CodeBuild project, and click on the service role under Build details.

[Screenshot: the CodeBuild service role under Build details]

Then, we attach the AmazonEC2ContainerRegistryFullAccess policy to this role. This policy allows CodeBuild to push images to ECR so we can use it in the deploy stage later on.

[Screenshot: attaching the AmazonEC2ContainerRegistryFullAccess policy to the role]
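If you prefer doing this from the CLI, the command below makes the same attachment. The role name here is only a guess at what CodeBuild auto-generates for you; copy the actual role name from the Build details tab of your project.

# Attach the ECR managed policy to the CodeBuild service role
# (the role name is an assumption -- use the one shown under Build details)
aws iam attach-role-policy \
  --role-name codebuild-ruby-docker-app-demo-service-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess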

(3.13) Another thing we have to do is add environment variables. Looking closely at our buildspec.yml in Section 2, the build process needs two variables: $REPO_URL and $CI_REGION.

To do this, go to CodeBuild and find the build project that we created in step 3.8. If you followed the naming, it should be named "ruby-docker-app-demo." Then, under the "Edit" dropdown on the upper right, click "Environment."

[Screenshot: the "Environment" option under the Edit dropdown]

On the next page, expand the "Additional Configuration" dropdown. There, find the environment variables section. Add the two environment variables REPO_URL and CI_REGION.

  • CI_REGION - For this variable, just put the AWS region you are currently in
  • REPO_URL - Add the URL of your ECR repository. The format of the URL looks like the example below. If you followed this blog series, you would have created an ECR repository in section 3 of this post. If you aren't sure of the exact value, the CLI lookup right after this list shows one way to find it.
    • <<ACCOUNT-NUMBER>>.dkr.ecr.us-west-2.amazonaws.com/sample-docker-rails-app
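One way to look up your repository's exact URI from the CLI (assuming you kept the repository name "sample-docker-rails-app" from earlier in the series) is:

# Print the URI of the ECR repository created earlier in the series
aws ecr describe-repositories \
  --repository-names sample-docker-rails-app \
  --query 'repositories[0].repositoryUri' \
  --output text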

[Screenshot: adding the REPO_URL and CI_REGION environment variables]

(3.14) With the environment variables added and the CodeBuild role fixed, we can run the pipeline properly. Go to CodePipeline and find the pipeline we just created. In the build stage, hit "Retry."

[Screenshot: retrying the build stage]

After a few minutes, you should see the pipeline run all the way through and your application deployed.
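If you'd rather not keep refreshing the console, you can also check on the pipeline from the CLI. This is just a convenience sketch and assumes you kept the pipeline name from step 3.3:

# Show the latest status of each stage in the pipeline
aws codepipeline get-pipeline-state \
  --name ruby-docker-app-pipeline \
  --query 'stageStates[].{stage:stageName,status:latestExecution.status}' \
  --output table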

4 | Testing the CI/CD Pipeline

To truly demonstrate that our CI/CD pipeline works, we create and push a change in our master branch. We should then be able to see it on our website a few minutes later.

(4.1) To demonstrate that something has changed, we show the site before adding a change. Get your ALB's URL and paste it in your browser. If you aren't sure where to find the URL, step 12.1 of this post demonstrates how to look for it.
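If you'd rather grab the URL from the CLI, a quick sketch is below. The load balancer name is a placeholder; substitute the name of the ALB you created for the ECS service.

# Print the ALB's DNS name (replace the placeholder with your ALB's actual name)
aws elbv2 describe-load-balancers \
  --names my-ecs-alb \
  --query 'LoadBalancers[0].DNSName' \
  --output text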

You should be able to see this site:

[Screenshot: the home page before the change]

(4.2) Let's go back to the application code. In the app/views/home/index.html.erb file, let's add a new line at the bottom.



Home Page:

<%= @message %>

<% @posts.each do |post| %>
  <h1> <%= post.title %> </h1>
  <h2> <%= post.author.name %> </h2>
  <p> <%= post.body %>

  <br>

  <p> 
    <%= link_to "Like", increment_async_path(post_id: post.id), method: :post %>

    Likes:
    <%= post.likes_count %> 
  </p>
<% end %>

<h1> Updated via CodeBuild! </h1>
<!-- THIS IS THE NEW LINE  -->
<h1> Update just now via CodePipeline!! </h1>



And then, let's commit and push this code:



git add -p
git commit -m "Added line for CodePipeline deployment."
git push origin master



(4.3) After pushing your update, you should see that our CI/CD pipeline is once again busy at work:

[Screenshot: the pipeline running again after the push]
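If you want to watch the ECS side of the rollout as well, you can list the service's deployments from the CLI. The cluster name below is a placeholder; use whatever you named your cluster in the earlier posts.

# The new task definition shows up as a PRIMARY deployment alongside the old
# ACTIVE one until the rollout finishes (replace the cluster name with yours)
aws ecs describe-services \
  --cluster my-ecs-cluster \
  --services web \
  --query 'services[0].deployments'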

(4.4) After a few minutes, you should see our code update live on our website.

[Screenshot: the home page showing the new line]

5 | Congratulations πŸ₯‡πŸ₯‡πŸ₯‡

You built your own Rails app in Docker and created a full CI/CD pipeline for it! You are now ready to take full advantage of CI/CD for your team.

Getting to this stage is no small feat. If you need any help implementing the walkthrough above, just let me know. Feel free to leave a comment below, or message me! I'd love to hear from you!

Special thanks to my editor, Allen, for making my posts more coherent.

Top comments (16)

NicoBuchhalter

Hey Raphael, Thanks for this!
I have two questions:
1) I'm not able to find what to use instead of "web" in the buildspec. Where do you find your container name? Is it related to the task definitions? If that's so, how would we handle the different build commands for sidekiq and web?
2) How would you configure it to have a production and a staging environment?

Raphael Jambalos

Hi Nico, addressing your concerns below:

" Im not being able to find what to use instead. of "web" in. the buildspec."

  • Yup, it is inside the task definition. When you open up the most recent version of your task definition, scroll down to find the "Container Definitions". There should be a table of sorts there where you will find a column for "Container Name".

"how would we handle the sidekiq and web different build commands?"

  • The build commands should be the same, at least for this tutorial series. What's different is the command used to start each container. For Rails, it's a variant of "rails server -p 3000". For Sidekiq, it's something like "bundle exec sidekiq -C sidekiq.yml". You need not worry about this in the CI/CD since this should already be differentiated in the task definition.

"How would you configure two have a production and a staging environment?"

  • Two separate CI/CD pipelines will be best.
NicoBuchhalter

Thank you!! I was able to run the deployments!!

One more thing, maybe you know how to do it. Before, I did it manually, but it's the same thing: when I update an ECS service with a new task definition, it takes forever for the service to pick up the new task definition, and now with CodePipeline it's the same thing; the Deploy stage doesn't finish because the deployment never completes.
In the past I solved this by stopping the task, but that's not good for a production environment, of course.
Is there some configuration that I may be missing, or is that just how it is?
Thanks again!

Raphael Jambalos

Hi Nico,

Ahh yes, it does take a while. This is because you are using the ECS deployment controller. Essentially, during deployment it creates new tasks with the new version while the old version is still running. The way I understand it, traffic is only redirected once the container reaches a healthy status and it passes the load balancer target group health check. To quote the AWS documentation:

"If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total."

Now, the default for "load balancer target group health check to return a healthy status" is 5 consecutive periods of 30 seconds. So that's at least 2.5 minutes after your containers are marked as healthy before traffic starts to reach your instances.

Raphael Jambalos

I'd totally recommend going for AWS CodeDeploy's blue/green deployment so your traffic (or at least part of it) will be shifted to the new version right away (or in parts, over a period of time). Rollbacks are also so much easier with this design.

NicoBuchhalter

Perfect! Yes, that's the type of deployment I'm looking for. I will try to configure it.
I figured out that my problem is that the container instance has 1024 CPU units available, and with both my tasks (each with 512), it reaches this limit. So when trying to do a new deployment, I guess it tries to create the new task first and then stop the old one, but there's no CPU left to have the "third" task running simultaneously. Do you know where I can configure how much CPU the container has?

Or should I use different containers for web and sidekiq? Maybe add to the buildspec to create two artifacts and configure each deployment to look at a different json file? What do you think?

Raphael Jambalos

Hi Nico,

" I guess it tries to create the new task first and then stop the old one, but there's no CPU to have the "third" task running simultaneosly. "

  • This is usually the case, so you probably need to scale up to 2 EC2 instances so you'd have 2 instances with 1024 CPU units each. There's also the concept of ECS Capacity Providers, which lets you auto scale the EC2 instances based on the capacity required by the containers being deployed (docs.aws.amazon.com/AmazonECS/late...).
  • The old approach was to put an Auto Scaling Group behind those EC2 instances, but the problem with that is you often have demand for instances to serve the containers even when the EC2 instances themselves don't have a spike in their CPU utilization.

Do you know where I can configure how much CPU the container has?

  • By the looks of this, you are trying to have web and sidekiq in one task definition. So you probably have 2 containers in your task definition's container definitions. I recommend just having separate task definitions for web and for sidekiq.
  • But if you want this setup, you can find the CPU and Memory options inside the container definition.

Or should I use different containers for web and sidekiq?

  • Different Task definitions, I believe. Can you elaborate on this?

Maybe add to the buildspec to create two artifacts and configure each deployment to look at a different json file? What do you think?

  • Yes, if it's 2 containers inside one task definition, this can be the case.
  • I recommend having separate task definitions for them so you can deploy them as separate ECS services. That way you can decouple them: if you have a spike in web, you don't need those extra containers deployed for sidekiq.
Jaye Hernandez

This post is worth the wait! πŸ’― This will be very helpful to us; clicking through each of the services is a hassle. πŸ˜…

I know you didn't touch on deploying to the "sidekiq" ECS service to make things simple, but just curious, if we need to do that, will we create a separate pipeline for it?

Raphael Jambalos

Hi Jaye, no need to create a separate pipeline. Just add a new deploy stage in the pipeline to deploy to that ECS service. You may have to change the container name of your sidekiq container from "sidekiq" to "web".

Jaye Hernandez

Ahh got it, thanks!! πŸ™πŸ»

NicoBuchhalter

By renaming it to web, wouldn't it be pointing to the other task definition?

Tejas Tholpadi

Thanks for this post!
How would you suggest handling one-off tasks? For example, database migrations that would need to be run before the Deploy stage in CodePipeline? Another example would be running tasks like rails console at any point in time.

Raphael Jambalos

Hi Tejas, sorry for the late reply, I've been swamped with work lately.

For database migrations, you can include them in your build process, but you must have a process for doing so, like forbidding "destructive" migrations (delete table / delete column) from being run inside the CI/CD. We just implemented this, but only "additive" migrations are allowed (add column / add table).

Also, it's best to take a point-in-time snapshot of the RDS database every time you run this.

For rails console tasks, we just enter the container via docker exec. We have Fargate containers for prod, but we left 1 ECS container running on EC2 for this exact purpose.

Tejas Tholpadi

No worries, and I appreciate you taking the time to reply. Here's what I ended up doing:

For the database migrations - I did include them in the build process, but I separated the migrations into another stage in the pipeline and implemented something similar to what the aws-rails-provisioner gem does (basically using another buildspec for the release stage).

For one-off tasks - I ended up writing a shell script which runs a task, waits for it to be placed in a container, runs docker exec -ti to open the console and kills the task when the console is closed. The script could be run by something like:
bash rails-console.sh --cluster "cluster_name" --task-definition "task_def_name" --profile "cli_profile_name"

Robert

Saving this for later πŸš€

Raphael Jambalos

Thanks man! Let me know what you think :D