Raphael Jambalos

Deep Dive on AWS CodeBuild

In the past 2 posts, we built a Rails app in Docker and deployed it in Amazon ECS. In this post, we will use AWS CodeBuild to automate building Docker images and deployments in ECS.

This post is not necessary to set up a functional CI/CD pipeline for our Rails app. This CodePipeline post is much shorter and gets you up and running faster. But if you want a deeper understanding of CodeBuild, read along.

We will be using Rails for this blog post. If you're using another web framework (e.g., Sinatra, Django, or Flask), this post can still serve as a reference for automating your deployments in ECS.

1 | Why do we need CodeBuild?

CodeBuild is a fully managed build service in the cloud. It eliminates the need for build servers, which run 24/7 but are used for only a few minutes per deployment.

Deployment without CodeBuild

Before we understand what CodeBuild can do for an ECS setup, we have to run through what a typical deployment looks like without it. The steps here are just an overview. For a detailed discussion of how each step is done, click through the link and go to the section indicated by the number:

  • Push your code to GitHub / BitBucket
  • (5) Build the Docker image on your local machine - this can also be done on a dedicated build server.
    • (5.1) docker build . -t my-rails-app - to build the Docker image with your app's code and dependencies
    • (5.2) docker tag my-rails-app:latest <your-ecr-repo-uri>:v1.0.1 - tag your Docker image with the correct URI. Notice that we bumped the patch version (from 1.0.0 to 1.0.1).
    • (5.3) $(aws ecr get-login --no-include-email --region ap-southeast-1) - login to AWS ECR
    • (5.4) docker push <your-ecr-repo-uri>:v1.0.1 - push the Docker image to ECR
  • (16) Create a Task Definition Revision - change the Docker image version from 1.0.0 to 1.0.1, and create a new revision.
    • Change the image version in the container definition
  • (17) Update the service to deploy the latest task definition revision

Deployment with CodeBuild

After integrating CodeBuild, our deployment is now reduced to 3 steps:

  • Push your code to GitHub / BitBucket
  • Go to your build project in CodeBuild and click "Start Build" - this builds the Docker image, pushes it to ECR, and creates a new task definition revision
  • Update the service to deploy the latest task definition revision

Benefits

If the reduction in steps (from 7 to 3) made you consider automating your deployments, check out these other benefits of having CodeBuild:

  • It reduces the manual work devs have to do to deploy their applications, so they can focus on what they do best: developing features.
  • Since devs just have to push their code and press a button, we reduce the risk of a developer making a mistake during deployment.
  • It eliminates the need for a build server, saving on AWS server costs (not to mention the time needed to maintain the build server).
  • CodeBuild requires each dev to have an AWS IAM User in your AWS account. This is good for accountability since you would know in the logs who deployed what, as well as when.
  • It paves the way for an automated CI/CD approach where all you have to do is push your code, and it will be deployed to production after a series of tests.

2 | Concepts

Build servers usually take application code as input. We set up the build server's environment to have the necessary runtimes to build our application code. The runtime could be Python, Ruby, Docker, etc. We specify a series of build commands to build our application code and produce a package we can deploy. For Docker deployments, this is a Docker image.

Since CodeBuild is just an automated build service, it still needs to perform the steps above. With CodeBuild, though, you don't manage the servers that do the build. Each build runs inside a Docker container with the runtime(s) necessary to build your application. You just need a place to enter all those configurations.

The primary concept in CodeBuild is the Build Project. It answers four questions needed to operate a build server:

Where is the source code?

CodeBuild pulls your code so it can build an artifact from it (for Docker deployments, this is the Docker image). You have to specify where it can pull your code, and give it the necessary permissions to do so.

You can have your code in source code hosts: GitHub, BitBucket or AWS's CodeCommit. You can also have your code in S3 (but seriously, it's hard to maintain a codebase if it's just in S3). You will also be asked to give the necessary permissions. The way to do this is slightly different per source code host.

What build environment are we going to use?

CodeBuild uses a Docker container to build your application. The build environment defines what runtime, OS, and tools will be in the container. These will be available during your build process.

The end goal of our build is to produce a Docker image out of our application code. Hence, the build environment we will choose should have Docker pre-installed.

What build command will we run?

This is the series of commands you execute during the build process. These commands are stored in buildspec.yml.

Where will we put the artifact?

Since we push the Docker image to ECR ourselves in the build commands, we will choose "No artifacts" for this option.
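
To make these four questions concrete, here is a minimal sketch of how they map onto CodeBuild's API, using the aws-sdk-codebuild Rubygem. This is not how we will create our build project (we will use the console in section 3); the project name, repository URL, and role ARN below are placeholders.

# frozen_string_literal: true

require "aws-sdk-codebuild"

client = Aws::CodeBuild::Client.new(region: "ap-southeast-1")

client.create_project(
  name: "my-rails-app-builder", # placeholder project name
  # (1) Where is the source code?
  source: {
    type: "GITHUB",
    location: "https://github.com/<your-user>/<your-repo>.git" # placeholder
  },
  # (2) What build environment are we going to use?
  environment: {
    type: "LINUX_CONTAINER",
    image: "aws/codebuild/standard:1.0",
    compute_type: "BUILD_GENERAL1_SMALL",
    privileged_mode: true # needed to run docker build inside the build
  },
  # (3) What build command will we run? Nothing to declare here:
  # CodeBuild reads buildspec.yml from the repo root by default.
  # (4) Where will we put the artifact? Nowhere - we push the image to ECR ourselves.
  artifacts: { type: "NO_ARTIFACTS" },
  service_role: "arn:aws:iam::<<YOUR_AWS_ACCOUNT_NUMBER>>:role/my-codebuild-role" # placeholder
)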

3 | Create a Build Project

  • This assumes you have a GitHub account and a Git repository set up. If not, you can sign up for a GitHub account here, and create a GitHub repository here.
  • You can follow along with the app we've been building for the last 3 posts by downloading the source code, or apply the steps to your own Dockerized application deployed in ECS. Since this post has no Rails/Ruby-specific configuration, any application built on top of Docker and deployed via ECS will do.
  • The Ruby scripts we will include later are well-documented and well-explained. You don't need any Ruby experience to run them. The CodeBuild environment we will choose already has Ruby 2.5 pre-installed, so the Ruby scripts should run just fine.

(3.1) On the services tab, search for CodeBuild and click it. On the page's right-hand side, click "Create build project". Make sure you are in the same region as your ECS cluster.


(3.2) Since we pushed our application to a GitHub repository, choose "GitHub" for the Source Provider. You will be asked to grant GitHub access to AWS CodeBuild. Do this by clicking "Connect to GitHub".


(3.3) Sign in to your GitHub account


(3.4) Authorize AWS CodeBuild to access your GitHub repositories by approving the request: "Authorize aws-codesuite"

(3.5) After that, you will be directed back to the build project. Add the repository URL.

(3.6) Scroll down to the Environment section.

  • Choose "Ubuntu" as your Operating System
  • Choose "Standard" as your runtime
  • Choose "aws/codebuild/standard:1.0" as the image. Do not choose standard:2.0 as this still has problems with Ruby as of writing this post.
  • Choose "Always use the latest image for this runtime version"
  • Make sure to tick the Privileged tickbox. If you don't, CodeBuild won't be able to build your Docker image.
  • Select "New Service Role". The role that will be created defines which AWS resources CodeBuild can access during the build process.


(3.7) After creating the build project, you should land on the build project's page.

4 | Configure the application

Now that we have our build project ready, we have to set up scripts to do the following:

  • Notice that we implemented semantic versioning with our Docker images. Our first image version is 1.0.0, the next one is 1.0.1, and so on. Because of this, we have to determine the latest image version (say it's 1.0.0) and add 1 to the patch version, so our next image version becomes 1.0.1.
  • Since the image version changed (from 1.0.0 to 1.0.1), we have to update our Task Definition. We will create a task definition revision with the new Docker image version. If you're not familiar with Task Definitions, I recommend reading my earlier blog post on ECS concepts.

For this task, I developed 2 Ruby scripts. If your app is not in Ruby, that's okay; we can still use these scripts since the CodeBuild environment we are using has the Ruby runtime.

(4.1) Create a scripts folder in the Rails root directory by using the command mkdir scripts.

(4.2) Create the file load_latest_image_version_number.rb in the scripts folder. Copy and paste the script from below.

This file determines the latest image version by listing all Docker images pushed to the repository, filtering for the v1.0.x tags, and taking the highest patch number. Do make sure to change the repository_name variable if your ECR repository is named differently.

# frozen_string_literal: true

require "aws-sdk-ecr"

# (1) Fetch every Docker image ever pushed to this ECR repository
# (list_images paginates, which is why we loop with next_token)

major_version = "v1"
repository_name = "sample-docker-rails-app"

client = Aws::ECR::Client.new

image_fetcher = client.list_images(repository_name: repository_name)
images = image_fetcher.image_ids

until image_fetcher.next_token.nil?
  image_fetcher = client.list_images(repository_name: repository_name,
                                     next_token: image_fetcher.next_token)
  images += image_fetcher.image_ids
end

# (2) We only want to look at images with tag "v1.0.x", so filter the others out.

image_tags = images.map(&:image_tag)

relevant_images = image_tags.compact.select do |t|
  c = t.split(".")

  c[-2] == "0" && c[-3] == major_version
end

# (3) Get the max patch number, add 1, and that's the version we use!
# ("|| 0" guards against an empty repository, making the first build v1.0.1)
latest_series_zero_image = relevant_images.map { |t| t.split(".")[-1].to_i }.max || 0

# (4) Output the version to standard out!
puts "#{major_version}.0.#{latest_series_zero_image + 1}"
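
If you want to sanity-check the script locally, run ruby scripts/load_latest_image_version_number.rb from the Rails root in a shell that has AWS credentials with ECR read access (and the aws-sdk-ecr gem installed). It should print the next version, e.g. v1.0.2 if v1.0.1 is the latest tag in the repository.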

(4.3) Create the file update_task_definition.rb in the scripts folder. Copy and paste the script from below. The code is explained in the comments of each block. In this script, you will notice the extensive use of environment variables. Where are those env variables coming from? We will define them in section 5. For now, consider all of them filled up properly.

This snippet is divided into 4 parts. These 4 parts, together, do the following:

  • It reads the environment variables of your CodeBuild setup. In part 2, those prefixed with ECS_ will be placed in the task definition.
  • We have 2 task definitions to update: web and Sidekiq. The two are roughly 80% identical, so in part 1 we define a variable called base_td to hold everything they share. In part 3, we copy this base for each task definition.
  • For the remaining 20%, we set the differences manually in part 3. What's the 20% difference?
    • The command to run the web service is puma -C config/docker_puma.rb. For Sidekiq, we have sidekiq -C config/sidekiq.yml.
    • Sidekiq and web are configured to put their logs in different log groups in CloudWatch.
    • The name of the task definition of Sidekiq and web are different.
  • We then push the 2 task definitions into AWS so we will have a new revision that we can deploy.
# frozen_string_literal: true

require "aws-sdk-ecs"

# (PART 1) Define the ECS client, deep_copy method and the base task definition

client = Aws::ECS::Client.new

# Marshal round-trip produces a full deep copy of nested hashes/arrays,
# so edits to web_td don't leak into sidekiq_td
def deep_copy(o)
  Marshal.load(Marshal.dump(o))
end

base_td = {
            container_definitions: [
              {
                log_configuration: {
                  log_driver: "awslogs",
                  options: {
                    "awslogs-stream-prefix" => "ecs",
                    "awslogs-region" => "ap-southeast-1"
                  }
                },
                port_mappings: [
                  {
                    host_port: 0,
                    protocol: "tcp",
                    container_port: 8080
                  }
                ],
                name: "web"
              }
            ],
            placement_constraints: [],
            memory: "1024",
            cpu: "512",
            volumes: []
          }

# (PART 2) All env variables prefixed with "ECS_" will be included in the task definition

env_vars = []

relevant_envs = ENV.select { |k, _| k[0..3] == "ECS_" }

relevant_envs.each do |key, value|
  # skip this variable from being included
  next if key == "ECS_CONTAINER_METADATA_URI"

  proper_key = key.sub("ECS_", "") # strip the prefix; ECS expects string names, not symbols
  env_vars << {
    name: proper_key,
    value: value
  }
end

# (PART 3) Define how the web task definition and the Sidekiq task definition differs

log = {
  web: ENV["CLOUDWATCH_WEB_LOG_GROUP"],
  sidekiq: ENV["CLOUDWATCH_SIDEKIQ_LOG_GROUP"],
}

base_td[:container_definitions][0][:image] = "#{ENV['REPO_URL']}:#{ENV['LATEST_VERSION']}"
base_td[:container_definitions][0][:environment] = env_vars

web_td = deep_copy(base_td)
web_td[:container_definitions][0][:command] = ["puma", "-C", "config/docker_puma.rb", "-p", "8080"]
web_td[:container_definitions][0][:log_configuration][:options]["awslogs-group"] = log[:web]
web_td[:container_definitions][0][:name] = "web"
web_td[:requires_compatibilities] = ["EC2"]
web_td[:family] = ENV["TASK_DEFINITION_WEB"]
web_td[:network_mode] = nil

sidekiq_td = deep_copy(base_td)
sidekiq_td[:container_definitions][0][:command] = ["sidekiq", "-C", "config/sidekiq.yml"]
sidekiq_td[:container_definitions][0][:log_configuration][:options]["awslogs-group"] = log[:sidekiq]
sidekiq_td[:container_definitions][0][:name] = "web"
sidekiq_td[:requires_compatibilities] = ["EC2"]
sidekiq_td[:family] = ENV["TASK_DEFINITION_SIDEKIQ"]
sidekiq_td[:network_mode] = nil

# (PART 4) Create a new revision of the web and Sidekiq task definitions

client.register_task_definition(web_td)
client.register_task_definition(sidekiq_td)

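One design note: the script deliberately stops at register_task_definition. It creates new task definition revisions but does not deploy them; updating the web and Sidekiq services to point to the new revision remains a separate step, which we do manually in section 6.7 (a sketch for scripting that step as well appears there).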

(4.4) We also have a buildspec.yml. This file is required for CodeBuild to run and must be in the root directory of your application. Consider buildspec.yml a set of instructions you give CodeBuild on how to build your image.

Typical buildspec.yml documents have 2 main components: (i) version, and (ii) phases. The version we use is 0.2, the latest as of this writing. The phases portion defines the lifecycle of a typical build process. For our setup, we choose to define the lifecycle this way:

  • Pre-build
    • Login to AWS ECR
    • Install aws-sdk-ecr and aws-sdk-ecs. These Rubygems will be used by the scripts in 4.2 and 4.3.
    • Get the latest version of our image by executing the script we described in 4.2
    • Create the shared/pids and shared/sockets folders so Puma has a place to store its PID and socket files.
  • Build
    • Do the Docker build and tag the image properly
  • Post-build
    • Push the image to Amazon ECR
    • Update the task definition by running the script we described in 4.3
version: 0.2 

phases: 
  install:
    runtime-versions:
        docker: 18
    commands:
      # start the Docker daemon manually, then wait (up to 15 seconds) until it
      # responds; docker build would fail without a running daemon
      - nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://127.0.0.1:2375 --storage-driver=overlay2&
      - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"
  pre_build:
    commands:
    - echo Logging in to Amazon ECR....
    - aws --version
    - $(aws ecr get-login --no-include-email --region $CI_REGION)

    - echo "including rubygems.org as repo"
    - gem sources --add https://rubygems.org/

    - echo "installing aws-sdk-ecr"
    - gem install 'aws-sdk-ecr'

    - echo "installing aws-sdk-ecs"
    - gem install 'aws-sdk-ecs'

    - echo "load the latest ECR image revision number"
    - LATEST_VERSION=$(ruby scripts/load_latest_image_version_number.rb)
    - echo the version now is $LATEST_VERSION

    - echo "Creating folders for pid files"
    - mkdir shared
    - mkdir shared/pids
    - mkdir shared/sockets
  build: 
    commands: 
    - echo Build started on `date`
    - echo Building the Docker image...
    - docker build -t mydockerrepo .

    - docker tag mydockerrepo:latest $REPO_URL:$LATEST_VERSION
  post_build: 
    commands: 
    - echo Build completed on `date` 
    - echo pushing to repo
    - docker push $REPO_URL:$LATEST_VERSION
    - ruby scripts/update_task_definition.rb

(4.5) Edit the homepage via vi app/views/home/index.html.erb and add the snippet at the bottom of the file. This will prove that our deployment via CodeBuild was successful.

<h1> Updated via CodeBuild! </h1>

(4.6) Review the changes you've made. If you're satisfied with them, commit and push to GitHub.

git add .
git commit -m "Add initial scripts, and change to the HTML part"
git push origin master

5 | Add Environment Variables

In step 4.3, we made a Ruby script that references a lot of environment variables. In this section, we will configure these environment variables so that the script can do its job.

(5.1) Go to CodeBuild in the services tab. Go to Build Projects, and look for the build project we just created. For me, that's named ruby-docker-app. Click the "Edit" dropdown, and choose "Environment".


(5.2) Click the "Additional configuration" section to reveal the environment variables section.


(5.3) We will place the environment variables here. We have 2 kinds of environment variables:

  • Variables we add to the task definition - These variables must be prefixed with "ECS_". The prefix signals to the script in 4.3 that they should be included in the task definition, making them available to our application.
    • ECS_POSTGRES_DB, ECS_POSTGRES_HOST, ECS_POSTGRES_PASSWORD, ECS_POSTGRES_USER - the PostgreSQL database connection details.
    • ECS_RAILS_ENV - set to staging.
    • ECS_RAILS_LOG_TO_STDOUT - set to ENABLE so Rails logs to stdout, where CloudWatch can ingest the logs
    • ECS_RAILS_MASTER_KEY - set to the Rails master key
    • ECS_REDIS_URL - set to the Redis URL
  • Variables we will use exclusively for the build process - these variables are essential for building the image, but are not needed afterwards.
    • REPO_URL - set to the URI of your ECR repository.
    • TASK_DEFINITION_WEB, TASK_DEFINITION_SIDEKIQ - set to the names of the web and Sidekiq task definitions. For me, that's docker-rails-app for web, and docker-rails-app-sidekiq for Sidekiq.
    • CLOUDWATCH_WEB_LOG_GROUP, CLOUDWATCH_SIDEKIQ_LOG_GROUP - set to the CloudWatch log groups of the web and Sidekiq services, respectively.
    • CI_REGION - set to ap-southeast-1, or the region where your ECS cluster is deployed.
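
If you'd rather script these variables than click through the console, below is a hedged sketch using the aws-sdk-codebuild Rubygem. Two assumptions to note: update_project replaces the entire environment block, so the type, image, and compute type must restate what you chose in step 3.6 (the compute type below is a guess), and all values shown are placeholders.

# frozen_string_literal: true

require "aws-sdk-codebuild"

client = Aws::CodeBuild::Client.new(region: "ap-southeast-1")

client.update_project(
  name: "ruby-docker-app", # your build project's name
  environment: {
    # update_project replaces the whole environment, so restate your choices from 3.6
    type: "LINUX_CONTAINER",
    image: "aws/codebuild/standard:1.0",
    compute_type: "BUILD_GENERAL1_SMALL",
    privileged_mode: true,
    environment_variables: [
      { name: "ECS_RAILS_ENV", value: "staging" },
      { name: "CI_REGION", value: "ap-southeast-1" },
      { name: "REPO_URL", value: "<your-ecr-repo-uri>" } # placeholder
      # ...and the rest of the variables listed above
    ]
  }
)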

(5.4) After you save it, go back to the build project and click "Start Build".


You should see the build form. For the source section, set the source version to "master", and then click Start Build.


(5.5) You should see this screen. If you press "Tail logs", you will see a lot of logs from the build process.


(5.6) The build process will break because we didn't give CodeBuild permissions to access ECR and ECS. In the next section, we will set up the required permissions.


6 | Setting up permissions for CodeBuild

(6.1) Create an IAM policy. Switch the editor to JSON and paste the policy below. Make sure to replace <<YOUR_AWS_ACCOUNT_NUMBER>> with your account number. This policy allows us to create task definition revisions (without granting full access to ECS).

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "ecs:RegisterTaskDefinition",
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::<<YOUR_AWS_ACCOUNT_NUMBER>>:role/ecsTaskExecutionRole"
        }
    ]
}


(6.2) Review the IAM Policy. Once you're satisfied with it, hit "Create policy". Name the policy "ecs-policy-for-code-build".


(6.3) Get the name of the build project's service role.

On a separate tab, go to the build project page, click the "Edit" dropdown, and then click "Environment". You should see the service role there.

(6.4) Then, go to IAM Roles. Search for the IAM role you got in step 6.3. Click the role.

(6.5) Click Attach Policies


(6.6) Search for and attach the AmazonEC2ContainerRegistryPowerUser policy to provide power user access to ECR. This allows us to get the latest image version, and also to push images to ECR.

Then, search and attach the policy we created in step 6.2. If you followed along precisely, it should be named ecs-policy-for-code-build.


Your IAM role should now have 3 policies attached.

Now, our build project is properly configured to run builds.

(6.7) Run the build again, following step 5.4. Once it completes, go to the web and Sidekiq services. You should see that there's a new task definition revision (and that the revision currently used by those services is now outdated).


Update both services to the latest task definition revision and deploy them. Follow section 17 of my earlier blog post for specific steps on how to do this.
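
If you'd like to script this final step too, here is a minimal sketch using the aws-sdk-ecs gem. It assumes your services are named after their task definition families, and the cluster name is a placeholder, so substitute your own. Passing the family name without a :revision suffix makes ECS use the latest ACTIVE revision.

# frozen_string_literal: true

require "aws-sdk-ecs"

client = Aws::ECS::Client.new(region: "ap-southeast-1")

%w[docker-rails-app docker-rails-app-sidekiq].each do |family|
  client.update_service(
    cluster: "my-ecs-cluster",   # placeholder - use your cluster's name
    service: family,             # assumes the service shares its task definition's name
    task_definition: family,     # family only => ECS picks the latest ACTIVE revision
    force_new_deployment: true
  )
end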

(6.8) Wait a few minutes. If everything went well, you should be able to see the updated homepage with the text: "Updated via CodeBuild!"


Finish!

Now, we can deploy our ECS Rails app via CodeBuild! As with any infrastructure project, the bulk of the work is during setup. But after this, your deploys should be much easier!

If you have any comments, suggestions or just want to let me know how this series has helped you, feel free to leave a comment below, or message me! I'd love to hear from you!

Special thanks to my editor, Allen, for making my posts more coherent.

Top comments (2)

Simon Lau

Hey Raphael, how do you suggest handling things like DB migrations as part of code-build?

Or is that more of a code-deploy thing?

Raphael Jambalos

It’s more of a code deploy thing.

For our team, we don't have CodeDeploy yet, so we just manually deploy to our Sidekiq service, do the db migration there, and then deploy to the web service. After that, we re-deploy to web and Sidekiq again.

In CodeDeploy, I imagine you will have to choose one service (for me, it's Sidekiq) and do the deployment there before proceeding with deploying all the other services.