Espoir Murhabazi

Shipping Python Code to AWS ECS using Github Actions

This is the last post in this series. In the first post we learned how to build the ship for our boatload: the CloudFormation stack and its different objects. In the second we learned how to build containers. Finally, in this one, we will see how to ship those containers to our boat using Github Actions.

This is not a post about Github Actions or CI/CD in general; there is a tremendous number of tutorials online covering those concepts.

If by any chance you are not familiar with CI/CD or Github Actions in general, refer to this guide and this one to get started.

Getting started

To get started, download the sample project we will be using by running the following command in your terminal (make sure you have git installed on your machine):

git clone https://github.com/espoirMur/deploy_python_to_aws_github_actions.git

As you can see, this is just a dummy project which runs four docker containers.

You can follow the readme to get the project running for you.

What we will accomplish and the tools we will use:


Our architecture and workflow in a nutshell

As you can see in the picture, on every push to the master branch our action will build a docker image for the application, log in to ECR, push the image to ECR, update the task definition with the URL of the newly pushed image, and start the service with the associated task definition in the AWS cluster.

Here is a list of the GitHub actions we will be using:

  • Configure-aws-credentials: configures the AWS credentials and region environment variables for use by the other GitHub Actions steps.
  • Amazon-ecr-login: logs the local Docker client in to one or more Amazon Elastic Container Registry (ECR) registries. Once logged in, we can push our docker images to the registry.
  • Amazon-ecs-render-task-definition: renders the docker image URI into the task definition.
  • Amazon-ecs-deploy-task-definition: the action that does the real deployment for us. It registers the task definition with ECS and then deploys it to an Amazon ECS service.
  • Docker Buildx: sets up the most recent docker build tool, buildx, which supports caching. It is not mandatory; if you don't need caching you can skip it.

Back to business: the code we want to deploy

Let's go back to the project I introduced at the beginning; we will work from it. From your command line, move to the project directory:

cd deploy_python_to_aws_github_actions

Activate your virtual environment with:

source .venv/bin/activate

Creating the Github actions:

To create Github Actions we can add them from the Github UI or from the command line. To do it from the command line, you need a folder called .github/workflows in your project directory, with your action .yml file inside it.

Let us create the folder:

mkdir .github && mkdir .github/workflows

Then we can create our action file with:

touch .github/workflows/deploy_aws.yml

Setting up

In the deploy to AWS action we add the following code:

```yaml
on:
  push:
    branches:
      - master

name: Deploy to Amazon ECS
```

In these lines we are only specifying the event that triggers our action: it will run on every push to master.

Next, let us specify the set of jobs that our action will run:

```yaml
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
```

This tells our job to run on an ubuntu instance. The job has the following steps:

```yaml
steps:
  - name: Checkout
    uses: actions/checkout@v1
```

This action checks out your repository under $GITHUB_WORKSPACE, so your workflow can access it.

```yaml
- name: Set up Python python-version
  uses: actions/setup-python@v1
  with:
    python-version: 3.7
```

This action sets up the python version to use for our application.

```yaml
- name: Set up QEMU
  uses: docker/setup-qemu-action@v1

- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v1
```

These set up the docker build tools we will be using.

```yaml
- name: create docker cache
  uses: actions/cache@v1
  with:
    path: ${{ github.workspace }}/cache
    key: ${{ runner.os }}-docker-${{ hashFiles('cache/**') }}
    restore-keys: |
      ${{ runner.os }}-docker-
```

This one creates the cache we will be using in the build phase.
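The cache key mirrors GitHub's `hashFiles('cache/**')` expression: it changes whenever any file under `cache/` changes, so a stale cache is never restored as current. A rough Python sketch of that idea (the real `hashFiles` uses different hashing internals; this is only an illustration):

```python
import hashlib
from pathlib import Path

def hash_files(root: str, pattern: str = "**/*") -> str:
    """Rough stand-in for GitHub's hashFiles(): one digest over the
    contents of every file matching the pattern under root."""
    digest = hashlib.sha256()
    for path in sorted(Path(root).glob(pattern)):
        if path.is_file():
            digest.update(path.read_bytes())
    return digest.hexdigest()

def cache_key(runner_os: str, root: str = "cache") -> str:
    """Compose the cache key used by the actions/cache step above."""
    return f"{runner_os}-docker-{hash_files(root)}"
```

If no exact key matches, `restore-keys` falls back to the most recent cache whose key starts with the `Linux-docker-` prefix.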

```yaml
- name: generating the config files
  run: |
    echo '''${{ secrets.CONFIGURATION_FILE }}''' >> .env
    echo "done creating the configuration file"
```

This one generates our configuration file: if you keep your environment variables in a .env file, this step recreates that file from the repository secret.
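In other words, the secret holds the entire contents of the `.env` file. A minimal Python sketch of what the step does, using a made-up sample secret (the variable names are hypothetical):

```python
from pathlib import Path

def write_env_file(secret_blob: str, path: str = ".env") -> dict:
    """Write the secret's contents to a .env file, then parse it
    back into a dict so we can sanity-check what was written."""
    Path(path).write_text(secret_blob)
    env = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

# Hypothetical sample of what the CONFIGURATION_FILE secret could hold.
sample = "AWS_REGION=us-east-2\nBROKER_URL=redis://localhost:6379/0\n"
config = write_env_file(sample, "/tmp/example.env")
print(config["AWS_REGION"])  # -> us-east-2
```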

```yaml
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: us-east-2
```

As the name states, this action configures your AWS credentials so that you can easily log in to ECR.
Don't forget to add your credentials to your Github repository secrets. If you are not familiar with how to add secrets to GitHub, refer to this guide.

```yaml
- name: Login to Amazon ECR
  id: login-ecr
  uses: aws-actions/amazon-ecr-login@v1
```

As the name states, this uses the credentials set up in the previous step to log in to the container registry.

Once we are logged in, we can build the container and push it to the container registry.

```yaml
- name: Build, tag, and push the image to Amazon ECR
  id: build-image
  env:
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    ECR_REPOSITORY: ecs-devops-repository
    IMAGE_TAG: ${{ github.sha }}
  run: |
    docker buildx build -f Dockerfile --cache-from "type=local,src=$GITHUB_WORKSPACE/cache" --cache-to "type=local,dest=$GITHUB_WORKSPACE/cache" --output "type=image,name=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG,push=true" .
    echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"
```

This builds the container image and pushes it to the container registry. Note that the output of this step is the image URI (the image name); we will need it in the next step.
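Under the hood, the `::set-output` line simply publishes the fully qualified image URI so later steps can read it as `steps.build-image.outputs.image`. A tiny sketch of what gets composed (the sample registry value is made up):

```python
def build_step_output(registry: str, repository: str, tag: str) -> str:
    """Compose the image URI and the ::set-output line that exposes
    it as steps.build-image.outputs.image to later steps."""
    uri = f"{registry}/{repository}:{tag}"
    return f"::set-output name=image::{uri}"

# Hypothetical registry value; the real one comes from the login step.
line = build_step_output(
    "123456789012.dkr.ecr.us-east-2.amazonaws.com",
    "ecs-devops-repository",
    "0a1b2c3d",
)
```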

In the next step, we will fill the image name in each container definition in our task-definition file so that the docker container will be pulling the newly built docker image.

These are three steps run in sequence; the output of each step is used by the next.

```yaml
- name: Fill in the new image ID in the Amazon ECS task definition of the beat container
  id: render-beat-container
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    task-definition: ./.aws/task-definition.json
    container-name: celery-beat
    image: ${{ steps.build-image.outputs.image }}

- name: Fill in the new image ID in the Amazon ECS task definition of the flower container
  id: render-flower-container
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    task-definition: ${{ steps.render-beat-container.outputs.task-definition }}
    container-name: flower
    image: ${{ steps.build-image.outputs.image }}

- name: Fill in the new image ID in the Amazon ECS task definition of the worker container
  id: render-worker-container
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    task-definition: ${{ steps.render-flower-container.outputs.task-definition }}
    container-name: celery-worker
    image: ${{ steps.build-image.outputs.image }}
```
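What these three render passes do can be sketched in Python: each pass loads the task definition, overwrites the image of one named container, and hands the result to the next pass. The skeleton task definition below is a made-up minimal example, not the project's real file:

```python
import copy

def render_task_definition(task_def: dict, container_name: str, image: str) -> dict:
    """Return a copy of the task definition with the named container's
    image replaced -- the essence of amazon-ecs-render-task-definition."""
    rendered = copy.deepcopy(task_def)
    for container in rendered["containerDefinitions"]:
        if container["name"] == container_name:
            container["image"] = image
            break
    return rendered

# A skeleton task definition with the three containers used in this post.
task_def = {
    "family": "ecs-devops-task",
    "containerDefinitions": [
        {"name": "celery-beat", "image": "old"},
        {"name": "flower", "image": "old"},
        {"name": "celery-worker", "image": "old"},
    ],
}
new_image = "123456789012.dkr.ecr.us-east-2.amazonaws.com/ecs-devops-repository:abc123"

# Chain the three render passes exactly as the workflow does.
for name in ("celery-beat", "flower", "celery-worker"):
    task_def = render_task_definition(task_def, name, new_image)
```

Because every pass substitutes the same newly built image, all three containers end up running the same image, each with its own command.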

With the task definition updated we can now push the task definitions to the service and start running the service.

```yaml
- name: Deploy Amazon ECS task definition
  uses: aws-actions/amazon-ecs-deploy-task-definition@v1
  with:
    task-definition: ${{ steps.render-worker-container.outputs.task-definition }}
    service: ecs-devops-service
    cluster: ecs-devops-cluster
    wait-for-service-stability: false
```

This is the step that does the actual deployment: it pushes the task definition to the service, which starts the tasks.

With all of this added, make sure you have the following content in your .github/workflows/deploy_aws.yml file:

```yaml
on:
  push:
    branches:
      - master

name: Deploy to Amazon ECS

jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1

      - name: Set up Python python-version
        uses: actions/setup-python@v1
        with:
          python-version: 3.7

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1

      # https://github.com/docker/setup-buildx-action
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1

      - name: create docker cache
        uses: actions/cache@v1
        with:
          path: ${{ github.workspace }}/cache
          key: ${{ runner.os }}-docker-${{ hashFiles('cache/**') }}
          restore-keys: |
            ${{ runner.os }}-docker-

      - name: generating the config files
        run: |
          echo '''${{ secrets.CONFIGURATION_FILE }}''' >> .env
          echo "done creating the configuration file"

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build, tag, and push the image to Amazon ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: ecs-devops-repository
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker buildx build -f Dockerfile --cache-from "type=local,src=$GITHUB_WORKSPACE/cache" --cache-to "type=local,dest=$GITHUB_WORKSPACE/cache" --output "type=image,name=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG,push=true" .
          echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"

      - name: Fill in the new image ID in the Amazon ECS task definition of the beat container
        id: render-beat-container
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: ./.aws/task-definition.json
          container-name: celery-beat
          image: ${{ steps.build-image.outputs.image }}

      - name: Fill in the new image ID in the Amazon ECS task definition of the flower container
        id: render-flower-container
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: ${{ steps.render-beat-container.outputs.task-definition }}
          container-name: flower
          image: ${{ steps.build-image.outputs.image }}

      - name: Fill in the new image ID in the Amazon ECS task definition of the worker container
        id: render-worker-container
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: ${{ steps.render-flower-container.outputs.task-definition }}
          container-name: celery-worker
          image: ${{ steps.build-image.outputs.image }}

      - name: Deploy Amazon ECS task definition
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.render-worker-container.outputs.task-definition }}
          service: ecs-devops-service
          cluster: ecs-devops-cluster
          wait-for-service-stability: false
```

With that, we can now commit the code and watch the pipeline deploy the application to AWS. Run the following to deploy:

git commit -am 'setup the ci cd pipeline'

git push origin master

We can check that our GitHub actions are running:

(screenshot: GitHub actions running)

If everything goes well you can visualize the deployment here.

Please replace the cluster and service names in the URL with your own.

If everything in your deployment goes well, you can check the logs of your worker to see what is happening there.

Troubleshooting:

Let me quote Albert Einstein here:

Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. 🤪

In theory, things should go as expected and everything should work on the first try, but in practice that is not always the case.

  • In case you have issues making this work, first make sure that in your GitHub actions and the task definition you used the correct names of the objects you created with the cdk.

  • In case your application connects to a managed database, make sure your instance has a security group attached that is allowed to make connections to the database. Security groups and networking are beyond the scope of this blog; maybe in a fourth part of the series I can talk a little about them.

  • If after deploying nothing is running, you can check the status of your tasks with the following command:

aws ecs list-tasks --cluster ecs-devops-cluster --region us-east-2 --desired-status STOPPED

This returns the ARNs of the stopped tasks.
Then use one of those ARNs in the following command to check why the task stopped:

aws ecs describe-tasks --cluster ecs-devops-cluster --tasks task_arn_from_previous_step --region us-east-2 --debug

If you are lucky enough you should see why your tasks are not working here.
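If you prefer to script this check, here is a small sketch that digs the stop reason out of a describe-tasks style response. The response shape below mirrors the JSON the AWS CLI prints; the sample values are made up:

```python
def stopped_reasons(describe_tasks_response: dict) -> list:
    """Pull the human-readable stop reason out of each task in an
    `aws ecs describe-tasks` style response."""
    return [
        task.get("stoppedReason", "unknown")
        for task in describe_tasks_response.get("tasks", [])
    ]

# Made-up sample mirroring the shape of the CLI's JSON output.
sample_response = {
    "tasks": [
        {
            "taskArn": "arn:aws:ecs:us-east-2:123456789012:task/abc",
            "lastStatus": "STOPPED",
            "stoppedReason": "Essential container in task exited",
        }
    ]
}
print(stopped_reasons(sample_response))  # -> ['Essential container in task exited']
```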

Conclusions

In this three-part series we learned how to create a scalable architecture to deploy our python application to AWS, and how to use Github Actions to ship a simple application there. To sum up, we added some useful commands for troubleshooting an ECS service and its tasks. I hope you enjoyed reading this tutorial. If you encountered any issues while working through it, feel free to let us know in the comments.

In the meantime, take care of yourself and happy coding.

Resources

Here is a non-exhaustive list of resources I used in this blog post :
