This is a follow-up to my article on using Docker and the Traefik proxy to host multiple web projects.
In this article, I'll share my GitHub Actions workflow to build and deploy Docker images automatically.
The steps are simple:
- Configure the web project's docker-compose.yml.
- Push code to GitHub to trigger the Actions workflow.
- Specify the runner and Node version.
- Run the necessary setup steps.
- Build the web project within the runner.
- Build the Docker image.
- Push the Docker image to a private registry.
- Run SSH remote commands to spin down the existing Docker container on the remote server.
- SCP docker-compose.yml to the remote server.
- Run an SSH remote command to spin up the image on the remote server.
Assumptions
Say we have a Nuxt SSG project using the Bun runtime, and we have configured a Dockerfile to serve the built files with Nginx. (These assumptions are chosen for simplicity.)
FROM nginx
COPY .output/public /usr/share/nginx/html
We have a server running Docker and a Traefik container, and both some-domain-name.com and *.some-domain-name.com point to that server.
We will use a private SSH key to run remote commands and SCP files within the workflow script.
Lastly, we have a private container registry to store our Docker images; the example here uses Google Cloud Artifact Registry.
All keys are saved as secrets in GitHub.
Setup docker-compose.yml
We will use our private container registry, and add Traefik configuration to docker-compose.yml:
version: '3.8'
services:
  your-container-name:
    image: 'XXX-location.pkg.dev/XXX-project/XXX-repo/XXX-image'
    ports:
      - '80'
    labels:
      - 'traefik.enable=true'
      - 'traefik.port=80'
      - 'traefik.http.routers.xxx.rule=Host(`xxx.your-domain-name.com`)'
We will pull the image from the private registry, so refer to Google's Artifact Registry documentation for the actual image path.
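For reference, the full image reference is just the registry URL, project, repository, and image name joined with slashes; when no tag is given, Docker assumes :latest. A quick sketch using the placeholder values from this article:

```shell
# Placeholder values from this article; substitute your own.
REGISTRY_URL=XXX-location.pkg.dev
PROJECT_ID=XXX-project
REPO_NAME=XXX-repo
IMAGE=XXX-image

# The compose file's `image:` field should match this exactly.
echo "$REGISTRY_URL/$PROJECT_ID/$REPO_NAME/$IMAGE"
```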
Both ports and traefik.port should match the Nginx container's port (80 by default in this case).
Lastly, set the URL for Traefik to route traffic to your website (container).
We want to trigger the deployment workflow on push, so we set up our workflow as:
on:
  push:
    branches:
      - main
First, we perform all the necessary setup:
env:
  REGISTRY_URL: XXX-location.pkg.dev
  PROJECT_ID: XXX-project
  REPO_NAME: XXX-repo
  IMAGE: XXX-image

jobs:
  name-of-your-liking:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [20]
    environment: production
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.XXXX }}
        # you may want to use workload_identity_provider instead,
        # but this project was set up years ago using a credentials JSON.
      - uses: google-github-actions/setup-gcloud@v2
      - run: |-
          gcloud --quiet auth configure-docker $REGISTRY_URL
      # To be continued in the next section...
- We specify the Ubuntu runner.
- Use Node version 20 (LTS as of this writing).
- Set the environment to production.
- Perform checkout, setup, and authentication.
Second, generate the website:
- uses: oven-sh/setup-bun@v1
- run: |-
    bun install
    bun run generate
Whether to run generate or build depends on the scripts in your package.json.
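For a typical Nuxt 3 project, that scripts section might look like this (a hypothetical example using Nuxt's standard commands):

```json
{
  "scripts": {
    "build": "nuxt build",
    "generate": "nuxt generate",
    "preview": "nuxt preview"
  }
}
```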
Third, build the Docker image:
- run: |-
    docker build \
      --tag "$REGISTRY_URL/$PROJECT_ID/$REPO_NAME/$IMAGE" \
      .
Then, push to private registry:
- run: |-
    docker push $REGISTRY_URL/$PROJECT_ID/$REPO_NAME/$IMAGE
Configure SSH
Here comes the interesting part. We are going to set up the SSH config, remotely stop containers, SCP the new compose file, then spin up the new image.
We add our key to the SSH config like this:
- run: |
    mkdir -p ~/.ssh
    echo "$SSH_KEY" > ~/.ssh/some_key
    chmod 600 ~/.ssh/some_key
    cat >>~/.ssh/config <<END
    Host remote_server
      HostName $SSH_HOST
      User $SSH_USER
      IdentityFile ~/.ssh/some_key
      StrictHostKeyChecking no
    END
  env:
    SSH_USER: your_user_name_on_server
    SSH_KEY: ${{ secrets.XXX }}
    SSH_HOST: your-domain-name.com
We stop and remove the old image if it exists:
- run: ssh remote_server sudo docker compose -f $IMAGE/docker-compose.yml down
  continue-on-error: true
- run: ssh remote_server sudo docker image rm $REGISTRY_URL/$PROJECT_ID/$REPO_NAME/$IMAGE
  continue-on-error: true
We then copy docker-compose.yml to the server (in case it doesn't exist there yet, or has been modified):
- run: ssh remote_server mkdir -p $IMAGE
- run: scp docker-compose.yml remote_server:$IMAGE/docker-compose.yml
Finally, we spin up the newly built image:
- run: ssh remote_server sudo docker compose -f $IMAGE/docker-compose.yml up -d
Final words
There are some improvements that could be made.
I should tag each image version instead of always using 'latest', but doing so adds complexity to the script: it would need to pull the correct version and remove old versions (so they don't eat up server storage).
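As a sketch of what versioned tags could look like, the build and publish steps could tag each image with the commit SHA as well as latest (this is a hypothetical variation on this article's workflow, not what I currently run; docker push --all-tags pushes every tag of the image):

```yaml
# Hypothetical: tag each build with the commit SHA in addition to latest
- name: Build
  run: |-
    docker build \
      --tag "$REGISTRY_URL/$PROJECT_ID/$REPO_NAME/$IMAGE:$GITHUB_SHA" \
      --tag "$REGISTRY_URL/$PROJECT_ID/$REPO_NAME/$IMAGE:latest" \
      .
- name: Publish
  run: |-
    docker push --all-tags $REGISTRY_URL/$PROJECT_ID/$REPO_NAME/$IMAGE
```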
I'm not sure whether this method of putting the SSH key into the SSH config is a secure option. I'm assuming the runner (system) and the workflow scripts are "clean" in this case.
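One hardening option would be to pin the server's host key instead of setting StrictHostKeyChecking no. A sketch, assuming a hypothetical KNOWN_HOSTS secret that holds the output of running ssh-keyscan against your server:

```yaml
# Hypothetical: trust only a pinned host key instead of disabling checking
- name: Add known hosts
  run: |
    mkdir -p ~/.ssh
    echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
  env:
    SSH_KNOWN_HOSTS: ${{ secrets.KNOWN_HOSTS }}
```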
There is also some minor manual work to delete old Docker images, both in the private registry and on the remote server.
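The server-side half of that cleanup could be automated with a final workflow step; a sketch (docker image prune -f removes only dangling images, so the running image is untouched):

```yaml
# Hypothetical extra step: clean up dangling images on the server
- name: Prune old images
  continue-on-error: true
  run: ssh remote_server sudo docker image prune -f
```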
Final yaml
Here is the full yaml for your reference:
main.yaml
name: Build and Deploy to GCP

on:
  push:
    branches:
      - main

env:
  REGISTRY_URL: XXX-location.pkg.dev
  PROJECT_ID: XXX-project
  REPO_NAME: XXX-repo
  IMAGE: XXX-image

jobs:
  setup-build-publish-deploy:
    name: Setup, Build, Publish, and Deploy
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [20]
    environment: production
    steps:
      - uses: actions/checkout@v4
      # Setup gcloud CLI
      - uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.XXXX }}
        # you may want to use workload_identity_provider instead,
        # but this project was set up years ago using a credentials JSON.
      - uses: google-github-actions/setup-gcloud@v2
      # Configure Docker to use the gcloud CLI as credential helper for authentication.
      - run: |-
          gcloud --quiet auth configure-docker $REGISTRY_URL
      # Setup Bun runtime (I mean CLI)
      - uses: oven-sh/setup-bun@v1
      # Generate static files
      - name: Generate
        run: |-
          bun install
          bun run generate
      # Build Docker image
      - name: Build
        run: |-
          docker build \
            --tag "$REGISTRY_URL/$PROJECT_ID/$REPO_NAME/$IMAGE" \
            .
      # Push it to Artifact Registry
      - name: Publish
        run: |-
          docker push $REGISTRY_URL/$PROJECT_ID/$REPO_NAME/$IMAGE
      # Setup SSH config for remote Docker host
      - name: Configure SSH
        run: |
          mkdir -p ~/.ssh
          echo "$SSH_KEY" > ~/.ssh/some_key
          chmod 600 ~/.ssh/some_key
          cat >>~/.ssh/config <<END
          Host remote_server
            HostName $SSH_HOST
            User $SSH_USER
            IdentityFile ~/.ssh/some_key
            StrictHostKeyChecking no
          END
        env:
          SSH_USER: your_user_name_on_server
          SSH_KEY: ${{ secrets.XXX }}
          SSH_HOST: your-domain-name.com
      # SSH remote execute to stop existing Docker container
      - name: Stop old container
        continue-on-error: true
        run: ssh remote_server sudo docker compose -f $IMAGE/docker-compose.yml down
      # SSH remote execute to remove old image,
      # because we are using the same 'latest' tag for all image versions.
      - name: Remove old image
        continue-on-error: true
        run: ssh remote_server sudo docker image rm $REGISTRY_URL/$PROJECT_ID/$REPO_NAME/$IMAGE
      # SSH remote execute to create new folder
      - name: Mkdir
        run: ssh remote_server mkdir -p $IMAGE
      # SCP docker-compose.yml,
      # because it may not exist on the server, or may be outdated.
      - name: SCP
        run: scp docker-compose.yml remote_server:$IMAGE/docker-compose.yml
      # SSH remote execute to start new container
      - name: Run
        run: ssh remote_server sudo docker compose -f $IMAGE/docker-compose.yml up -d