During Google Summer of Code 2024, I had the exciting opportunity to be responsible for the full deployment of a Next.js application, taking it from development to production. One of the key challenges was hosting the app on a cloud platform that isn't as widely used as AWS or Azure: Arbutus, Compute Canada's academic cloud, which is built on open-source infrastructure and known for its robustness.
At the start, my deployment process was simple but inefficient. I would git pull the latest code onto the server, build the app directly on the instance, and use PM2 to manage the process with a command like:
pm2 start --name process_name npm -- start
While this worked, it wasn't scalable or ideal for managing more complex systems. I wanted something that could handle updates more smoothly, keep the environment consistent, and make scaling easier. That's when I decided to containerize the app using Docker. Docker simplifies managing dependencies and configurations across different environments, allowing me to deploy quickly and consistently.
Why Traefik? The Smart Choice for Reverse Proxy and Load Balancing
To complement Docker, I chose Traefik as the reverse proxy and load balancer for my server. Traefik stood out as a perfect fit due to its seamless integration with Docker and ability to auto-discover services. Here's why Traefik was a game-changer:
- Dynamic Configuration: Traefik automatically updates its routing rules based on container states, which saves time and reduces manual configuration.
- Load Balancing: For applications that might scale across multiple Docker containers, Traefik efficiently balances incoming requests, enhancing performance.
- Modern, Lightweight: It's built for cloud-native environments, making it perfect for projects with evolving infrastructure needs like mine.
With Traefik in place, I also leveraged Cloudflare as a proxy for all incoming traffic, ensuring additional security and performance benefits. Cloudflare not only helped with DNS management but also generated SSL/TLS certificates, further streamlining the process of enabling HTTPS for secure connections.
Understanding the Dockerfile for the Next.js App
FROM node:20-alpine AS base
# 1. Install dependencies only when needed
FROM base AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
  if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
  elif [ -f package-lock.json ]; then npm ci; \
  elif [ -f pnpm-lock.yaml ]; then corepack enable pnpm && pnpm i; \
  else echo "Lockfile not found." && exit 1; \
  fi
# Accept build arguments
ARG NEXT_PUBLIC_BACKEND_URL
# 2. Rebuild the source code only when needed
FROM base AS builder
WORKDIR /app
# Declare ARG again here
ARG NEXT_PUBLIC_BACKEND_URL
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN echo "NEXT_PUBLIC_BACKEND_URL=$NEXT_PUBLIC_BACKEND_URL" >> .env
RUN npm run build
# 3. Production image, copy all the files and run next
FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
COPY --from=builder /app/public ./public
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"
CMD ["node", "server.js"]
The Dockerfile is designed to follow a multi-stage build approach, starting with a base image that installs dependencies, then building the Next.js app, and finally copying the production build to a minimal image for optimal performance. This setup ensures that the image remains lightweight and optimized for production.
This Dockerfile is based on the one officially provided by Next.js in its with-docker example.
However, I faced a challenge when I needed to inject environment variables, specifically NEXT_PUBLIC_BACKEND_URL, into the build process. Next.js inlines NEXT_PUBLIC_* variables into the frontend bundle at build time, so the value must be available before the build runs. Simply hard-coding it with the ENV directive in the Dockerfile wasn't enough, because the value differs between deployments and has to be supplied from outside at build time.
To solve this, I used Docker's ARG directive, which allows passing build arguments during the image build process. By declaring NEXT_PUBLIC_BACKEND_URL as an ARG in the Dockerfile, I could dynamically inject the value via GitHub Actions and copy it into .env while building:
# Inside Dockerfile
RUN echo "NEXT_PUBLIC_BACKEND_URL=$NEXT_PUBLIC_BACKEND_URL" >> .env
RUN npm run build
# GitHub Actions
# Step 4: Build and push Docker image to GitHub Packages
- name: Build and push Docker image
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: ${{ steps.meta.outputs.tags }}
    build-args: |
      NEXT_PUBLIC_BACKEND_URL=${{ secrets.NEXT_PUBLIC_BACKEND_URL }}
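To make the mechanism concrete, here is a minimal sketch of what the Dockerfile's RUN step does with the build argument. The URL below is a placeholder, not the project's real backend; in CI the value comes from the NEXT_PUBLIC_BACKEND_URL secret, and locally you would pass it with docker build --build-arg NEXT_PUBLIC_BACKEND_URL=... instead.

```shell
# Inside a RUN step, a declared ARG behaves like an ordinary shell variable.
# Placeholder value -- in CI it is supplied by the GitHub Actions secret.
NEXT_PUBLIC_BACKEND_URL="https://api.example.com"

# Append it to .env exactly as the Dockerfile's RUN step does,
# so the subsequent `npm run build` can inline it into the frontend bundle.
echo "NEXT_PUBLIC_BACKEND_URL=$NEXT_PUBLIC_BACKEND_URL" >> .env

cat .env
```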
Configuring Traefik for the Next.js Deployment
To set up Traefik as the reverse proxy for my Next.js application, I started by creating a dedicated directory named traefik
. This directory would hold all the necessary configuration files, including traefik.yml
for the main configuration and acme.json
to store the SSL certificates generated by Traefik.
Step 1: Create the Traefik Directory and Files
First, I created the acme.json file, which stores the SSL certificates Traefik obtains via the ACME protocol (Let's Encrypt by default). To keep this file secure, I set restrictive permissions; Traefik refuses to use it otherwise.
mkdir -p /path/to/traefik
touch /path/to/traefik/acme.json
chmod 600 /path/to/traefik/acme.json
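As a quick sanity check, the commands above can be replayed in a scratch directory to confirm the permission bits come out right (the directory name here is illustrative; stat -c is the GNU coreutils form, on macOS use stat -f '%Lp' instead):

```shell
# Recreate the acme.json setup in a throwaway directory.
mkdir -p ./traefik-demo
touch ./traefik-demo/acme.json
chmod 600 ./traefik-demo/acme.json

# Traefik only accepts acme.json when the owner alone can read and write it.
stat -c '%a' ./traefik-demo/acme.json
```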
This resulted in the following folder structure:
traefik
├── traefik.yml
└── acme.json
Step 2: Create the Traefik Configuration File
Next, I created the traefik.yml
configuration file with the following contents:
api:
  dashboard: false
  debug: false

entryPoints:
  http:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
  https:
    address: ":443"

serversTransport:
  insecureSkipVerify: true

providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false

certificatesResolvers:
  cloudflare:
    acme:
      email: example@email.com
      storage: acme.json
      dnsChallenge:
        provider: cloudflare
        # disablePropagationCheck: true # uncomment this if you have issues pulling certificates through Cloudflare.
        # delayBeforeCheck: 60s # uncomment along with disablePropagationCheck if needed.
        resolvers:
          - "1.1.1.1:53"
          - "1.0.0.1:53"
Creating a Cloudflare API Token
To enable Traefik to interact with Cloudflare for SSL certificate management, I needed to create a Cloudflare API token. Here's how to do it:
Access the Cloudflare Dashboard:
- Log in to your Cloudflare dashboard and navigate to My Profile > API Tokens for user tokens. If you need Account Tokens, go to Manage Account > API Tokens.
Create a New Token:
- Click on Create Token and then choose Create Custom Token to set specific permissions.
Name Your Token:
- Provide a descriptive name for your token to easily identify its purpose later.
Set Permissions:
- In the Permissions section, choose Zone and then select Zone again to grant read permissions.
- Next, add another permission by choosing Zone and then DNS, giving write permissions for DNS changes.
Select Your Zone:
- Choose the specific zone (domain) for which you want to apply these settings.
Create the Token:
- After reviewing your settings, click Create Token.
Secure Your Token:
- Keep your token safe, as it provides access to your Cloudflare account. Store it in your GitHub secrets as CF_DNS_API_TOKEN, which will be used during deployment and for creating SSL certificates.
Docker Compose YAML file
services:
  traefik:
    image: traefik:latest
    # The Traefik image, using the latest version from Docker Hub.
    container_name: traefik
    # Names the container 'traefik' for easier identification.
    restart: unless-stopped
    # Restarts the container automatically unless it is explicitly stopped.
    security_opt:
      - no-new-privileges:true
      # Enforces no privilege escalation for better security.
    networks:
      - docker_network
      # Connects Traefik to the 'docker_network' network.
    ports:
      - 80:80
      # Maps the container's port 80 (HTTP) to port 80 on the host machine.
      - 443:443
      # Maps the container's port 443 (HTTPS) to port 443 on the host machine.
    environment:
      - CF_DNS_API_TOKEN=${CF_DNS_API_TOKEN}
      # The Cloudflare DNS API token, required for the DNS challenge and SSL management.
      # If you use a Cloudflare API key instead of a token, provide your email as well:
      # - CF_API_EMAIL=user@example.com
      # - CF_API_KEY=YOUR_API_KEY
    volumes:
      - /etc/localtime:/etc/localtime:ro
      # Syncs the container's time with the host's.
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Lets Traefik interact with Docker and automatically detect running containers.
      - /home/ubuntu/traefik/traefik.yml:/traefik.yml:ro
      # Traefik's static configuration file from the host machine.
      - /home/ubuntu/traefik/acme.json:/acme.json
      # Storage for the SSL certificates Traefik obtains.
    labels:
      - 'traefik.enable=true'
      # Enables Traefik for this container.
      - 'traefik.http.routers.traefik.entrypoints=http'
      # Defines the HTTP entry point for the Traefik router.
      - 'traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https'
      # Middleware that redirects HTTP traffic to HTTPS for security.
      - 'traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https'
      # Adds a header to forwarded requests indicating the original request used HTTPS.
      - 'traefik.http.routers.traefik.middlewares=traefik-https-redirect'
      # Applies the HTTPS redirection middleware to the Traefik router.
      - 'traefik.http.routers.traefik-secure.entrypoints=https'
      # Defines the HTTPS entry point for the Traefik router.
      - 'traefik.http.routers.traefik-secure.tls=true'
      # Enables TLS (HTTPS) for secure communication.
      - 'traefik.http.routers.traefik-secure.tls.certresolver=cloudflare'
      # Uses the 'cloudflare' resolver defined in traefik.yml to obtain SSL certificates.
      - 'traefik.http.routers.traefik-secure.tls.domains[0].main=scicommons.org'
      # The main domain for which the SSL certificate is generated (scicommons.org).
      - 'traefik.http.routers.traefik-secure.service=api@internal'
      # Routes this router to Traefik's internal API service.

  scicommons:
    container_name: scicommons
    # Names the container 'scicommons' for easier identification.
    image: ghcr.io/m2b3/scicommons-frontend:main
    # The SciCommons frontend image from the GitHub Container Registry.
    volumes:
      - /etc/localtime:/etc/localtime:ro
      # Syncs the container's time with the host's.
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Note: not strictly required here; Traefik discovers containers via its own socket mount.
    labels:
      - 'traefik.enable=true'
      # Enables Traefik for this container.
      - 'traefik.http.routers.scicommons.entrypoints=http'
      # Defines the HTTP entry point for the scicommons router.
      - 'traefik.http.routers.scicommons.rule=Host(`scicommons.org`)'
      # Routing rule: only traffic with the host `scicommons.org` is sent to this container.
      - 'traefik.http.middlewares.scicommons-https-redirect.redirectscheme.scheme=https'
      # Middleware that redirects HTTP traffic to HTTPS for the SciCommons service.
      - 'traefik.http.routers.scicommons.middlewares=scicommons-https-redirect'
      # Applies the HTTPS redirection middleware to the SciCommons router.
      - 'traefik.http.routers.scicommons-secure.entrypoints=https'
      # Defines the HTTPS entry point for the SciCommons router.
      - 'traefik.http.routers.scicommons-secure.rule=Host(`scicommons.org`)'
      # Routing rule for secure HTTPS requests to `scicommons.org`.
      - 'traefik.http.routers.scicommons-secure.tls=true'
      # Enables TLS (HTTPS) for the secure SciCommons route.
      - 'traefik.http.routers.scicommons-secure.service=scicommons'
      # Points the secure router at the 'scicommons' service defined below.
      - 'traefik.http.services.scicommons.loadbalancer.server.port=3000'
      # The port (3000) where the SciCommons frontend listens inside the container.
      - 'traefik.docker.network=docker_network'
      # The Docker network Traefik should use to reach this container (must match the network defined below).
    restart: always
    # Automatically restarts the container if it crashes or stops.
    ports:
      - 3000:3000
      # Maps the host machine's port 3000 to the container's port 3000.
    networks:
      - docker_network
      # Connects the container to the 'docker_network' network.

networks:
  docker_network:
    driver: bridge
    # A user-defined bridge network that allows the containers to communicate.
Change the paths to traefik.yml and acme.json according to your deployment.
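Before the first deployment, you can ask Compose to validate the file and print the fully resolved configuration; this catches indentation mistakes and unset variables early (the file name matches the one used in this project):

```shell
# Validate the compose file and show the resolved configuration.
# Fails with a parse error if the YAML is malformed; warns on unset variables.
docker compose -f docker-compose.prod.yml config
```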
Creating a Deployment Pipeline with GitHub Actions and GitHub Packages
name: Deploy Next.js to Server

on:
  workflow_dispatch:
  # push:
  #   branches:
  #     - main # Runs this workflow on every push to the 'main' branch.

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: m2b3/scicommons-frontend

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      id-token: write
    steps:
      # Step 1: Checkout the repository
      - name: Checkout code
        uses: actions/checkout@v4

      # Step 2: Log in to GitHub Container Registry
      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      # Step 3: Set up Docker metadata (tags and labels)
      - name: Extract Docker metadata
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}

      # Step 4: Build and push Docker image to GitHub Packages
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          build-args: |
            NEXT_PUBLIC_BACKEND_URL=${{ secrets.NEXT_PUBLIC_BACKEND_URL }}

      # Step 5: Setup SSH keys and known_hosts
      - name: Setup SSH
        run: |
          mkdir -p ~/.ssh/
          echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          ssh-keyscan -H ${{ secrets.SERVER_HOST }} >> ~/.ssh/known_hosts

      # Step 6: Transfer docker-compose.prod.yml to the server
      - name: Transfer docker-compose.prod.yml to SERVER
        run: |
          scp -i ~/.ssh/id_rsa docker-compose.prod.yml ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_HOST }}:/home/ubuntu/deployment/docker-compose.prod.yml

      # Step 7: Deploy Docker image to the SERVER via SSH
      - name: Deploy to SERVER
        run: |
          ssh -i ~/.ssh/id_rsa ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_HOST }} << 'EOF'
          echo "Pulling the latest image..."
          docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:main
          cd deployment/
          # Create or clear the .env file
          > .env
          # Append environment variables one by one
          echo "NEXT_PUBLIC_BACKEND_URL=${{ secrets.NEXT_PUBLIC_BACKEND_URL }}" >> .env
          echo "CF_DNS_API_TOKEN=${{ secrets.CF_DNS_API_TOKEN }}" >> .env
          # Ensure the environment variables are written before proceeding
          echo ".env file created with environment variables."
          # Stop the current containers (pass -f so Compose finds the prod file)
          echo "Stopping the current containers..."
          docker compose -f docker-compose.prod.yml down || true
          # Start the containers using Docker Compose
          echo "Starting containers using Docker Compose..."
          docker compose -f docker-compose.prod.yml up -d --build
          echo "Cleaning up old images..."
          docker image prune -f
          echo "Deployment complete."
          EOF
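Because the workflow above is triggered by workflow_dispatch, it can be started from the repository's Actions tab or, if you have the GitHub CLI installed and authenticated, from a terminal. The workflow file name below is an assumption; substitute whatever yours is called.

```shell
# Trigger the manual deployment workflow from the command line.
# 'deploy.yml' is a placeholder for the workflow's actual file name.
gh workflow run deploy.yml

# Follow the run's progress as it executes.
gh run watch
```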
I use scp (Secure Copy) to transfer the updated docker-compose.prod.yml file to the server. This ensures the server always deploys with the latest configuration, allowing seamless updates without manual intervention.
NOTE: This project does not handle sensitive environment variables. If your application does manage sensitive data, it's crucial to keep your Docker image private or choose an alternative image registry to store your Docker images securely, such as Docker Hub.
Generating SSH Keys
Step 1: Generate SSH Key Pair
Open a Terminal:
On macOS or Linux, you can use the built-in terminal.
On Windows, you can use PowerShell or Command Prompt. Alternatively, use Git Bash if installed.
Generate the SSH Key Pair:
Run the following command to generate the key pair:
ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
- -t rsa: Specifies the type of key to create (RSA).
- -b 4096: Specifies the number of bits in the key (4096 bits is a strong default).
- -C "your_email@example.com": Adds a comment, typically your email address, to identify the key.
Follow the Prompts:
- When prompted to enter a file in which to save the key, you can press Enter to accept the default location (~/.ssh/id_rsa). If you want to store the key in a different location, specify the path (e.g., ~/.ssh/my_custom_key).
- You may also be prompted to enter a passphrase, which adds an extra layer of security. If you don't want to use a passphrase, just press Enter.
Key Files Created:
After completion, two files will be generated:
- Private Key: ~/.ssh/id_rsa (keep this secret and secure)
- Public Key: ~/.ssh/id_rsa.pub (this can be shared with the server)
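If you want to see both files appear without the interactive prompts, ssh-keygen can also run non-interactively into a scratch directory (the path and email comment below are placeholders; -N "" sets an empty passphrase, which you will want for a key used by CI, since the workflow's Setup SSH step writes the private key straight to ~/.ssh/id_rsa with no way to enter a passphrase):

```shell
# Generate a throwaway RSA key pair without prompts.
# -N "" : empty passphrase; -f : output path; -q : quiet mode.
mkdir -p ./ssh-demo
ssh-keygen -t rsa -b 4096 -C "demo@example.com" -N "" -f ./ssh-demo/id_rsa -q

# Both the private and the public key should now exist.
ls ./ssh-demo
```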
Step 2: Copy the Public Key to the Server
Display the Public Key:
To view the contents of your public key, run:
cat ~/.ssh/id_rsa.pub
- If you used a custom filename, replace id_rsa.pub with your custom file name.
Copy the Public Key:
Copy the entire output of the command (the public key).
Log in to the Server:
Use SSH to connect to your server:
ssh your_username@your_server_ip
- Replace your_username with your actual username and your_server_ip with the server's IP address.
Create the .ssh Directory:
If it doesn't already exist, create the .ssh directory in your home folder on the server:
mkdir -p ~/.ssh
Add the Public Key to authorized_keys:
Open the authorized_keys file in a text editor (create it if it doesn't exist):
nano ~/.ssh/authorized_keys
Paste the public key you copied earlier into this file.
Save the file and exit the text editor (in nano, press CTRL + X, then Y, then Enter).
Set Correct Permissions:
It's important to set the correct permissions for the .ssh directory and the authorized_keys file:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
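These permission bits can be verified with a quick local simulation before touching the real server (the directory name is illustrative, and the key line below is a placeholder standing in for your actual public key; stat -c is the GNU coreutils form):

```shell
# Simulate the server-side layout in a scratch directory.
mkdir -p ./server-demo/.ssh
echo "ssh-rsa AAAAB3...placeholder demo@example.com" >> ./server-demo/.ssh/authorized_keys
chmod 700 ./server-demo/.ssh
chmod 600 ./server-demo/.ssh/authorized_keys

# sshd ignores authorized_keys when these permissions are more open than this.
stat -c '%a %n' ./server-demo/.ssh ./server-demo/.ssh/authorized_keys
```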
Step 3: Verify SSH Access
Exit the Server:
If you're still logged into the server, exit:
exit
Reconnect Using SSH:
Now, try reconnecting to the server using SSH. You should be able to log in without entering a password if everything was set up correctly:
ssh your_username@your_server_ip
GitHub Secrets to Store
CF_DNS_API_TOKEN # token generated with cloudflare
NEXT_PUBLIC_BACKEND_URL # holds backend URL
SERVER_HOST # hostname or IP address of the server
SERVER_USER # username used to SSH into the server
SSH_PRIVATE_KEY # private key used to authenticate the SSH connection
Storing Secrets in GitHub
To store these secrets in GitHub:
Go to your GitHub repository.
Click on Settings.
Navigate to Secrets and variables > Actions.
Click on New repository secret.
Enter the secret name (e.g., CF_DNS_API_TOKEN) and its corresponding value.
Click Add secret.
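If you prefer the terminal over the web UI and have the GitHub CLI installed and authenticated, the same secrets can be set with gh (the values below are placeholders):

```shell
# Store a secret in the current repository via the GitHub CLI.
gh secret set CF_DNS_API_TOKEN --body "your-cloudflare-token"

# Or read the value from a file to keep it out of shell history:
gh secret set SSH_PRIVATE_KEY < ~/.ssh/id_rsa
```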
Conclusion
In this blog, we explored the process of configuring a deployment pipeline using GitHub Actions and GitHub Packages, along with setting up Traefik and integrating Cloudflare for DNS management. We also discussed generating and managing SSH keys for secure access to our server, ensuring a streamlined deployment process for our application.
By leveraging these tools and techniques, we can enhance our development workflow, automate deployments, and maintain a secure environment for our applications.
Feel free to check out the code and configurations used in this project in its GitHub repository.
Thank you for reading!