This post will walk you through a demo I presented at the SCaLE21X conference. The session was titled "Strengthening the Secure Supply Chain with Project Copacetic, Eraser, and FluxCD," and this step-by-step guide will enable you to do it on your own.
Prerequisites
To begin, you will need to have the following:
- Docker Desktop to run a Kubernetes cluster locally
- Git to clone the demo repository
- GitHub account
We will also be using the following tools:
- KIND and kubectl
- GitHub CLI
- Kustomize
- Flux CLI
- Trivy
- Copacetic CLI
- Eraser
But don't worry about installing all of these tools right now. I will walk you through the installation process as we go. All you need to start is Docker Desktop and a Bash shell.
Note: I used a Mac on Apple Silicon for this demo. If you are using a different operating system, you may need to adjust the commands accordingly.
Install KIND and kubectl
First, you will need to create a Kubernetes cluster. You can use KIND, which stands for Kubernetes in Docker. It is a tool for running local Kubernetes clusters using containers as “nodes”.
Head over to the KIND documentation to install KIND on your local machine. On my Mac, I used Homebrew to install KIND:
brew install kind
Next, we need to install kubectl to interact with the cluster. Head over to the kubectl documentation to install the tool on your local machine. This is how I installed kubectl on my Mac using the curl command (you'll also need to make the binary executable and move it into your PATH, as described in the kubectl docs):
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl"
Create a local Kubernetes cluster
Once you have KIND installed, you can use the following commands to create a single-node Kubernetes cluster:
kind create cluster --name scale21x-demo
With the local Kubernetes cluster running and kubectl installed, run the following command to verify the cluster is running:
kubectl cluster-info --context kind-scale21x-demo
Install GitHub CLI
We'll be working with a GitHub repository and configuring GitHub Actions. I like to use the GitHub CLI, which makes it really easy to work with GitHub from the command line.
Head over to the GitHub CLI documentation and install the tool on your local machine. I installed the GitHub CLI with the following command:
brew install gh
With the GitHub CLI installed, run the following command to authenticate with GitHub:
gh auth login --scopes repo,workflow,read:packages,write:packages,delete:packages
Note: The login scopes listed above are required for the demo.
Export your GitHub username as an environment variable:
export GITHUB_USER=$(gh api user --jq .login)
We also want to be able to push container images to the GitHub Container Registry, so we need to authenticate Docker with it. You can use the following command:
gh auth token | docker login ghcr.io -u $GITHUB_USER --password-stdin
Fork and clone sample app repo
We will be using a sample application that I wrote for this demo. You can fork and clone the Azure-Samples/aks-store-demo repository by running the following command:
gh repo fork Azure-Samples/aks-store-demo --clone
cd aks-store-demo
When working with a forked repository, we need to set the default repo so that workflow commands target the fork rather than the original repository. You can use the following command to set the default repo:
gh repo set-default
Note: When prompted, select the forked repository.
Build sample app container
The sample app contains multiple applications, but we'll only focus on the store-front app. Let's build the store-front container and push it to the GitHub Container Registry.
Run the following command to build the container image:
docker build --label "org.opencontainers.image.source=https://github.com/$GITHUB_USER/aks-store-demo" -t ghcr.io/$GITHUB_USER/aks-store-demo/store-front:1.2.0 -t ghcr.io/$GITHUB_USER/aks-store-demo/store-front:latest ./src/store-front
Note: This may take a few minutes to complete.
Run the following commands to push the tagged container image to the GitHub Container Registry:
docker push ghcr.io/$GITHUB_USER/aks-store-demo/store-front:latest
docker push ghcr.io/$GITHUB_USER/aks-store-demo/store-front:1.2.0
Note: You may need to link the package registry to the repository. You can do this by following the instructions listed here.
Install Kustomize CLI
Kustomize is a neat tool for customizing Kubernetes configurations. It makes it very easy to manage and customize Kubernetes configurations including container images and tags. Head over to the Kustomize documentation to install Kustomize on your local machine. I installed Kustomize with the following command:
brew install kustomize
Edit store-front image source
Using the kustomize CLI, we can update the kustomization.yaml file to use the container image you just pushed to the GitHub Container Registry.
First, make sure you are in the root directory of the cloned repository then change into the directory where the kustomization.yaml file is located:
cd kustomize/overlays/dev
Now you can use the following command to update the kustomization.yaml file:
kustomize edit set image ghcr.io/azure-samples/aks-store-demo/store-front=ghcr.io/${GITHUB_USER}/aks-store-demo/store-front:1.2.0
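After running this, the images section of kustomization.yaml should end up looking something like the following (using the hypothetical username contoso; yours will differ):

```yaml
images:
- name: ghcr.io/azure-samples/aks-store-demo/store-front
  newName: ghcr.io/contoso/aks-store-demo/store-front
  newTag: 1.2.0
```

The name field matches the image reference in the base manifests, while newName and newTag tell Kustomize what to swap in.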
Commit the changes to the kustomization.yaml file and push the changes to the repository:
cd -
git add ./kustomize/overlays/dev/kustomization.yaml
git commit -m 'feat: update store-front image'
git push
Install Flux CLI
Next, you will need to install the Flux CLI so that you can bootstrap your Kubernetes cluster for GitOps. Head over to the Flux documentation to install Flux on your local machine. Again, I'm using Homebrew so I installed with the following command:
brew install fluxcd/tap/flux
Bootstrap the Kubernetes Cluster for Flux
We're ready to bootstrap the Kubernetes cluster, but first ensure that the following environment variables are set so that Flux can authenticate with GitHub on your behalf:
export GITHUB_USER=$(gh api user --jq .login)
export GITHUB_TOKEN=$(gh auth token)
Run the following command to bootstrap your Kubernetes cluster for GitOps:
flux bootstrap github \
--owner=$GITHUB_USER \
--repository=aks-store-demo \
--personal \
--private=false \
--path=./kustomize/overlays/dev \
--branch=main \
--reconcile \
--read-write-key \
--author-name=fluxcdbot \
--author-email=fluxcdbot@users.noreply.github.com \
--components-extra=image-reflector-controller,image-automation-controller
We don't need the GitHub token anymore, so you can unset the environment variable:
unset GITHUB_TOKEN
After a few minutes, the cluster will be reconciled. Run the following command to see the app running in the Kubernetes cluster:
kubectl get pods -n pets
Note: If you see a status of ImagePullBackOff for your store-front pod, it may be due to package visibility. In which case, you may need to link the package registry to the repository. You can do this by following the instructions listed here.
Install Trivy CLI
Next, you will need to install Trivy on your local machine. Trivy is a simple and comprehensive vulnerability scanner for containers. Head over to the Trivy documentation to install Trivy on your local machine. I installed Trivy with the following command:
brew install aquasecurity/trivy/trivy
Run a Trivy scan
Now that you have Trivy installed, you can use the following command to run a Trivy scan on the container images you pushed to the GitHub Container Registry:
trivy image --vuln-type os --ignore-unfixed ghcr.io/${GITHUB_USER}/aks-store-demo/store-front:1.2.0
You should see Trivy output that looks something like this:
Total: 2 (UNKNOWN: 0, LOW: 0, MEDIUM: 1, HIGH: 1, CRITICAL: 0)
┌──────────┬────────────────┬──────────┬────────┬───────────────────┬───────────────┬─────────────────────────────────────────────────────────────┐
│ Library │ Vulnerability │ Severity │ Status │ Installed Version │ Fixed Version │ Title │
├──────────┼────────────────┼──────────┼────────┼───────────────────┼───────────────┼─────────────────────────────────────────────────────────────┤
│ libexpat │ CVE-2023-52425 │ HIGH │ fixed │ 2.5.0-r0 │ 2.6.0-r0 │ expat: parsing large tokens can trigger a denial of service │
│ │ │ │ │ │ │ https://avd.aquasec.com/nvd/cve-2023-52425 │
│ ├────────────────┼──────────┤ │ │ ├─────────────────────────────────────────────────────────────┤
│ │ CVE-2023-52426 │ MEDIUM │ │ │ │ expat: recursive XML entity expansion vulnerability │
│ │ │ │ │ │ │ https://avd.aquasec.com/nvd/cve-2023-52426 │
└──────────┴────────────────┴──────────┴────────┴───────────────────┴───────────────┴─────────────────────────────────────────────────────────────┘
Now, let's re-run the command to output the results in JSON format:
trivy image --vuln-type os --ignore-unfixed -f json -o /tmp/store-front.1.2.0.json ghcr.io/${GITHUB_USER}/aks-store-demo/store-front:1.2.0
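The JSON report is also handy for ad-hoc queries with jq. Here's a sketch against a minimal, hand-made report shaped like Trivy's output (the real report at /tmp/store-front.1.2.0.json has many more fields):

```shell
# Minimal stand-in for a Trivy JSON report (real reports carry much more data)
cat > /tmp/sample-report.json <<'EOF'
{
  "Results": [
    {
      "Class": "os-pkgs",
      "Vulnerabilities": [
        {"VulnerabilityID": "CVE-2023-52425", "Severity": "HIGH", "FixedVersion": "2.6.0-r0"},
        {"VulnerabilityID": "CVE-2023-52426", "Severity": "MEDIUM", "FixedVersion": "2.6.0-r0"}
      ]
    }
  ]
}
EOF

# List the fixable CVE IDs found in OS packages
jq -r '.Results[] | select(.Class=="os-pkgs" and .Vulnerabilities!=null) | .Vulnerabilities[].VulnerabilityID' /tmp/sample-report.json

# Count them -- the same shape of query we'll use later to decide whether to patch
jq '[.Results[] | select(.Class=="os-pkgs" and .Vulnerabilities!=null) | .Vulnerabilities[]] | length' /tmp/sample-report.json
```

For the sample above, the first query prints the two CVE IDs and the second prints 2.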
Install Copacetic CLI
So we have OS vulnerabilities in the container image. We can patch them with Project Copacetic. Copacetic (copa) is a CLI tool that patches container image vulnerabilities directly, using the reports produced by vulnerability scanners like Trivy. Head over to the Copacetic documentation to install Copacetic on your local machine. I installed Copacetic with the following command:
brew install copa
Patch the vulnerability with copa
With Copacetic installed, you can use the following command to patch the vulnerability in the container image:
copa patch -i ghcr.io/${GITHUB_USER}/aks-store-demo/store-front:1.2.0 -r /tmp/store-front.1.2.0.json -t 1.2.1
Now if you re-run the Trivy scan, you should see that the vulnerability has been patched:
trivy image --vuln-type os --ignore-unfixed ghcr.io/${GITHUB_USER}/aks-store-demo/store-front:1.2.1
And if you run the following command, you should see the history of the container image with the patched layer:
docker history ghcr.io/${GITHUB_USER}/aks-store-demo/store-front:1.2.1
Push the patched container image to the GitHub Container Registry:
docker push ghcr.io/${GITHUB_USER}/aks-store-demo/store-front:1.2.1
Configure Flux image update automation
Now that you have patched the vulnerability in the container image, you can configure Flux to automatically update the container image when a new version is available. Use the following commands:
# tells flux where the container image is stored
flux create image repository store-front \
--image=ghcr.io/$GITHUB_USER/aks-store-demo/store-front \
--interval=1m
# tells flux how to find latest version of the container image
flux create image policy store-front \
--image-ref=store-front \
--select-semver='>=1.0.0'
# tells flux where to make edits based on new version of the container image
flux create image update store-front \
--git-repo-ref=flux-system \
--git-repo-path="./kustomize/overlays/dev" \
--checkout-branch=main \
--author-name=fluxcdbot \
--author-email=fluxcdbot@users.noreply.github.com \
--commit-template="{{range .Updated.Images}}{{println .}}{{end}}"
One last step is to "mark" the manifest so that Flux will be able to update the image in the right spot within the kustomization.yaml when a new version is available. You can use the following command to mark the manifest:
sed -i '' -e "s^newName: ghcr.io/${GITHUB_USER}/aks-store-demo/store-front^newName: ghcr.io/${GITHUB_USER}/aks-store-demo/store-front # {\"\$imagepolicy\": \"flux-system:store-front:name\"}^g" ./kustomize/overlays/dev/kustomization.yaml
sed -i '' -e "s^newTag: 1.2.0^newTag: 1.2.0 # {\"\$imagepolicy\": \"flux-system:store-front:tag\"}^g" ./kustomize/overlays/dev/kustomization.yaml
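Those sed one-liners can be hard to parse at a glance. Here's a sketch of the same substitutions applied to a throwaway sample file (contoso is a hypothetical username; the real file lives in kustomize/overlays/dev):

```shell
# Throwaway sample of the kustomization.yaml images section
cat > /tmp/kustomization-sample.yaml <<'EOF'
images:
- name: ghcr.io/azure-samples/aks-store-demo/store-front
  newName: ghcr.io/contoso/aks-store-demo/store-front
  newTag: 1.2.0
EOF

# Stream the substitutions (no -i) so this works with both GNU and BSD sed;
# "&" in the replacement re-inserts the matched text, then appends the marker
sed -e 's^newName: ghcr.io/contoso/aks-store-demo/store-front^& # {"$imagepolicy": "flux-system:store-front:name"}^' \
    -e 's^newTag: 1.2.0^& # {"$imagepolicy": "flux-system:store-front:tag"}^' \
    /tmp/kustomization-sample.yaml > /tmp/kustomization-marked.yaml

cat /tmp/kustomization-marked.yaml
```

The output shows the two {"$imagepolicy": ...} comments appended to the newName and newTag lines, which is exactly where Flux will write updates.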
Commit the changes to the kustomization.yaml file and push the changes to the repository:
git add ./kustomize/overlays/dev/kustomization.yaml
git commit -m "feat: adding flux image update markers"
git push
Run the following commands to force a reconciliation of the Flux controllers:
flux reconcile image repository store-front
flux reconcile image update store-front
flux reconcile kustomization flux-system
If all went well, you should see that the image has been updated to version 1.2.1:
kubectl get deploy store-front -n pets -o yaml | grep image:
Great! Now you have Flux configured to automatically update the container image when a new version is available. Next, we need to make sure the image is automatically patched when a vulnerability is found.
Automatically patch vulnerabilities with GitHub Actions
We can configure GitHub Actions to automatically patch vulnerabilities with Copacetic. We'll rely on the copa-action that is available in the GitHub Marketplace.
Create a new file called .github/workflows/patch-container-images.yaml.
touch .github/workflows/patch-container-images.yaml
Open the file and add the following content:
name: patch-container-images

on:
  schedule:
    - cron: "30 0 * * 2"
  workflow_dispatch:

permissions:
  contents: read
  packages: write

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        apps:
          - "store-front"
    steps:
      - name: Authenticate with GitHub CLI
        run: |
          gh auth login --with-token <<< "${{ github.token }}"

      - name: Get the latest tag
        id: semver_tag
        run: |
          tag=$(gh api user/packages/container/aks-store-demo%2F${{ matrix.apps }}/versions --jq '.[0] | .metadata.container.tags[0]')
          echo "tag=$tag" >> $GITHUB_OUTPUT

      - name: Bump the tag
        id: bump_tag
        run: |
          tag=$(echo ${{ steps.semver_tag.outputs.tag }} | awk -F. -v OFS=. '{$NF = $NF + 1;} 1')
          echo "tag=$tag" >> $GITHUB_OUTPUT

      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: "image"
          format: "table"
          ignore-unfixed: true
          vuln-type: "os"
          image-ref: ghcr.io/${{ github.repository }}/${{ matrix.apps }}:latest

      - name: Generate Trivy Report
        uses: aquasecurity/trivy-action@062f2592684a31eb3aa050cc61e7ca1451cecd3d # v0.18.0
        with:
          scan-type: "image"
          format: "json"
          output: "report.json"
          ignore-unfixed: true
          vuln-type: "os"
          image-ref: ghcr.io/${{ github.repository }}/${{ matrix.apps }}:latest

      - name: Check Vuln Count
        id: vuln_count
        run: |
          report_file="report.json"
          vuln_count=$(jq 'if .Results then [.Results[] | select(.Class=="os-pkgs" and .Vulnerabilities!=null) | .Vulnerabilities[]] | length else 0 end' "$report_file")
          echo "vuln_count=$vuln_count" >> $GITHUB_OUTPUT

      - name: Copa Action
        if: steps.vuln_count.outputs.vuln_count != '0'
        id: copa
        uses: project-copacetic/copa-action@3843e22efdca421adb37aa8dec103a0f1db68544 # v1.2.1
        with:
          image: ghcr.io/${{ github.repository }}/${{ matrix.apps }}:latest
          image-report: "report.json"
          patched-tag: "patched"

      - name: Login to GHCR
        if: steps.copa.conclusion == 'success'
        id: login
        uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d # v3.0.0
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ github.token }}

      - name: Push patched image
        if: steps.login.conclusion == 'success'
        run: |
          docker tag ${{ steps.copa.outputs.patched-image }} ghcr.io/${{ github.repository }}/${{ matrix.apps }}:${{ steps.bump_tag.outputs.tag }}
          docker push ghcr.io/${{ github.repository }}/${{ matrix.apps }}:${{ steps.bump_tag.outputs.tag }}
This workflow will run every Tuesday at 12:30 AM UTC (do you remember "Patch Tuesdays"? 😆) and can also be triggered manually. It looks up the most recent image tag and bumps the semver, which is then used when tagging the next container image. When patching container images, it is important that copa does not patch on top of previously patched versions, so for this demo we'll always patch from the latest version. The workflow scans the container image for vulnerabilities, and if any are found, Copa patches the image and pushes it to the GitHub Container Registry.
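The tag-bump step is worth a closer look in isolation. The awk program splits the tag on dots and increments the last field, which is a quick-and-dirty patch-version bump (note it does not handle pre-release suffixes like -rc.1):

```shell
# Quick patch-version bump: split on ".", increment the last field, rejoin with "."
bump() {
  echo "$1" | awk -F. -v OFS=. '{$NF = $NF + 1;} 1'
}

bump 1.2.0   # prints 1.2.1
bump 1.9.9   # prints 1.9.10
```

The trailing 1 in the awk program is a pattern that is always true, so awk prints each (modified) record.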
Commit and push the changes to the repository:
git add .github/workflows/patch-container-images.yaml
git commit -m "ci: add patch-container-images workflow"
git push
Run the workflow
gh workflow run patch-container-images.yaml
View the workflow run
gh run watch
Note: If you run into an error with the workflow where the message states denied: permission_denied: write_package, you will need to configure the store-front package settings to allow Actions repository access. See here for additional information.
A few minutes after the workflow completes, you should see that the container image has been patched and pushed to the GitHub Container Registry and the deployment has been updated in the Kubernetes cluster.
kubectl get deploy store-front -n pets -o yaml | grep image:
Cleaning up cluster images with Eraser
We're almost done. We just need to clean up the vulnerable container images from the Kubernetes nodes. If you run the following command, you will see that the vulnerable container image is still present on the Kubernetes nodes:
kubectl get nodes -o json | jq '.items[].status.images[].names | last' | grep store-front
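If the jq filter looks opaque: each image on a node carries a names list (typically a digest reference followed by a tagged reference), and last picks the tagged one. Here's a sketch against a hand-made slice of the node JSON (image names are made up for illustration):

```shell
# Hand-made slice of `kubectl get nodes -o json` output
cat > /tmp/sample-nodes.json <<'EOF'
{
  "items": [
    {
      "status": {
        "images": [
          {"names": ["ghcr.io/contoso/aks-store-demo/store-front@sha256:aaa111", "ghcr.io/contoso/aks-store-demo/store-front:1.2.0"]},
          {"names": ["docker.io/library/nginx@sha256:bbb222", "docker.io/library/nginx:1.25"]}
        ]
      }
    }
  ]
}
EOF

# `last` selects the final (tagged) entry of each image's names list
jq '.items[].status.images[].names | last' /tmp/sample-nodes.json | grep store-front
```

For this sample, the pipeline prints the single tagged store-front reference.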
This is where we install Eraser, a tool for automating the deletion of vulnerable container images from Kubernetes nodes, into the Kubernetes cluster.
Run the following command to install Eraser into the Kubernetes cluster:
kubectl apply -f https://raw.githubusercontent.com/eraser-dev/eraser/v1.3.1/deploy/eraser.yaml
After a few minutes, you should see that the vulnerable container image has been deleted from the Kubernetes nodes:
kubectl get nodes -o json | jq '.items[].status.images[].names | last' | grep store-front
You should only see the patched container image on the Kubernetes nodes.
Clean up local machine
When you are done, you can delete the Kubernetes cluster by running the following command:
kind delete cluster --name scale21x-demo
Conclusion
In this post, you learned how to strengthen the secure supply chain with Trivy, Copacetic, Eraser, Flux, and GitHub Actions. It is easy to get lost in the sea of tools and technologies available for securing your supply chain, but with the right tools and processes in place, you can ensure that your container images are secure and up to date, all in an automated fashion 🚀
I hope you found this post helpful and that you are able to use the steps outlined here in your own environments. If you have any questions or feedback, please feel free to leave a comment below or reach out to me on Twitter or LinkedIn.
Peace ✌️