Story time
At the company I currently work for, our CI pipelines run on a dedicated build server that we own and manage on-premises. This server runs Atlassian Bamboo and is configured to run builds using agents that run directly inside the host OS, which means builds share, and depend on, components installed on the host.
This configuration had been working fine for us, and we rarely ran into issues with it. Last week, however, one of the CI pipelines suddenly started to fail, and at the worst possible time: a day before a hard deadline. We didn't know what had gone wrong with the build server. We had no idea whether someone had made a change to the host OS that caused our build to throw this random error, and we had no time to fully investigate the issue.
In the interest of time, and to deploy the site before the deadline, I used my co-worker's dev machine to run the same CI commands we use on the build server in order to deploy the site. This is not great. Trust me, I know. But we didn't have the luxury of time to come up with a more elegant solution. We literally had to fall back to an almost manual deployment.
Having multiple CI pipelines running on a single server is OK. What's not OK is having them share the host OS, because a newly created pipeline can accidentally break the existing ones.
So I decided it was time to start containerizing our builds.
Why
By containerizing our builds, we can be sure that any CI pipeline we have, no matter what configuration it needs, will never mess up other pipelines, because each one runs in its own container, isolated from the others.
This means I can run my build knowing that no matter how bad my configs are, they will never affect other pipelines. And by containerizing the pipeline, I can store the config files in the git repo and have those configs versioned alongside the project code-base.
What I'll cover
This post covers creating a build environment image in Docker and how to use that image to build your code base locally on your own machine. In a follow-up post, I hope to cover how to use this with Atlassian's Bamboo.
Building our custom docker image
I assumed the Docker image registry would have a pre-made image ready that fits my requirements:
- Windows Based
- Has DotNet Framework 4.X SDK
- Has NodeJS 10.X and NPM
As far as I can tell, there is no such image on the official Docker registry. I don't know if I just didn't look hard enough or was simply a bit lazy. Either way, it turns out that creating my own image for this is quite easy.
Requirements
Obviously, you'll need Docker installed on your machine. You can use the Community Edition of Docker for Windows.
Make sure that your Docker installation is switched to Windows Containers. The reason for this requirement is that .NET Framework 4.X requires a Windows host, and the official SDK image from Microsoft doesn't run on Linux Containers. To switch your Docker instance to Windows Containers, right-click the Docker icon in your task bar and select "Switch to Windows Containers". The Docker engine will restart during this process, which takes a minute or so.
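If you want to double-check which mode the engine is in, you can ask the Docker CLI to print the server's OS; after switching, this should output windows:

docker version --format '{{.Server.Os}}'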
I am using the official .NET Framework SDK container image from Microsoft: mcr.microsoft.com/dotnet/framework/sdk.
This image is based on Windows Server Core and has the SDK installed on top of it. It also contains nuget and the Visual Studio Build Tools (MSBuild).
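If you'd like to poke around the base image before building on top of it, you can pull it down ahead of time:

docker pull mcr.microsoft.com/dotnet/framework/sdk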
What it doesn't have is NodeJS, which I need because the site I'm trying to build has a build step that runs NPM commands to build the UI assets.
So how can we modify that image?
Technically, we can't; Docker only allows us to build new ones. However, the above image will serve as our base, and we will just add NodeJS on top of it.
To build your own image, you'll need to create a Dockerfile. Here's the Dockerfile for the build environment image I created:
# Specify a base image. In this case, I'm using the .NET Framework SDK image from Microsoft
FROM mcr.microsoft.com/dotnet/framework/sdk AS DOTNET_SDK
# Tell Docker that I want to use PowerShell to run my commands
SHELL ["powershell", "-Command"]
# Install Scoop (a Windows package manager) from scoop.sh (this command is on their homepage)
RUN iwr -useb get.scoop.sh | iex
# Tell Scoop to download and install NodeJS
RUN scoop install nodejs
# Set a working directory for us on the root drive
WORKDIR /app
OK, so what happened here? The base image I'm pulling has everything I need to build the back-end code of the site. To build the front-end assets, however, I need NodeJS, and the easiest way I could think of to add NodeJS to the image was to use Scoop.
The next step is to actually build the image. To do this, save the above file and run this command:
docker build --tag=my-image-name --file path\to\dockerfile .
This will take some time to finish, as Docker has to download the SDK base image, which is ~1.5GB.
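Once the build finishes, you can confirm the new image is available locally:

docker image ls my-image-name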
Next, we can run a quick test to make sure that the image we created has everything we need. To do this, we are going to run a command that launches a container based on our image and then "SSH" into it:
docker run --rm -it my-image-name
--rm tells Docker to remove the container once we exit it.
-it makes this container an interactive process that accepts input from us and displays output directly in our shell.
When you run that command, your shell will look like this:
Microsoft Windows [Version 10.0.18362.356]
(c) 2019 Microsoft Corporation. All rights reserved.
C:\app>
If you type MSBuild and hit Enter, you will see MSBuild execute against an empty directory and complain about it.
Do the same for nuget and you'll get the help output.
Finally, type node and you will start a new NodeJS REPL session.
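If you'd rather script these checks than run them interactively, you can execute the same tools in a throwaway container. This is a minimal sketch, assuming the Scoop shims ended up on the PATH as they did in the interactive session above:

docker run --rm my-image-name powershell -Command "node --version; npm --version; msbuild /version"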
At this stage, we have successfully created a Docker image with all the tools we need to build an ASP.NET MVC project and all of its front-end assets using NodeJS and NPM.
Next
In the next post, I'll show how to actually compile some code in that container and grab the output from it.
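As a rough preview of the idea: mount the project source into the container and run MSBuild against it from the outside. This is just a sketch; the solution path is a placeholder and the exact command will depend on your project:

docker run --rm -v ${PWD}:C:\app my-image-name msbuild C:\app\MySolution.sln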