Table of Contents
- Life Before Containers
- Introducing the Operations Box
- Building a Docker Container Image
- Running a Container
- Using Ansible inside a Container
Life Before Containers
As someone who focuses on systems development instead of application development, I didn't see how containers would fit into my daily workflow. But I was unsatisfied with my development experience, which was heavy and slow, and wanted to see if containers could improve it. At the time I had an entire Vagrant lab consisting of two Linux virtual machines and one Windows virtual machine. Vagrant would stand up an Ansible control server and two target machines, one Windows and one Linux, to run playbooks against.
After I was done developing a playbook I'd have to commit it, push it, SSH somewhere else, pull it down, then run it again. If I had to debug it, I was using vi with no IDE to save me from the whitespace hell I was about to enter. And sometimes I'd forget to update the other Ansible environments and my code would fail. It was kind of a nightmare, and it made me wonder why I bothered with Ansible. I wanted a better workflow. No, I needed a better workflow.
Summary of Misery
- Very heavy and slow to rebuild the development environment
- Unable to use an IDE outside of the lab environment
- Lots of git'ing around
- Inconsistent Ansible environments
- I :q
Introducing the Operations Box (OpsBox)
I had used containers before, mainly to build lab environments for applications like TeamCity and OctopusDeploy: things that already had solid base images. I had not considered using them as a replacement for my development environment until, one day, a co-worker shared the article Ansible, AWS CLI, and Kubectl in a Portable Docker OpsBox. It introduced me to the idea of an Operations Box, or OpsBox.
The article walks you through how they wrote a Dockerfile with instructions for setting up their development environment, which for them was the AWS CLI tools, Ansible, and, at the time, AWS's kubectl for Kubernetes. I thought, "Well, I don't need the AWS and kubectl stuff just yet. What if I write my own?" So I did. It didn't take long until I ran into an issue. I put the error I was getting into my team chat and asked for help. Funny enough, one team member asked, "What are you trying to do?" I said, "Building a Docker image that has Ansible in it," to which he replied, "Oh, I already did that." After using the container for Ansible development for about two minutes, I started to evangelize it.
Why Use a Container
My favorite part about using a container is the ability to mount a volume to it from my local machine, which means I can share an Ansible repository and make changes directly to the playbooks without having to git commit, push, and pull the code around. I can simply make changes and run the playbooks against a dev environment before pushing them into a release pipeline. Now, the first valid question here is "Why not just install Ansible on your laptop for local development?" Great question; the answer is my second favorite part. Using a container instead of my laptop means my entire team has a consistent development environment. We're much more likely to run into the same problems, and when we do encounter and fix those problems, the fix applies to the entire team.
Another reason I prefer the container is that it decouples the Ansible environment from the target environment. A target environment in this context is a cluster of servers, typically all in the same Active Directory domain, that Ansible is targeting. It is common practice in system administration to replicate all the infrastructure components per environment. The portability of the container proved we didn't need to do that, which in the end gave us fewer things to manage, update, and patch. That reduction in management overhead led to a much more stable and consistent Ansible environment.
The last amazing benefit I'll mention is the ability to recreate your development environment in seconds. Have you ever been working on some automation and it works everywhere except from one specific machine? You roll up your sleeves and start debugging, using the OSI model as your troubleshooting guide, and after about an hour you realize it's some weird environment issue with the machine you're running it from. Because the container is immutable, if I now run into a weird DNS issue I just exit the container, which deletes it for me, and run a new one. My first troubleshooting step now is to refresh my development environment to ensure it's as clean as possible.
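The refresh itself is nothing more than leaving the container and starting a new one. A quick sketch of that loop (the --rm flag and the image are covered later in this post):

# inside the suspect container
exit
# back on your machine, start a clean one
docker run -it --rm duffney/ansibleopsbox:latest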
Summary of Benefits
- Consistent development experience for you and your team
- Decouples the Ansible environment from the target environment
- Portability reduces management overhead
- Immutable manages the mutable
Common Questions
Why not install Ansible locally? Why use a container?
- Containers offer a consistent environment for my entire team.
What about the production environment? Surely you're not running everything manually?
- After the changes are tested against a development environment, a pull request is sent in and merged, at which point a release pipeline is in charge of introducing the change to the infrastructure. The deployment step of that release pipeline uses the same container image as the development environment, keeping the two the same (see the sketch after this list).
How do you manage changes to the Dockerfile and the container?
- Pull requests
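For a concrete picture, here's a hypothetical sketch of such a deployment step. The playbook name site.yml is a placeholder, the service principal values are assumed to come from the pipeline's secret store, and each docker run flag is explained later in this post:

docker run --rm -v "$(pwd)":/sln -w /sln \
-e "AZURE_SUBSCRIPTION_ID=$AZURE_SUBSCRIPTION_ID" \
-e "AZURE_CLIENT_ID=$AZURE_CLIENT_ID" \
-e "AZURE_SECRET=$AZURE_SECRET" \
-e "AZURE_TENANT=$AZURE_TENANT" \
duffney/ansibleopsbox:1.0 \
ansible-playbook site.yml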
The rest of this blog post will walk you through how to set up an OpsBox for Ansible development against Azure resources. The idea can be applied to any other infrastructure-as-code tooling, be it Terraform, the AWS CLI, VMware PowerCLI, etc. There are two main components: the tooling and the platform. You'll just have to build the container to fit your environment's requirements.
Building a Docker Container Image
In order to build a container image you first start with a Dockerfile. A Dockerfile is a set of instructions Docker uses to build the layers of the container image. I understand this might be uncharted territory for some, but it's really not that different from a bootstrap script or a configuration for a virtual machine.
FROM
The Dockerfile starts by declaring the base image to use, just like when you create a new virtual machine from a golden image. This is the foundation of the container. Ansible runs on a few different Linux distributions, but for this article I've chosen CentOS. The first line will read FROM centos:centos7.4.1708. You'll notice there is more than just the distribution name: a version is also included, which in Docker terms is called a tag. I'm using the tag to version-lock the base container image.
FROM centos:centos7.4.1708
RUN
Docker builds the image in layers. Without going into too much detail, it's important to have a basic understanding: each instruction in the Dockerfile, such as FROM and RUN, creates a layer of the container image. To reduce the number of layers and the complexity of the image, it's common to issue multiple commands within a single RUN, as seen below. At this point I have a base image, or operating system if you will, and now I need to install everything needed for Ansible:
- Refresh the package metadata with yum check-update
- Install the development packages gcc, libffi-devel, python-devel, openssl-devel, and epel-release
- Install python-pip and python-wheel
- Upgrade pip
RUN yum check-update; \
yum install -y gcc libffi-devel python-devel openssl-devel epel-release; \
yum install -y python-pip python-wheel; \
pip install --upgrade pip;
Because I'm creating a Docker container that will manage Azure resources, I also need the ansible[azure] pip package. As you can see, this is on its own line. When I included it with the previous commands, I received errors indicating that pip was not working correctly; it hadn't been fully installed yet. Moving it to a separate RUN resolved the issue because pip is then available from the lower layer.
RUN pip install ansible[azure];
Dockerfile
FROM centos:centos7.4.1708
RUN yum check-update; \
yum install -y gcc libffi-devel python-devel openssl-devel epel-release; \
yum install -y python-pip python-wheel; \
pip install --upgrade pip;
RUN pip install ansible[azure];
Build the Container Image
The final step in building a Docker image is to run the docker build command. You can consider this the compile step. I have my container codified in a Dockerfile; now I need to run that to create an image that future containers will use when starting up. docker build is the command used to build the image. -t is a parameter that tags the image, essentially giving it a name. The portion after the tag parameter has three sections: repository/imageName:tagVersion. Breaking this down, duffney is the name of my DockerHub repository, ansibleopsbox is the name of the image, and 1.0 is the tag indicating the version. At the very end you see a .; that is the path to the Dockerfile that contains the instructions for building the image, and . means the current directory.
docker build -t duffney/ansibleopsbox:1.0 .
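To confirm the build succeeded, you can list the freshly tagged image; docker images accepts a repository name as a filter:

docker images duffney/ansibleopsbox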
Pushing the Image to a Registry
At this point you have the image on your local machine and can run containers from it, but what about your teammates? For others to use the image you've just built, you'll have to upload it to a registry. It can be a public registry such as DockerHub, or a private registry using something like Azure Container Registry or Artifactory to host the repository for you. Below is an example of how to push the image to DockerHub. The username duffney is used to upload it to my DockerHub account. I have already connected Docker Desktop to DockerHub on my laptop, which takes care of the authentication (otherwise, docker login will prompt for credentials).
docker push duffney/ansibleopsbox:1.0
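Teammates can then pull that exact image and version to get an identical environment:

docker pull duffney/ansibleopsbox:1.0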
Running a Container
It's now time to start running containers! Interacting with a container is a little different than a virtual machine: instead of SSH, WinRM, or RDP, you interact with it through Docker commands. The Docker command to start up a new container is docker run. On its own it won't drop you into a shell inside the container, so add the -it argument after docker run, which indicates the container will be run interactively; your command prompt will change. At the end of the command you must specify which image you want to use for the container, which in this example is duffney/ansibleopsbox:latest. Notice I used the tag latest, not 1.0. If you don't want to change the version every time, you can choose to use that tag.
- docker run - the Docker command to start a container
- -it - switches to interactive terminal mode
- duffney/ansibleopsbox:latest - the Docker image and tag to use for the container
docker run -it duffney/ansibleopsbox:latest
Removing the Container on Exit
Using the docker run command as is will work, but it will lead to a giant mess on your machine. As the command stands now, every time you exit a container it will stay on your system in a stopped state. You then have the option to start it and re-enter the interactive terminal, but why do that when you can just use a new one? To prevent the mess, add the --rm argument to the docker run command; --rm automatically removes the container when it exits.
- --rm - automatically removes the container when it exits
docker run -it --rm duffney/ansibleopsbox:latest
Volumes
Volumes are what make the container such a fantastic development environment. Volumes allow you to mount a local directory to a directory inside the container. With the volume mounted, you can make changes locally from your development machine using your IDE of choice, and those changes are then reflected inside the container! To mount a volume inside a container, you add another argument to the docker run command: -v followed by sourcePath:targetPath. sourcePath is the location on your development machine you want to mount into the container; targetPath is the location inside the container where the volume is mounted.
- -v "$(pwd)":/sln - mounts the current working directory to /sln inside the container
docker run -it --rm -v "$(pwd)":/sln duffney/ansibleopsbox:latest
Working Directory
One small inconvenience introduced by mounting a volume is that you have to change to the /sln directory after you start the container. That's easily solved with another argument to the docker run command: -w, which specifies the working directory of the container when it starts up. This changes the interactive prompt's starting location to the value given to the parameter.
- -w /sln - specifies a working directory of /sln
docker run -it --rm -v "$(pwd)":/sln -w /sln duffney/ansibleopsbox:latest
Environment Variables
Inevitably you are going to have to authenticate to something. In the case of Ansible, you'll likely have to authenticate to an infrastructure platform such as Azure, AWS, or VMware. Ansible uses specific environment variables to connect to these platforms when running playbooks. Storing this information in environment variables is very convenient, and they can be populated by Docker.
Docker offers several ways to populate environment variables. One way is to pass them in at run time with the docker run command. I'll be using Azure as my infrastructure platform, and to connect to it I'll have to specify four environment variables: AZURE_SUBSCRIPTION_ID, AZURE_CLIENT_ID, AZURE_SECRET, and AZURE_TENANT. By using the -e option followed by the environment variable name and its value, I can populate the environment variables for the container.
- -e "ENVIRONMENT_VARIABLE_NAME=<VALUE>" - populates an environment variable inside the container
docker run -it -w /sln -v "$(pwd)":/sln --rm \
-e "AZURE_SUBSCRIPTION_ID=<subscription_id>" \
-e "AZURE_CLIENT_ID=<security-principal-appid>" \
-e "AZURE_SECRET=<security-principal-password>" \
-e "AZURE_TENANT=<security-principal-tenant>" \
duffney/ansibleopsbox:latest
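If you'd rather not type each variable out, docker run also supports an --env-file flag. A minimal sketch, assuming a hypothetical azure.env file with one NAME=value pair per line that you keep out of source control:

docker run -it --rm -v "$(pwd)":/sln -w /sln \
--env-file azure.env \
duffney/ansibleopsbox:latest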
Using environment variables is only one of several ways to connect to Azure from Ansible. For more information, check out Connecting to Azure with Ansible.
Using Ansible inside a Container
At this point it is up to you to determine how to integrate the Ansible container into your development workflow. The two most common uses I've seen are running it in a standalone terminal and running it within an IDE that has an integrated terminal, such as VS Code. Either approach is exactly the same from the perspective of using the container: you interact with Ansible at the command line from inside it.
Personally, most of my time is spent in the integrated terminal in VS Code, because I can quickly edit all the files inside the mounted volume with all the comfort and gadgets VS Code offers. However, there are times when I start up a container at the command line to execute or debug playbooks.
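Either way, the session looks roughly the same. A sketch, with site.yml standing in for whatever playbook you're working on:

# from a standalone or integrated terminal, start the OpsBox
docker run -it --rm -v "$(pwd)":/sln -w /sln duffney/ansibleopsbox:latest
# then, inside the container, run or debug the playbook
ansible-playbook site.yml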
Common Environments
- Standalone Terminal
- Integrated Terminal within an IDE (VS Code)
Additional Reading & Sources
Quickstart: Install Ansible on Linux virtual machines in Azure
Best practices for writing Dockerfiles
I turned this blog post into the first chapter of an ebook on Ansible! You can get the first chapter free at becomeansible.com.