Tiny Changes, Reliable Impact

I install a lot of software. Of course, I prefer to automate it. I frequently use Chocolatey, Puppet, winget, Docker, Octopus Deploy, Kubernetes, DSC, and random scripts off the Internet to put software on anywhere from one to a few hundred systems at once. This is not an uncommon task for someone in technical operations. What's annoying is that each tool works differently, with its own way of controlling flow, abstracting repetitive code, and escaping quotes.

I've been doing the install-software dance again recently, and for some reason the existence of the Requirements PowerShell module popped into my head. Olivier Miossec wrote a good introduction to the module last year, so I'm not going to get too deep into the How and will focus more on the Why.
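If you've never seen the module, the core shape fits in a few lines: a requirement is just a Describe/Test/Set bundle, and Invoke-Requirement walks the list, running each Set only when its Test comes back false. A minimal sketch (the directory path is just a stand-in):

```powershell
# Install-Module Requirements    # one-time, from the PowerShell Gallery
Import-Module Requirements

$requirements = @(
    @{
        Describe = "Scratch directory exists"
        Test     = { Test-Path C:\scratch }                                # already in the desired state?
        Set      = { New-Item C:\scratch -ItemType Directory | Out-Null }  # converge if not
    }
)

$requirements | Invoke-Requirement | Format-Checklist
```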

In this most recent case, I'm working on a Dockerfile for an Octopus Deploy Tentacle. Ironically, I'm installing and configuring software to support a product that installs and configures even more software. Anyway, one of the use cases mentioned on the repo is Dockerfiles, so I was curious what that would look like and how it would work. It turns out, it worked pretty well. In the native Dockerfile, I had a mix of RUN commands in bash and PowerShell, with different variable-access and string-interpolation syntaxes. Moving all of that into RUN ./requirements.ps1 was much simpler, but there isn't a huge benefit to using the Requirements module just to build a standalone script. In fact, a single script has a real drawback: it creates a single Docker layer, so updating even a small PowerShell module in the image requires downloading the entire 900MB+ layer anywhere the image is run. The big potential benefit is that I can take parts of that Dockerfile install script and turn them into PowerShell modules shared on our private PSGet repo for future use.
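To make that layer tradeoff concrete, the restructuring looked something like this sketch; the file names and commented-out commands are illustrative:

```dockerfile
# Before: a mix of shells, each with its own quoting and variable syntax
# RUN powershell -Command "Install-Module SomeModule -Force"
# RUN bash -c 'curl -sSL https://example.com/install.sh | sh'

# After: one language, one entry point...
COPY requirements.ps1 .
RUN pwsh -File ./requirements.ps1
# ...but also one giant layer. Splitting the install across several smaller
# requirements scripts, one per RUN, would let the layers cache independently.
```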

As I mess around with that process and refine it, I may share more details, but for now, I want to provide a more digestible and updated introduction to Requirements. An obvious side effect of building and testing a lot of Docker images is pulling down upstream artifacts from various local-network (and by local I mean VPN) and Internet repositories. So I had the idea of standing up an instance of ProGet to cache artifacts on my local machine and hopefully reduce some build times. ProGet is really easy to set up and has a wonderfully permissive free license.

Here are the docs on setting up a new server running in Docker. I wanted to flex my new understanding of Requirements by turning these docs into an idempotent install script. Running one or more Docker containers is another frequent task, and while it can be as simple as docker run, there is a lot going on in that one command. If the container stops and you want to start it back up again, the same run command won't work because the container already exists. So this is a really good example of using a desired state framework to figure out what state the system is currently in and make only the changes necessary to converge to the desired state. That raises the question: why not use PowerShell Desired State Configuration? The shortest answer is that Requirements imposes the bare minimum of mandatory structure needed to design idempotent scripts. It's sort of like the difference between an Advanced Function and a basic Function in PowerShell. Sometimes you want all of the features, and sometimes you don't.

Here is the script I ended up with. I think it's fairly easy to follow even if you've never used the Requirements module. Scroll down and scan through it, and I'll break it down below.
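Condensed, it looks something like this; the container names, image tags, and SQL settings are illustrative stand-ins for the values in the ProGet docs, and two sections are collapsed here because I break them down later:

```powershell
Import-Module Requirements

$requirements = @(
    # ...three secrets-management requirements (broken down below)...

    @{
        Describe = "proget Docker network exists"
        Test     = { docker network ls -q --filter "name=^proget$" }
        Set      = { docker network create proget }
    }

    # ...pull/create/start requirements for the SQL Server container (broken down below)...

    @{
        Describe = "ProGet database exists"
        # Get-Secret runs at invocation time, after the secrets requirements above have converged
        Test     = { (docker exec proget-sql /opt/mssql-tools/bin/sqlcmd `
                        -S localhost -U SA -P (Get-Secret ProGetSaPassword -AsPlainText) `
                        -h -1 -Q "SET NOCOUNT ON; SELECT name FROM sys.databases WHERE name = 'ProGet'") -match 'ProGet' }
        Set      = { docker exec proget-sql /opt/mssql-tools/bin/sqlcmd `
                        -S localhost -U SA -P (Get-Secret ProGetSaPassword -AsPlainText) `
                        -Q "CREATE DATABASE ProGet" }
    }

    # ...pull/create/start requirements for the ProGet container itself...
)

$requirements | Invoke-Requirement | Format-Checklist
```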

The ProGet documentation basically has four steps: set up a network, start up a SQL container, create a SQL database, and finally, start up a ProGet container. Since it requires setting a password for SQL and passing that to the ProGet server, I wanted to follow good practice and not hardcode that secret in the script. Adding secrets management makes it five steps. Microsoft has been working on a new SecretManagement module that I've already been testing out, so it was easy to wire that up as the secrets store in this case. If you look back at the script, you can see secrets management turned into three separate requirements. Each requirement is an assumption I had about the state of my system. A simple script would just proceed with those assumptions and maybe document them in some comments, but with Requirements, those assumptions are validated with a simple Test and corrected with a Set script. This is extremely important in avoiding "It doesn't work for me" complaints. It's also really useful for future you, when you've forgotten how this script ever worked. Not only are the requirements explicitly defined, so is how to get to that desired state.
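Those three secrets requirements break down roughly like this; the vault and secret names are mine, and the module names are from the current previews, so adjust for whatever release you're on:

```powershell
$secretsRequirements = @(
    @{
        Describe = "SecretManagement module is installed"
        Test     = { Get-Module Microsoft.PowerShell.SecretManagement -ListAvailable }
        Set      = { Install-Module Microsoft.PowerShell.SecretManagement -Force }
    }
    @{
        Describe = "A default secret vault is registered"
        Test     = { Get-SecretVault }
        Set      = { Install-Module Microsoft.PowerShell.SecretStore -Force
                     Register-SecretVault -Name LocalStore `
                         -ModuleName Microsoft.PowerShell.SecretStore -DefaultVault }
    }
    @{
        Describe = "SQL sa password exists in the vault"
        Test     = { [bool](Get-SecretInfo -Name ProGetSaPassword) }
        Set      = {
            # Generate a strong password once; re-runs reuse the stored value
            $bytes = [byte[]]::new(24)
            [System.Security.Cryptography.RNGCryptoServiceProvider]::new().GetBytes($bytes)
            Set-Secret -Name ProGetSaPassword -Secret ([Convert]::ToBase64String($bytes))
        }
    }
)
```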

Now to the meat of the script: launching Docker containers. The primary tenet of Requirements is that each requirement should affect only one atomic component. The docker run command is the de facto standard way of starting up a new container. However, when you look at the documentation for the run command, it specifically calls out that run is actually equivalent to container create followed by container start. For context, container create builds the runtime layer over the container image (the local state for that particular container instance). What's not explicitly called out in the documentation for run is that if the image doesn't exist on your local filesystem, Docker will pull it down from the network. So a docker run command is actually three atomic actions: pull, create, and start. The other nice advantage of Requirements over DSC is that requirements run synchronously, meaning step 2 always waits for the completion of step 1 without manually defining dependencies. In a simple configuration, parallelism is more trouble than it's worth.
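Applied to the SQL Server container, that one docker run unrolls into three requirements, each with its own cheap Test (again, the names and tag are illustrative):

```powershell
$image = "mcr.microsoft.com/mssql/server:2019-latest"

$sqlContainerRequirements = @(
    @{
        Describe = "SQL Server image is present locally"
        Test     = { docker image ls -q $image }    # empty output means not pulled yet
        Set      = { docker pull $image }
    }
    @{
        Describe = "proget-sql container is created"
        Test     = { docker container ls -aq --filter "name=^proget-sql$" }
        Set      = { docker container create --name proget-sql --net proget `
                       -e ACCEPT_EULA=Y `
                       -e "SA_PASSWORD=$(Get-Secret ProGetSaPassword -AsPlainText)" $image }
    }
    @{
        Describe = "proget-sql container is running"
        Test     = { docker container ls -q --filter "name=^proget-sql$" }
        Set      = { docker container start proget-sql }
    }
)
```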

Here's the output of the script.

[Image: Install ProGet script output]

The Requirements framework itself is quite simple to work with. The challenging part is breaking the desired state down into its atomic components. A lot of the tools we use try to be helpful and change a lot of system state with a single command. That's great for some informal command-line work, but when you want reliability, building a declarative, idempotent script is going to save you time and effort in the long run. One gotcha to watch for: when you generate requirements with abstractions or control-flow statements, you'll need to call .GetNewClosure() to ensure variable values are captured within the defined scriptblocks.
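For example, if you build requirements in a loop, the scriptblocks run later than they're written, so without a closure they won't see the loop variable's value at definition time. A quick sketch (the module names are just examples):

```powershell
$modules = "posh-git", "PSScriptAnalyzer"

$requirements = foreach ($module in $modules) {
    @{
        Describe = "$module is installed"
        # Without .GetNewClosure(), $module wouldn't be captured when the scriptblock is defined
        Test     = { Get-Module $module -ListAvailable }.GetNewClosure()
        Set      = { Install-Module $module -Force }.GetNewClosure()
    }
}

$requirements | Invoke-Requirement | Format-Checklist
```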
