
Jon Lauridsen

Perfect Elixir: Development Workflows

Today we'll settle on and implement daily development workflows. First, we'll identify what makes a good workflow and which principles to rely on, based on cutting-edge research in software development practices. Then, our goal will be to establish really simple mechanisms for managing code changes so we can work quickly and accurately together. These workflows must also be scalable enough to cope with increased complexity as our product grows. Let's dive in!


A Reflection on Workflows

Let's start by asking: What exactly is a "workflow"? Many teams only vaguely specify their ways of working, with guidelines like "go clone the repo" and "get your pull-requests reviewed". That is a workflow of sorts, but I think we can do better by going back to first principles.

To improve our understanding, we should first ask whether some ways of working are demonstrably better than others. The answer is a resounding yes: The DevOps Research and Assessment (DORA) project has been researching patterns in software delivery for nearly a decade, and it is by far the most rigorous scientific analysis of software development available.

This research identifies teams that deliver more value to their organization than others, and then identifies statistically significant patterns in how those teams work. And it's rigorous enough to establish causal relationships: improvements to DORA metrics and capabilities are likely to cause a team to improve its organizational performance, backed by scientific evidence.

As an example, here are two DORA metrics that form part of a model predicting a team's performance:

  1. Minimal time from code committed to code running in production, ideally no more than an hour.
  2. Frequent deploys, ideally each commit resulting in a deployment.

Just from these two metrics we can see it'll be advantageous to create workflows that enable our team to continuously pull and push code changes with minimal delay. But what might that look like?

ℹ️ BTW for this article we won't dive into more details about the DORA research, but if you're curious to learn more I've written an Introduction to "Accelerate", the scientific analysis of software delivery, and described their Software Delivery Performance Model, which is what the two metrics above are part of. And I highly recommend reading their book Accelerate: The Science of Lean Software and DevOps, which explains all their fundamental research and why it really matters to us.


No Branches 🚫

The DORA metrics lead us to a fundamental realization: Branches inherently delay the continuous pulling and pushing of code. Here's why:

  1. Branches add time between code being committed and that code running in production. The minimal time can only be achieved by pushing directly to main.
  2. Branches often collect multiple commits, negatively impacting deployment frequency. The most frequent deploys are achieved by pushing straight to main.

To some developers these statements sound shocking and unsafe. However, what we're describing is a well-established practice known as Trunk Based Development. If you're someone who feels you must have branches, I encourage you to keep reading. I promise it's entirely possible to work effectively without them.

ℹ️ BTW for more on trunk-based development, I've written a Beginners Intro to Trunk Based Development, and the DORA research elegantly explains it in detail.

But this approach raises important questions: If we don't use branches, we can't use pull requests. How do we then guard against bad code? Where do we run tests and all the other automations (linting, security scanning, etc.)?

The answer is simple: make as much of the workflow as possible run locally, so changes are tested and linted before being pushed directly to main. We need workflows that run locally, still provide robust safety checks, and can be adapted and iterated on as the team and product grow.
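
Concretely, the rest of this article builds toward a small set of scripts living in bin/, along these lines:

bin/
├── .shhelpers   # shared Bash helpers (check, step, cecho)
├── db           # manage the local PostgreSQL server
├── doctor       # verify the health of the development environment
├── update       # pull the latest code and re-sync the environment
└── shipit       # verify, test, and push to main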


In Defense of Shell Scripting

Now that we've established the need for quick code pulling and pushing, let's consider how to implement these workflows. With pkgx providing our system tools, we have the flexibility to use any language. So, what's the best choice for writing our workflows?

I propose we start with Shell Scripting. Here's why:

  1. Industry Standard: Shell scripts are the go-to solution for various scripting needs across the industry.
  2. Widely Used: They're ubiquitous, making them the least surprising choice for developers.
  3. Practical and Low-Maintenance: While shell syntax isn't always elegant, shell scripts are incredibly practical and often require minimal upkeep.
  4. Consistent Environment: With pkgx, we can ensure every developer uses the same scripting environment by simply specifying bash as a dependency.

Remember, our goal isn't to create complex, beautiful workflow code; we don't earn money directly from any of these scripts, and we want to keep our focus on the product they help us build. Therefore, it makes sense to choose the simplest, most "boring" option available: shell scripting.

This approach aligns with goals any product team should share: minimize unnecessary complexity, and focus on what truly matters, which is rapidly and safely delivering value to our users.


Doctor

Our first workflow will be a script that keeps our development environments up-to-date across the team, ensuring vital preconditions are met (e.g., the local database is running, Mix dependencies are installed).

ℹ️ BTW I've gotten used to calling this script doctor because it verifies the health of our environment. You can of course choose whatever name you feel is most fitting.

Back in the article Environment Setup we picked pkgx for controlling the development environment, so as a first step let's check that pkgx is working properly:

$ cat bin/doctor
#!/usr/bin/env bash
set -euo pipefail

source "$(dirname "${BASH_SOURCE[0]}")/.shhelpers"

check "pkgx installed?" \
  "which pkgx" \
  "brew install pkgxdev/made/pkgx"

check "Developer environment active?" \
  "which erl && which elixir" \
  "dev"

ℹ️ BTW this sources .shhelpers, which provides useful functions like the check function. For brevity I won't cover its implementation here, but you can find the full .shhelpers script here if you're curious. It's "just" Bash shell code, nothing too exciting.
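
To give a feel for it, here's a minimal sketch of what a check-style helper could look like. This is a hypothetical reimplementation for illustration only, not the actual .shhelpers code:

# Hypothetical sketch of a check helper: runs a command silently,
# prints a checkmark on success, or the suggested remedy on failure.
check() {
  local description="$1" command="$2" remedy="$3"
  printf "• %s " "$description"
  if eval "$command" &>/dev/null; then
    echo "✓"
  else
    echo "x"
    echo "> Executed: $command"
    echo ""
    echo "Suggested remedy: $remedy"
    # Copy the remedy to the clipboard (pbcopy is macOS-specific)
    command -v pbcopy &>/dev/null && printf "%s" "$remedy" | pbcopy && echo "(Copied to clipboard)"
    exit 1
  fi
}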

This is a promising direction, and the output from running it looks nice:

$ bin/doctor
• pkgx installed? ✓
• Developer environment active? ✓

🖥️ Terminal: Running bin/doctor, showing initial checks passing with green checkmarks

Back in the article Foundations of a Web App we chose Phoenix as our web framework, and so we should add checks until we can start that app. For starters we'll need a local database running:

$ git-nice-diff -U1 .
/bin/doctor
@@ -12 +12,5 @@ check "Developer environment active?" \
   "dev"
+
+check "PostgreSQL server running?" \
+  "pgrep -f bin/postgres" \
+  "bin/db start"

ℹ️ BTW this now calls on a bin/db script, which is a small script for managing the database. This allows doctor to remain simple. If you're curious, the db script can be found here.
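
To give a sense of its shape, a minimal bin/db could look something like this. It's a hypothetical sketch built on PostgreSQL's standard initdb and pg_ctl tools, not the linked script:

#!/usr/bin/env bash
set -euo pipefail

# Hypothetical sketch: store the database files inside the project's priv/ directory
DB_DIR="$(dirname "$0")/../priv/db"

case "${1:-}" in
  start)
    # Initialize the data directory on first run, then start the server
    [ -d "$DB_DIR" ] || initdb --pgdata "$DB_DIR" --username postgres
    pg_ctl --pgdata "$DB_DIR" --log "$DB_DIR/server.log" start
    ;;
  stop)
    pg_ctl --pgdata "$DB_DIR" stop
    ;;
  *)
    echo "Usage: bin/db {start|stop}" >&2
    exit 1
    ;;
esac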

Now when we run doctor it fails due to the missing local database:

$ bin/doctor
• pkgx installed? ✓
• Developer environment active? ✓
• PostgreSQL server running? x
> Executed: pgrep -f bin/postgres

Suggested remedy: bin/db start
(Copied to clipboard)

Running the suggested remedy fixes the problem:

$ bin/db start
• Creating /Users/cloud/perfect-elixir/priv/db ✓
• Initializing database ✓
• Database started:
waiting for server to start.... done
server started
↳ Database started ✓

$ bin/doctor
• pkgx installed? ✓
• Developer environment active? ✓
• PostgreSQL server running? ✓

🖥️ Terminal: Running bin/doctor, which now fails because the server is not running; the suggested remedy is run, then doctor is rerun and passes with all green checkmarks

By now we can clearly see the doctor pattern: we check various conditions and suggest how the developer can fix any failures. It's easily understandable and extendable, so it aligns with our goals for this article.

Let's skip to a complete version that has necessary checks to start our app:

$ bin/doctor

Running doctor checks…
• pkgx installed? ✓
• Developer environment active? ✓
• PostgreSQL server running? ✓
• PostgreSQL server has required user? ✓
• Hex package manager installed? ✓
• Mix dependencies installed & compiled? ✓
• PostgreSQL database exists? ✓

✓ All checks passed, system is healthy

🖥️ Terminal: Running bin/doctor showing all green checkmarks, reporting the system is healthy and ready
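
For illustration, a couple of those later checks could be expressed with the same check pattern. These are hypothetical commands and names (e.g. the my_app_dev database), and the linked script remains the source of truth:

# Hypothetical examples of additional checks, following the same pattern:
check "Hex package manager installed?" \
  "mix hex.info" \
  "mix local.hex --force"

check "PostgreSQL database exists?" \
  "psql --username postgres --list | grep -q my_app_dev" \
  "mix ecto.create"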

And now that all checks are passing we can start our app:

$ iex -S mix phx.server
[info] Running MyAppWeb.Endpoint with Bandit 1.4.2 at 127.0.0.1:4000 (http)
[info] Access MyAppWeb.Endpoint at http://localhost:4000
Erlang/OTP 26 [erts-14.2.4] [source] [64-bit] [smp:12:12] [ds:12:12:10] [async-threads:1] [dtrace]

Interactive Elixir (1.16.2) - press Ctrl+C to exit (type h() ENTER for help)
[watch] build finished, watching for changes...

Rebuilding...

Done in 260ms.
iex(1)>

That's it: bin/doctor now safeguards our system, ensuring all critical preconditions are met. It's easy to maintain and simple to adapt as needs change.

But… how do we expect developers to remember to run bin/doctor? Let's address that next.

ℹ️ BTW the full doctor script can be found here


Update

Now let's create a script to get the latest code. It's meant to be run instead of git pull, and it will also run whatever commands are needed after pulling to apply the new code correctly.

First, let's check we're on main and run git pull:

$ cat bin/update
#!/usr/bin/env bash
set -euo pipefail
source "$(dirname "$0")/.shhelpers"
check "Branch is main?" \
  "[ \"$(git rev-parse --abbrev-ref HEAD)\" = \"main\" ]" \
  "git checkout 'main'"
step "Pulling latest code" "git pull origin 'main' --rebase"
$ bin/update
• Branch is main? ✓
• Pulling latest code ✓

🖥️ Terminal: Running bin/update; it checks the branch is main and then pulls the latest code, both with green checkmarks

After pulling new code, we need to consider what additional steps might be necessary. Our project is simple, so the only required action is installing dependencies whenever another developer changes them in mix.exs.

So, let's extend our script so it automatically ensures dependencies are applied:

$ git-nice-diff -U1 .
/bin/update
@@ -8 +8,4 @@ check "Branch is main?" \
 step "Pulling latest code" "git pull origin 'main' --rebase"
+step "Getting dependencies" "mix deps.get"
+step "Compiling dependencies" "mix deps.compile"
+"$(dirname "$0")/doctor"

We also call doctor at the end to be extra sure the system is left in a good state.

So now we can run bin/update and be confident changes get applied correctly and our environment remains in working condition:

$ bin/update
• Branch is main? ✓
• Pulling latest code ✓
• Getting dependencies ✓
• Compiling dependencies ✓

Running doctor checks…
• pkgx installed? ✓
• Developer environment active? ✓
• PostgreSQL server running? ✓
• PostgreSQL server has required user? ✓
• Hex package manager installed? ✓
• Mix dependencies installed & compiled? ✓
• PostgreSQL database exists? ✓

✓ All checks passed, system is healthy

🖥️ Terminal: Running bin/update, resulting in the latest changes being pulled down, dependencies installed and compiled, and the system checked

We now see how our scripts start interlocking, forming a simple high-level workflow for developers: run bin/update to get the latest code and trust it to keep our systems in a good state. Some developers might initially find it challenging to use update instead of pulling directly, but the habit typically becomes natural after a few days.

ℹ️ BTW usually bin/update would also apply migrations, but we don't have those yet so I've skipped it for now and we'll add it when applicable. The update script can be found here.
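
When migrations do arrive, the addition could be as small as one more step at the end of bin/update. A sketch, assuming the project uses Ecto's standard migration task:

# Hypothetical future addition to bin/update:
step "Running migrations" "mix ecto.migrate"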


Shipit

Our final workflow script, shipit, is the cornerstone of our Continuous Integration and Delivery (CI/CD) process. It replaces git push: it first ensures the code is in a shippable state by running tests and quality checks, and only once the code is verified to work does it push.

Let's look at the script:

$ cat bin/shipit
#!/usr/bin/env bash
set -euo pipefail
source "$(dirname "$0")/.shhelpers"
"$(dirname "$0")/update"
step --with-output "Running tests" "mix test"
check "Files formatted?" "mix format --check-formatted" "mix format"
step "Pushing changes to main" "git push origin \"main\""
cecho "\n" -bB --green "✓ Shipped! 🚢💨"

Notice how shipit first calls update, which ensures we're testing against the latest code. This continuous integration is crucial: otherwise we'd only be testing our local changes, without knowing whether they're actually compatible with what's on main.

ℹ️ BTW the mix test step here runs with --with-output, which shows the output of that step as it runs, because it's helpful to see test progress.
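
As with check, here's a hypothetical sketch of how a step helper might implement that flag; the real implementation lives in .shhelpers:

# Hypothetical sketch of a step helper with an optional --with-output flag:
step() {
  local show_output=false
  if [ "${1:-}" = "--with-output" ]; then
    show_output=true
    shift
  fi
  local description="$1" command="$2"
  if "$show_output"; then
    # Stream the command's output as it runs (useful for test progress)
    echo "• $description:"
    eval "$command"
    echo "↳ $description ✓"
  else
    # Run silently and just report success
    printf "• %s " "$description"
    eval "$command" &>/dev/null
    echo "✓"
  fi
}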

When we run shipit, here's what we see:

$ bin/shipit
• Branch is main? ✓
• Pulling latest code ✓
• Getting dependencies ✓
• Compiling dependencies ✓

Running doctor checks…
• pkgx installed? ✓
• Developer environment active? ✓
• PostgreSQL server running? ✓
• PostgreSQL server has required user? ✓
• Hex package manager installed? ✓
• Mix dependencies installed & compiled? ✓
• PostgreSQL database exists? ✓

✓ All checks passed, system is healthy
• Running tests:
.....
Finished in 0.07 seconds (0.03s async, 0.04s sync)
5 tests, 0 failures

Randomized with seed 579539
↳ Running tests ✓
• Files formatted? ✓
• Pushing changes to main ✓

✓ Shipped! 🚢💨

🖥️ Terminal: The shipit script running, showing all checks passing and ending with the code being pushed

This demonstrates a really simple but powerful daily workflow: run bin/update when starting the day, and bin/shipit whenever a commit is ready. It's a straightforward approach that embraces CI/CD principles, allowing code to be pushed to production with minimal delay.

ℹ️ BTW just as with the other scripts, shipit is intentionally basic. That's not a limitation though, it's a feature: I think it's explicitly beneficial to adopt these scripts while they're still simple. The clarity of the initial versions helps build trust in using them, and it is their simplicity that encourages team-wide iteration and collective involvement.

The full shipit script can be found here.

As your project evolves, shipit can grow to include more sophisticated tests, linting, and other quality gates. For now though, our focus is on building the habit of shipping frequently to continuously engage customers.
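
For example, a team adopting static analysis and type checking might eventually extend it with steps like these (hypothetical additions, assuming the credo and dialyxir packages have been added to mix.exs):

# Hypothetical future additions to bin/shipit:
step --with-output "Running static analysis" "mix credo --strict"
step --with-output "Running type checks" "mix dialyzer"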


Continuous Code Reviewing

We've established simple yet powerful local workflows that enable us to continuously integrate changes with bin/update and continuously push changes with bin/shipit, effectively replacing git pull and git push respectively.

The specific code we've created today matters less than the principles we've identified and pursued, and the extent to which they support and nurture a team culture that optimizes for scientifically validated ways of working. Remember, our goal isn't to craft perfect scripts, but to start simple, ship often, and let our processes evolve alongside our project.

However, by eliminating branches, we also removed pull requests. This raises an important question: What about code reviews? Our scripts handle local automation, but how do we incorporate the valuable second set of eyes that pull requests typically provide?

The answer is straightforward, though it may challenge some developers' comfort zones:

Code reviewing must also be done continuously.

This conclusion is inevitable when we consider the research on effective development practices: asynchronous reviews inherently add latency, as code sits idle waiting for a colleague's attention. This delay is unacceptable, especially considering that in practice many reviews add hours or even days before someone finds the time. Instead, we must aim for continuous code reviewing.

This shift requires both social and cultural changes:

  1. When a commit is ready, it should be immediately reviewed.
  2. Avoid delays or starting new work until the current work is in production.
  3. Remember: Your code only adds value when it's in production!

To implement this, either:

  • Call a colleague over to review the change together, or
  • Even simpler, develop the code collaboratively from the start.

The key is to ensure code changes flow to production with minimal obstacles and friction.

Then, the final step is to practice committing frequently until the team regularly ships dozens of small commits per hour. This is true continuous integration and continuous delivery 🤩.

ℹ️ BTW, there's extensive literature on pair programming and whole-team programming (sometimes called mobbing), which facilitates continuous code reviewing. It's worth noting that while negative pairing experiences can be exhausting and have turned some developers away from the practice, positive pairing can be highly enjoyable and productive 😊.



Conclusion

We started by identifying the principles that shape optimal workflows, and we've come away with a set of really simple scripts that let us quickly and safely pull and push code. Pretty nice!

Our workflows cut away latency-adding techniques such as branches and pull requests, and instead focus on letting developers rapidly push changes and stay in sync with each other. The principles we follow are aligned with the latest scientific findings on software delivery, and their specific implementations hopefully support a culture of efficiency, quality, and continuous improvement, because the scripts are kept purposefully simple and thus invite extension and modification.

I think these workflows are close to universally applicable, because the underlying scientific findings hold across essentially the entire software industry (large organizations, small ones, private companies, governments, and so on). The only exception is open-source work, which the research finds does benefit from the slower pull-request workflows (because open source is a "low-trust, high-latency" environment), but this article series is specifically not geared towards that: we're pursuing solutions for small teams starting or scaling up their products, and the local workflows outlined in this article should serve that context well.
