This is part of a three-part series on rollouts, in which I envision what an ideal rollout process would look like if sufficient tool support existed.
- Multi-Version Rollouts
- When to Submit a Change (this article)
- Delinearized Rollouts (to be written)
Most places I have seen have the following workflow:
- Submit patch for review.
- Ensure that the tests pass.
- Get approval from a reviewer.
- Merge patch with the development branch.
- Code is deployed to production.
This seems pretty reasonable. In theory your main branch is always “good” because the tests need to pass before merge. This means that developers can always base a new patch off of the development branch and know they are building on a good foundation.
However, there are some problems:
Semantic Merge Conflicts
The first problem is that there may be conflicts when you merge. Imagine that one patch renames `foo()` to `bar()` and another patch uses `foo()`. Both pass tests independently but fail to compile when combined.
This issue can be solved via a simple merge queue, as I have explained previously.
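As a rough sketch of that idea (the `merge` and `run_tests` hooks are placeholders for whatever your build system provides), each queued patch is tested against the result of merging it onto the current tip, so the `foo()`/`bar()` combination above gets rejected before it ever lands:

```python
class MergeConflict(Exception):
    """Raised by the merge hook when two patches conflict textually."""


class MergeQueue:
    """Minimal merge-queue sketch: a patch only lands if it passes tests
    when merged on top of everything accepted before it."""

    def __init__(self, tip):
        self.tip = tip      # current known-good tip of the dev branch
        self.queue = []     # patches waiting to land, in order

    def enqueue(self, patch):
        self.queue.append(patch)

    def process(self, merge, run_tests):
        # merge(tip, patch) returns the merged tree or raises MergeConflict;
        # run_tests(tree) returns True if the tree builds and its tests pass.
        while self.queue:
            patch = self.queue.pop(0)
            try:
                candidate = merge(self.tip, patch)
            except MergeConflict:
                print(f"rejecting {patch}: textual conflict")
                continue
            if run_tests(candidate):   # catches semantic conflicts too
                self.tip = candidate   # later patches are tested on top of this
            else:
                print(f"rejecting {patch}: merged result failed tests")
```

A real queue would batch and parallelize these test runs, but the key property is the same: tests run on the merged result, not on each patch in isolation.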
Tests Don’t Catch Everything
As discussed in Part 1, a change has only received light testing by the time code review completes. To be properly tested it needs integration testing (the kind that probably takes hours for a complex program) and actual production use.
Delayed Submission
We can improve the situation by changing when we think of code as “submitted”. We take the idea of a merge queue and push it further. Your code isn’t submitted when you press “merge”. Your code isn’t even submitted when the unit tests pass on the merged result. Your code isn’t even submitted when integration tests pass. Your code is only “submitted” when it has been running in production for a day.
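One way to picture this is as a series of stages that a change passes through, with “submitted” only at the very end. Here is a minimal sketch; the stage names and the 24 hour bake time are illustrative assumptions, not anything standardized:

```python
from dataclasses import dataclass
from enum import Enum, auto


class Stage(Enum):
    REVIEWED = auto()            # approved, "merge" pressed
    UNIT_TESTED = auto()         # unit tests passed on the merged result
    INTEGRATION_TESTED = auto()  # long-running integration tests passed
    BAKING = auto()              # running on (a slice of) production
    SUBMITTED = auto()           # in production for a day without problems


BAKE_TIME_HOURS = 24  # "a day", per the text above


@dataclass
class Change:
    stage: Stage = Stage.REVIEWED
    unit_tests_green: bool = False
    integration_tests_green: bool = False
    deployed: bool = False
    hours_in_prod: float = 0.0
    alerts_fired: bool = False


def advance(change: Change) -> Stage:
    """Move a change one stage forward if the criteria for the next stage hold."""
    if change.stage is Stage.REVIEWED and change.unit_tests_green:
        change.stage = Stage.UNIT_TESTED
    elif change.stage is Stage.UNIT_TESTED and change.integration_tests_green:
        change.stage = Stage.INTEGRATION_TESTED
    elif change.stage is Stage.INTEGRATION_TESTED and change.deployed:
        change.stage = Stage.BAKING
    elif (change.stage is Stage.BAKING
          and change.hours_in_prod >= BAKE_TIME_HOURS
          and not change.alerts_fired):
        change.stage = Stage.SUBMITTED
    return change.stage
```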
That may be extreme, but let’s look at the possibilities.
Pros
“Free” Reverts
Rollbacks of submitted changes are difficult because later changes may make the revert non-clean. When a change isn’t “submitted” yet, you can simply drop the patch from the queue without worrying about rewriting commits that people depend on.
In fact, the whole way of thinking about changes can shift. Instead of a fixed point in the workflow where changes are “locked in” by adding them to the main development branch, that decision can be made based on the probability that each commit is “good”. At some point that probability passes a threshold and the patch is “submitted”, which means that new work is based on it by default but it becomes harder to revert if needed.
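As a sketch of what “submit when the probability of being good crosses a threshold” could look like (the evidence sources, weights, threshold and the `mark_submitted` hook are made up for illustration, not a real model):

```python
# Hypothetical confidence scoring for a change; the weights are illustrative.
EVIDENCE_WEIGHTS = {
    "unit_tests_passed": 0.2,
    "integration_tests_passed": 0.3,
    "one_day_in_prod_without_alerts": 0.5,
}
SUBMIT_THRESHOLD = 0.9


def confidence(evidence: set[str]) -> float:
    """Crude score of how likely the change is to be 'good'."""
    return sum(w for name, w in EVIDENCE_WEIGHTS.items() if name in evidence)


def maybe_submit(change_id: str, evidence: set[str], queue) -> bool:
    """Lock the change in once we are confident enough; until then it can
    simply be dropped from the queue (a 'free' revert)."""
    if confidence(evidence) >= SUBMIT_THRESHOLD:
        queue.mark_submitted(change_id)  # new work is based on it by default
        return True
    return False
```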
Reduces Pipeline Stalls
If you base your new feature off of a bad version, this may slow you down for a number of reasons:
- Development might be more difficult due to the breakage.
- Existing failing tests may prevent progression of the patch.
- The patch can’t be deployed to production with the broken base (without reverts on top).
By basing your change on code that is very likely to be good, you avoid these problems. In general, the older the version you use as a base, the less likely it is to have unknown defects. So by delaying the publishing of new patches until they are well tested, the probability of these pipeline stalls decreases.
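A sketch of picking a base under that trade-off: take the newest version that has already cleared some confidence bar, here a day in production without alerts (the data shape and threshold are assumptions):

```python
from dataclasses import dataclass


@dataclass
class Version:
    commit: str
    hours_in_prod: float
    alerts_fired: bool


def pick_base(versions: list[Version], min_bake_hours: float = 24) -> str:
    """Return the newest commit that has baked long enough without alerts.

    `versions` is assumed to be non-empty and ordered oldest -> newest.
    Older bases are less likely to hide defects, but newer ones reduce
    rebase pain, so we take the newest one that clears the bar.
    """
    for v in reversed(versions):
        if v.hours_in_prod >= min_bake_hours and not v.alerts_fired:
            return v.commit
    # Fall back to the oldest known version if nothing has baked yet.
    return versions[0].commit
```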
Cons
Dependencies
One complication is that it is harder to depend on previously written code. With “classic” merging, code is available in the development repo as soon as the button is pressed. But now it is possible to have a merge conflict with in-flight code. Your change can’t be enqueued because it conflicts with other code that is in the queue (or it can be enqueued in the hope that the other patch fails), and you can’t do the regular rebase onto the latest development branch because the conflicting code isn’t there yet.
Luckily this is fairly natural to resolve with Git. If you want to depend on an in-flight commit, just base your branch on it. Then if your commit is merged, that dependency will be merged too. (Of course, if the dependency has problems your commit will also get rejected, but the same thing happens with classic reverts anyway, just more manually.) Ideally the tool managing merges would make this simple: it can tell you which in-flight commits your change conflicts with so that you can base your branch on top of them if desired.
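A tool could find those conflicts with ordinary Git: attempt a throwaway merge of your branch with each in-flight commit and report the ones that don’t merge cleanly. A rough sketch (how the list of in-flight commits is obtained is left out, and this assumes it runs in a scratch checkout):

```python
import subprocess


def merges_cleanly(commit: str) -> bool:
    """Try merging `commit` into the current branch without committing,
    then abort, leaving the working tree as it was."""
    result = subprocess.run(
        ["git", "merge", "--no-commit", "--no-ff", commit],
        capture_output=True,
    )
    # Abort regardless of outcome so the test merge never sticks around.
    subprocess.run(["git", "merge", "--abort"], capture_output=True)
    return result.returncode == 0


def conflicting_in_flight_commits(in_flight: list[str]) -> list[str]:
    """Return the queued commits your change conflicts with, so you can
    choose to base your branch on top of them."""
    return [c for c in in_flight if not merges_cleanly(c)]
```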
Semantic conflicts can still occur (someone adds a test or lint check that your patch fails), but these tend to be infrequent, so they shouldn’t be a problem for reasonable in-flight times (hours or days, not weeks).
Tooling Required
Setting this up requires a fairly complex version manager with full control over deployment as well as integration with your monitoring and SCM. The logic of this tool wouldn’t be particularly complex, but it needs to support a wide variety of backends. Advanced evaluation processes would also need lots of configuration for defining statistical comparisons between different versions and logic for batching, splitting and rollups.
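For the evaluation piece, the core comparison could start as simple as a two-proportion z-test on the error rates of the old and new versions, shown here in plain Python; real tooling would need far more configuration than a single hard-coded threshold:

```python
from math import sqrt


def error_rate_z_score(errors_old: int, requests_old: int,
                       errors_new: int, requests_new: int) -> float:
    """Two-proportion z statistic comparing error rates of two versions.
    Positive values mean the new version errs more often than the old."""
    p_old = errors_old / requests_old
    p_new = errors_new / requests_new
    pooled = (errors_old + errors_new) / (requests_old + requests_new)
    se = sqrt(pooled * (1 - pooled) * (1 / requests_old + 1 / requests_new))
    if se == 0:
        return 0.0  # no errors observed anywhere; nothing to flag
    return (p_new - p_old) / se


def looks_worse(errors_old: int, requests_old: int,
                errors_new: int, requests_new: int,
                z_threshold: float = 3.0) -> bool:
    """Flag the new version if its error rate is significantly higher."""
    return error_rate_z_score(errors_old, requests_old,
                              errors_new, requests_new) > z_threshold
```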
It would be possible to do a simple version of this manually: deploy to a slice of production by hand after approval. However, merge conflicts would cause a lot of pain, and forgotten versions that were never cleaned up would cause bugs and outages.
A quality implementation would likely also integrate with the app to provide hard-coded per-user routing, giving specific users access to early previews and making it possible to test fixes on the users who are actually experiencing the problems.
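That per-user routing hook could start out as small as this (the user lists and version labels are placeholders):

```python
# Hypothetical per-user routing: preview users and users reproducing a bug
# get pinned to a specific candidate version; everyone else gets stable.
PREVIEW_USERS = {"alice", "bob"}            # early-preview opt-ins
PINNED_USERS = {"carol": "candidate-fix"}   # users testing a specific fix


def version_for(user_id: str, stable: str, candidate: str) -> str:
    if user_id in PINNED_USERS:
        return PINNED_USERS[user_id]
    if user_id in PREVIEW_USERS:
        return candidate
    return stable
```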