Context
Given a repository with a local development container (aka dev container) that provides all the tooling required for development, does it make sense to reuse that container to run the same tooling in the Continuous Integration (CI) pipelines?
Considered Options
- Run CI pipelines in the dev container with a container registry
- Run CI pipelines in the dev container by building the image locally
- Run CI pipelines in the native environment
Open questions:
- How does the pipeline run time vary between each of these options?
Below are the pros and cons of each approach:
Run CI pipelines in the dev container with a container registry
| Pros | Cons |
|---|---|
| Utility scripts work out of the box | The container needs to be rebuilt for each run, since the branch being built may contain changes to it |
| No surprises for developers: local outputs (of linting, for instance) are the same as in CI | Not everything in the container is needed for the CI pipeline¹ |
| Rules used (for linting or unit tests) are the same in CI | Some tools are installed via a definition in `devcontainer.json`, resulting in a container that differs from the dev environment² |
| All tooling and versions are defined in a single place | Some pipeline tasks will not be available³ |
| Tools/dependencies are already present | Requires access to a container registry to host the container used by the pipeline⁴ |
| The dev container itself is tested: it includes all new tooling and is verified not to be broken | |
¹: the container size can be reduced by exporting only the layer that contains the tooling needed for the CI pipeline. This would require building the image without the dev-only tasks (see the sketch below).
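A minimal sketch of that idea, assuming the dev container's Dockerfile is a multi-stage build with a hypothetical `ci` stage holding only the CI tooling:

```yaml
# Hypothetical GitHub Actions step: build only the assumed "ci" stage of a
# multi-stage .devcontainer/Dockerfile so the resulting image carries just
# the CI tooling, not the full dev environment.
- name: Build CI-only image
  run: docker build --target ci -t devcontainer:ci -f .devcontainer/Dockerfile .
```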
²: building the dev container with GitHub - devcontainers/ci builds the container with the `devcontainer.json`. Example here: devcontainers/ci · Getting Started (a workflow sketch follows).
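A minimal GitHub Actions sketch of that approach using the devcontainers/ci action; the image name and the command run inside the container are placeholders:

```yaml
# Build the container from .devcontainer/devcontainer.json and run a command
# inside it, without pushing the image anywhere.
- uses: devcontainers/ci@v0.3
  with:
    imageName: ghcr.io/my-org/devcontainer   # hypothetical image name
    push: never
    runCmd: make lint                        # hypothetical CI command
```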
³: with container jobs in Azure DevOps (AzDO), all tasks can be used (as far as I can tell); a pipeline sketch follows. Reference: Dockerizing DevOps V2 - AzDO container jobs - DEV Community
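A minimal Azure Pipelines sketch of a container job, assuming the dev container image has already been pushed to a registry (the image reference and command are placeholders):

```yaml
# Every step of this job runs inside the dev container image.
jobs:
- job: lint
  pool:
    vmImage: ubuntu-latest
  container:
    image: ghcr.io/my-org/devcontainer:latest   # hypothetical image reference
    # endpoint: my-registry-connection          # required for a private registry
  steps:
  - script: make lint   # placeholder command; regular AzDO tasks can also run here
```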
⁴: within GitHub Actions, the default `GITHUB_TOKEN` can be used to access the container registry; see the example below.
```yaml
# Build the dev container, push it to the GitHub Packages Docker registry,
# and reuse cached layers between runs.
- uses: whoan/docker-build-with-cache-action@v5
  id: cache
  with:
    username: ${{ github.actor }}
    password: "${{ secrets.GITHUB_TOKEN }}"
    registry: docker.pkg.github.com
    image_name: devcontainer
    dockerfile: .devcontainer/Dockerfile
```
Run CI pipelines in the dev container by building the image locally
| Pros | Cons |
|---|---|
| Utility scripts work out of the box | The container needs to be rebuilt for each run, since the branch being built may contain changes to it |
| Rules used (for linting or unit tests) are the same in CI | Not everything in the container is needed for the CI pipeline¹ |
| No surprises for developers: local outputs (of linting, for instance) are the same as in CI | Some tools are installed via a definition in `devcontainer.json`, resulting in a container that differs from the dev environment |
| All tooling and versions are defined in a single place | Some pipeline tasks will not be available |
| Tools/dependencies are already present | Building the image for each pipeline run is slow² |
| The dev container itself is tested: it includes all new tooling and is verified not to be broken | |
¹: the container size can be reduced by exporting only the layer that contains the tooling needed for the CI pipeline
²: this can be mitigated by adding image caching that does not rely on a container registry; see the sketch below
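A minimal GitHub Actions sketch of that mitigation, assuming Docker Buildx and the GitHub Actions cache backend (`type=gha`); the tag is a placeholder:

```yaml
# Build the dev container on the runner, caching layers in the GitHub Actions
# cache rather than a container registry; the image is loaded locally, not pushed.
- uses: docker/setup-buildx-action@v3
- uses: docker/build-push-action@v5
  with:
    context: .
    file: .devcontainer/Dockerfile
    tags: devcontainer:ci    # hypothetical local tag
    load: true
    push: false
    cache-from: type=gha
    cache-to: type=gha,mode=max
```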
Run CI pipelines in the native environment
| Pros | Cons |
|---|---|
| Any available pipeline task can be used | Two sets of tooling and their versions need to be kept in sync |
| No container registry is required | Startup can take some time, depending on the tools/dependencies required |
| The agent is always up to date with security patches | The dev container should still be built in each CI run, to verify that changes within the branch haven't broken it |
Tools here means, for example, Terraform, Terragrunt, linting utilities, or the unit test framework. Dependencies are what our Python code requires to run (the Purview SDK, for instance).
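A minimal GitHub Actions sketch of the native approach, where tool versions are pinned directly in the pipeline and have to be kept in sync with the dev container by hand (the versions shown are placeholders):

```yaml
# Install the tooling directly on the agent; these pins duplicate the versions
# defined in the dev container and must be updated in both places.
- uses: hashicorp/setup-terraform@v3
  with:
    terraform_version: 1.5.7    # placeholder version
- uses: actions/setup-python@v5
  with:
    python-version: "3.11"      # placeholder version
- run: pip install -r requirements.txt   # Python dependencies (e.g. the Purview SDK)
```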