janosmurai for axem • Originally published at axemsolutions.io

Tool Containerization Best Practices For Embedded Software Development

Containerization of development tools in the embedded software industry is still emerging but gaining traction rapidly. More teams are beginning to use containers to standardize and streamline their development environments, improving reproducibility and collaboration. However, the industry still faces challenges in fully integrating containerized workflows.

In this blog post, we will explore the process of containerizing a development environment for embedded software development, using the STM32F1 target as a basic example. Instead of relying on the vendor's integrated IDE, we will assemble the necessary development tools ourselves. This approach allows us to encapsulate the tools and their dependencies in a container image for improved consistency and portability.

We've already discussed the advantages of containerization, but now let's delve into the practical steps to create a containerized development environment. Our example toolkit includes:

  • Build system: GNU Make
  • Toolchain: Arm GNU Toolchain (the official compiler toolchain from Arm)
  • Debugger and deployer: stlink-org (an open-source implementation of ST's STLINK Tools)
  • Test environment: CppUTest (a C/C++ based test framework)

A container image is essentially a static, executable package from which containers can be created on a computing system. The main objective of this tutorial is to use containerization to isolate the development tools and their dependencies. To achieve this, we will use Docker as our container engine, so we will need to craft a Dockerfile. (See the official documentation to install Docker on your system.)

Before creating the container image, the initial decision revolves around selecting an appropriate base image. For development containers, a Debian-based image is generally sufficient. This base image is relatively compact, stable, and comes with many of the dependencies pre-installed.
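If you want a quick feel for that footprint before committing, you can pull the image and check its size locally (the exact number varies by snapshot, but debian:bullseye is on the order of ~120 MB uncompressed):

docker pull debian:bullseye
docker image ls debian:bullseye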

The next crucial step is to gather all the necessary tools and their dependencies. Here's a breakdown of how to obtain each component:

  • Make and stlink-org: You can install both directly from the Debian repository (as the make and stlink-tools packages) using the apt package manager.
  • GNU Arm toolchain (gcc-arm-none-eabi): To enable debugging for the toolchain, you must install the gdb package via apt. Additionally, you'll need the wget and bzip2 packages to download and unpack the toolchain's binary, which is available from Arm's official file server.
  • CppUTest: CppUTest is installed from source; you can obtain the source files from the project's GitHub repository using git. To build them successfully, you'll need the following packages: g++, cmake, libtool, and autoconf.

To ensure that you've gathered all the necessary dependencies, a helpful tip is to try installing each tool in a container created from the chosen base image. For example, if you're using a Debian base image, you can run the following command:
docker run -it debian:bullseye /bin/bash
This opens a shell in a Debian-based container. In this shell, try running the commands you'd like to add to your Dockerfile. This approach is much more efficient than troubleshooting the Dockerfile by rebuilding the image on every iteration.
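For example, inside that shell you might dry-run a couple of the install steps before committing them to the Dockerfile (the package names match the Dockerfile in the next section):

apt update -y
apt -y install make stlink-tools
# Sanity-check that the tools landed on the PATH
make --version
st-flash --version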

Creating a monocontainer

Now that we know how to obtain our development tools, it's time to dive into containerization. For this example, we've prepared a Dockerfile that encapsulates the toolset and its dependencies within a monocontainer.

# Use the Debian base image as our starting point.
FROM debian:bullseye

# Install the required packages.
RUN apt update -y && \
    apt -y install g++=4:10.2.1-1 \
                   cmake=3.18.4-2+deb11u1 \
                   libtool=2.4.6-15 \
                   autoconf=2.69-14 \
                   git=1:2.30.2-1+deb11u2 \
                   gdb=10.1-1.7 \
                   wget=1.21-1+deb11u1 \
                   bzip2=1.0.8-4 \
                   make=4.3-4.1 \
                   stlink-tools=1.6.1+ds-3

# Clone and install CppUTest
RUN git clone https://github.com/cpputest/cpputest
WORKDIR /cpputest
RUN autoreconf . -i && \
    ./configure && \
    make tdd
ENV CPPUTEST_HOME=/cpputest

# Set the working directory for your project
WORKDIR /work

# Download and set up the GNU Arm toolchain
RUN wget https://developer.arm.com/-/media/Files/downloads/gnu-rm/10.3-2021.10/gcc-arm-none-eabi-10.3-2021.10-x86_64-linux.tar.bz2 && \
    tar -xjf gcc-arm-none-eabi-10.3-2021.10-x86_64-linux.tar.bz2 && \
    rm gcc-arm-none-eabi-10.3-2021.10-x86_64-linux.tar.bz2 && \
    mv gcc-arm-none-eabi-10.3-2021.10 /opt/gcc-arm

ENV PATH="/opt/gcc-arm/bin:${PATH}"

Here's a breakdown of the Dockerfile:

  1. We start with a Debian base image.
  2. We update the package list and install the required software dependencies directly from the Debian repository using the apt package manager.
  3. We clone the CppUTest repository from GitHub, build it in the /cpputest directory, and set the CPPUTEST_HOME environment variable to the installation path.
  4. We configure the working directory as /work.
  5. The GNU Arm toolchain is downloaded, unzipped, and placed in the /opt/gcc-arm directory. We also update the PATH environment variable to ensure the toolchain is accessible system-wide.

To create the container image, run the following command:

docker build -t dev_env_image .

On a PC with an Intel i7-8550U (8 threads) @ 4.000 GHz, this process took approximately 5 minutes and 22 seconds.

With the containerized development environment ready, you can experiment with it using a simple demo repository. Clone the repository with:

git clone https://github.com/axem-solutions/example

Navigate to the root of the example directory.

cd example

Run the container.

docker run --privileged --rm -it -v "$(pwd)":/work dev_env_image:latest bash

Here’s a breakdown of the command:

  • --privileged: Gives access to the USB devices. This is not a secure solution and is used here only for simplicity (a narrower alternative is sketched after this list).
  • --rm: Removes the container after it stops.
  • -v "$(pwd)":/work: Mounts the current working directory to /work.
  • dev_env_image:latest: Uses the image we have just built.
  • bash: Runs bash inside the container.
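If you'd rather not run a fully privileged container, a narrower (though still not fully locked-down) option is to pass through only the USB bus, assuming your ST-LINK probe enumerates under /dev/bus/usb on the host. Note that devices hot-plugged after the container starts won't show up:

docker run --rm -it --device=/dev/bus/usb -v "$(pwd)":/work dev_env_image:latest bash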

Inside the container, you can use the tools as you would natively.

  • Build the project:

make

  • Run the test cases:

cd /work/app/test
make

  • Deploy to the target (0x8000000 is the base address of the STM32F1's internal flash):

cd /work/build
st-flash write tutorial.bin 0x8000000

Another option is to run the tasks directly with the container; when a task finishes, the container stops.

  • Build the project:

docker run --rm -v "$(pwd)":/work dev_env_image:latest make

  • Run the test cases:

docker run --rm -v "$(pwd)":/work dev_env_image:latest /bin/sh -c "cd app/test; make"

  • Deploy to the target:

docker run --privileged --rm -v "$(pwd)":/work dev_env_image:latest /bin/sh -c "cd build; st-flash write tutorial.bin 0x8000000"
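To avoid retyping these long commands, one host-side convenience (purely illustrative, not part of any image) is to wrap them in shell aliases:

alias denv_build='docker run --rm -v "$(pwd)":/work dev_env_image:latest make'
alias denv_test='docker run --rm -v "$(pwd)":/work dev_env_image:latest /bin/sh -c "cd app/test; make"'
alias denv_flash='docker run --privileged --rm -v "$(pwd)":/work dev_env_image:latest /bin/sh -c "cd build; st-flash write tutorial.bin 0x8000000"'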

Problems with the monocontainer approach

While placing each tool in a single container may seem like a straightforward approach, it can lead to several challenges reminiscent of issues faced with traditional Integrated Development Environments (IDEs). In this chapter, we will discuss the downsides of the monocontainer approach and why it may not always be the ideal solution.

Scalability and Maintainability: The monocontainer approach may work well for simple projects, as demonstrated in our example. However, in real-world scenarios, software development often involves a multitude of tools. As more tools are added, the Dockerfiles can become excessively lengthy, potentially stretching to hundreds of lines. This can lead to a lack of scalability and make the environment hard to maintain.

Difficult bug localization: The containerized tools may share resources and have interdependencies. Making modifications to the image can inadvertently result in complex and challenging-to-detect malfunctions. This can make the process of pinpointing the root cause of a problem even more difficult.

Time-Consuming Modifications: Docker builds are structured as a series of ordered build instructions defined by the Dockerfile. Each instruction roughly translates to an image layer. When building an image, Docker attempts to reuse layers from previous builds. However, if a layer has changed since the last build, that layer and all subsequent layers must be rebuilt. This means the lower the modified layer sits in the stack, the longer the rebuild takes. Consequently, maintaining large Dockerfiles can quickly become a time-consuming task.
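To make this concrete, here is a schematic Dockerfile (not one of this post's images; the copied scripts/ directory is purely illustrative) showing how instruction order interacts with the layer cache:

# Rarely-changing layer: cached across rebuilds as long as this line
# (and the base image above it) stays the same.
FROM debian:bullseye
RUN apt update -y && apt -y install make

# Frequently-changing step placed last: editing the scripts invalidates
# only this layer. Had the apt line above changed instead, this layer
# would be rebuilt too, even though its own text is identical.
COPY scripts/ /opt/scripts/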

Image Variants: Throughout the development lifecycle, there may be a need for different image variants. For example, when setting up a Continuous Integration/Continuous Deployment (CI/CD) server, some tools may become unnecessary (e.g., the debugger), while new tools (e.g., the CI/CD service) are required to run CI/CD pipelines. Alternatively, changes in the project may necessitate the creation of a new development environment while retaining the old one for compatibility reasons.

As mentioned earlier, Docker images are composed of layers. These layers can be shared among images until the point where the first difference occurs. This means that layers of image variants, starting from the first one that differs from the original, consume additional space on the host storage.

Working on Multiple Projects: In scenarios where developers are simultaneously working on several projects, each project may have its own dedicated image. However, some tools used across these projects may be the same. Just like with image variants, Dockerfiles should be thoughtfully constructed to minimize storage consumption. Once the first differing layer is encountered, subsequent layers are not shared, even if they are identical.

Size constraints: Huge Dockerfiles result in huge container images, which can easily add up to hundreds of gigabytes across projects. Developers working on multiple projects can quickly run out of available local storage. The size of the container images also becomes problematic when they are pulled over metered connections, which happens frequently on CI/CD providers.

In the following chapters, we'll explore potential solutions to these challenges and how to leverage dedicated tool images for a more efficient and scalable development environment.

Solution: Dedicated Tool Images

Scalability and Maintenance: Placing each tool into its own container resolves the scalability and maintenance issues. With a separate container per tool, the development environment becomes highly modular: tools can be added or removed as needed without affecting the rest of the setup. This modularity simplifies the management of the development environment.

Troubleshooting: Troubleshooting is greatly simplified with dedicated tool images. When an issue arises, you only need to inspect the Dockerfile of the specific malfunctioning tool. This pinpointed approach reduces the complexity of debugging and minimizes the potential for conflicts between tools.

Efficient Build Process: With separate tool images, each image has far fewer layers, so rebuilding an image, even when a low layer was modified, is much faster. The build process becomes more efficient, since each Dockerfile stays small and is easy to keep up-to-date.

Adaptation and Variants: Changing tools or creating new image variants becomes straightforward. You can swap out a tool image for a new one without affecting other tools. This flexibility allows for the quick creation of new development environments without multiplying storage consumption.

Storage Efficiency: Dedicated tool images optimize storage usage. Only the layers that differ from the original image occupy extra space on the host. This minimizes the storage footprint and is especially useful when working on multiple projects or using variants of the same image.
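If you want to see this sharing on your own machine, Docker can report it directly (output columns vary slightly between Docker versions):

# Layer-by-layer history of a single image
docker history axemsolutions/cpputest
# Shared vs. unique size per image on the host
docker system df -v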

Overall, this method of utilizing dedicated tool images effectively resolves the limitations and challenges associated with the monocontainer approach. It provides a modular, efficient, and scalable way to manage development environments, making it easier to adapt to changing requirements while simplifying troubleshooting and minimizing storage consumption.

Separating the Tools for the Example Project

In this chapter, we'll revisit the example project and explore how to separate the tools effectively. Understanding the communication between the tools is crucial to decide how they can be distributed into individual containers.

(Diagram: how the tools in the development environment relate to each other)

CppUTest

CppUTest and the GNU Arm toolchain don't rely on each other. They operate on the source files independently, making them suitable for separation. The Make tool directly calls CppUTest, so it must be present in the same image; Make itself is pulled in by the cmake package (as a recommended dependency, which apt installs by default). Below is the Dockerfile for CppUTest:

FROM debian:bullseye

# Install the required packages.
RUN apt update -y && \
    apt -y install g++=4:10.2.1-1 \
                   cmake=3.18.4-2+deb11u1 \
                   libtool=2.4.6-15 \
                   autoconf=2.69-14 \
                   git=1:2.30.2-1+deb11u2

RUN git clone https://github.com/cpputest/cpputest

WORKDIR /cpputest

RUN autoreconf . -i && \
    ./configure && \
    make tdd

ENV CPPUTEST_HOME=/cpputest

WORKDIR /work

Build command:

docker build -t axemsolutions/cpputest .

Build time: 2m 24s
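Although full usage is covered in the Tutorial mentioned below, a quick smoke test of this image against the earlier example repository might look like this (run from the repo root, assuming the same layout as before):

docker run --rm -v "$(pwd)":/work axemsolutions/cpputest /bin/sh -c "cd app/test; make"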

GNU Arm Toolchain

The Make tool directly calls the GNU Arm toolchain, so it must reside in the same image. Here's the Dockerfile:

FROM debian:bullseye

# Install the required packages.
RUN apt update -y && \
    apt -y install gdb=10.1-1.7 \
                   wget=1.21-1+deb11u1 \
                   bzip2=1.0.8-4 \
                   make=4.3-4.1

# Installing the gnu-arm-none-eabi toolchain
WORKDIR /work

RUN wget https://developer.arm.com/-/media/Files/downloads/gnu-rm/10.3-2021.10/gcc-arm-none-eabi-10.3-2021.10-x86_64-linux.tar.bz2 && \
    tar -xjf gcc-arm-none-eabi-10.3-2021.10-x86_64-linux.tar.bz2 && \
    rm gcc-arm-none-eabi-10.3-2021.10-x86_64-linux.tar.bz2 && \
    mv gcc-arm-none-eabi-10.3-2021.10 /opt/gcc-arm

ENV PATH="/opt/gcc-arm/bin:${PATH}"

Build command:

docker build -t axemsolutions/make_gnu-arm .

Build time: 2m 14s
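As with the monocontainer, building the example project with this image is a one-liner (run from the example repo root):

docker run --rm -v "$(pwd)":/work axemsolutions/make_gnu-arm make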

Stlink-org

Stlink-org communicates with the GNU Arm toolchain's GDB client over TCP/IP. This network connection can also be established between separate containers, allowing stlink-org to be placed in its own container. Here's the Dockerfile for stlink-org:

FROM debian:bullseye

# Install the required packages.
RUN apt update -y && \
    apt -y install stlink-tools=1.6.1+ds-3

WORKDIR /work

Build command:

docker build -t axemsolutions/stlink-org .

Build time: 7s
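As a sketch of how this TCP/IP split could work in practice (the network name, container name, and .elf path are illustrative, not taken from the example repo):

# Create a user-defined network so the containers can reach each other by name
docker network create dev_net

# Terminal 1: the GDB server. st-util from stlink-tools listens on TCP port
# 4242 by default; --privileged is again used only for simplicity.
docker run --privileged --rm --network dev_net --name gdbserver axemsolutions/stlink-org st-util

# Terminal 2: the GDB client from the toolchain image, connecting over the network
docker run --rm -it --network dev_net -v "$(pwd)":/work axemsolutions/make_gnu-arm arm-none-eabi-gdb build/tutorial.elf -ex "target remote gdbserver:4242"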

The separated tool images significantly reduce build times, saving valuable time when frequent rebuilds are necessary.

While the usage of these images is not detailed in this blog post, you can find comprehensive instructions in our Tutorial. This tutorial covers setting up and compiling a project for the NUCLEO-F103RB, obtaining tool images from axem’s Docker Hub, and flashing and debugging the application on the target using VS Code and its Dev Containers extension.

The DEM Solution

While separating tools into their individual containers offers numerous advantages, it can lead to a proliferation of container images, which can quickly become unmanageable. It becomes challenging to keep track of which images are needed for the different projects. To address this issue, we introduced a tool called DEM (Development Environment Manager). DEM enables the creation of development environments from tool images and allows them to be assigned to specific projects.

To learn more about DEM and its capabilities, please visit the project's GitHub repository and explore its detailed documentation.

The axem Open Tool Dockerfiles repo

The approach of creating dedicated containers not only simplifies the management of development environments but also encourages the reuse of tool images. At axem, our team is dedicated to developing tool images that can serve as fundamental building blocks for constructing new development environments. That's why we've made our repository of Dockerfiles open to the community.

Open Tool Dockerfiles (OTD) is an open and collaborative repo for storing and sharing Dockerfiles, specifically tailored for embedded software development tools. Our mission is to create a community-driven repository where developers can freely contribute, access, and utilize containerized build systems, debuggers, toolchains and more. We welcome contributions from everyone, so feel free to add new Dockerfiles to the repository.

The generated tool images will be easily accessible from our free and open-source registry called axem Open Registry (aOR). This registry is designed to make the process of building development environments even more efficient, giving you the tools you need to succeed in your projects.

Conclusion: A Containerized Future for Embedded Software Development

In the realm of embedded software development, adapting to the growing complexities of projects is a key challenge. Monocontainers, once appealing, can become unwieldy and inefficient. Troubleshooting, adapting to new requirements, and working on multiple projects all pose challenges.

The solution lies in dedicated tool images. By separating tools into isolated containers, we address scalability, maintenance, and troubleshooting issues. The Development Environment Manager (DEM) streamlines environment management, while the axem Open Registry (aOR) offers a central repository for tool images.

This transition from monocontainers to modular tool images, coupled with DEM and aOR, marks a significant leap in efficient embedded software development. It allows for increased agility, time savings, and seamless project work. Embracing containerization is the future, enabling developers to excel in the ever-evolving field of embedded software.

Top comments (2)

Martin Baun

Great piece! I enjoy the fact that I don't have to install apache, mysql, mongodb, nginx, oracle, etc. on my laptop to work on different projects.

janosmurai

Thanks Martin!