Introduction
A few months ago, while working on a critical deployment for a client, we faced an unexpected issue: the deployment took fo...
I'd suggest using a somewhat bigger base image in the build phase, then switching to alpine or a slim variant in the application phase and moving the deps over to it. From what I know and have experienced, using a small image directly from the start can cause some dependency installations to fail.
Thank you for sharing your insight! You’re absolutely right—starting with a slightly larger base image in the build phase can help avoid issues with dependency installations, especially when using minimal images like Alpine. Transitioning to a smaller image (like Alpine or a slim variant) in the runtime phase is indeed a smart way to balance compatibility and optimization.
For Node.js projects specifically, using the node image directly in the build phase is a great option since it comes pre-configured for most setups. Similarly, other tech stacks might benefit from base images tailored to their needs during the build phase.
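As a rough illustration (the image tags, file paths, and the `build` script are assumptions for the sketch, not details from the article), a Node.js multi-stage Dockerfile along those lines might look like this:

```dockerfile
# Build stage: full Debian-based Node image, so native modules compile against glibc without surprises
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build   # assumes a "build" script exists in package.json

# Runtime stage: small Alpine image with only production dependencies and the built output
FROM node:20-alpine AS runtime
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app/package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
CMD ["node", "dist/index.js"]
```

Note that production dependencies are reinstalled inside the Alpine stage rather than copied from the build stage, which sidesteps musl/glibc mismatches for native modules.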
Installing dependencies on a non-Alpine image can pull in library versions that are incompatible with Alpine, because Alpine uses musl libc while most other base images use glibc.
Another point: bumping the version in your package.json invalidates the Docker cache layer.
There are techniques to avoid this, such as a temporary step that sets the package.json version to a fixed value (1.0, for instance) before the install.
Thank you for diving into these technical details—great points!
You’re absolutely correct about the musl libc vs. glibc difference. Alpine’s musl libc can indeed cause compatibility issues with some libraries, which is why choosing the right base image for dependency installation is crucial. This is also why multi-stage builds work so well—dependencies can be built in a compatible environment (glibc-based) and then moved to a smaller runtime image like Alpine if desired.
The point about package.json is spot on as well. When the version changes, it invalidates the Docker cache, which can significantly increase build times. I really like the technique you mentioned, where a temporary package.json with a static version is used during installation—it’s a clever way to maintain cache efficiency.
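For anyone curious, here is a minimal sketch of that trick (the stage names and the node one-liner are illustrative; note that package-lock.json also embeds the version and may need the same normalization):

```dockerfile
# Stage that pins the version field so routine version bumps don't change the file's checksum
FROM node:20 AS manifest
WORKDIR /app
COPY package.json ./
RUN node -e "const fs = require('fs'); const p = JSON.parse(fs.readFileSync('package.json')); p.version = '1.0.0'; fs.writeFileSync('package.json', JSON.stringify(p));"

# Dependency stage: COPY is cached by file checksum, so the install layer below
# is only rebuilt when dependencies actually change, not when the version is bumped
FROM node:20 AS deps
WORKDIR /app
COPY --from=manifest /app/package.json ./
RUN npm install
```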
You said moving from `ubuntu:latest` to `alpine:latest` "reduced the image size from 800MB to less than 30MB". The current version of Ubuntu (sha fec8bfd95b54) is only 78 MB, and Alpine is 7.8 MB. I think there would have had to be other changes to get such a reduction.

On the fourth point, BuildKit supports heredocs. Rather than endless `&& \` continuations, a heredoc is more readable, especially for more complex RUN instructions.

Good article.
Will try to implement some of these techniques at my workplace.
Thank you! Glad you found it helpful—let me know how it works out for you! 🚀
Wonderful article! Well, I have a question: does docker-slim reduce image vulnerabilities for open-source images?
Thank you for the kind words! 😊
As for your question: docker-slim does help in reducing image vulnerabilities to some extent. By removing unnecessary files, libraries, and binaries, it reduces the image’s attack surface. However, it’s important to note that it doesn’t directly fix known vulnerabilities in dependencies. For that, tools like Trivy or Snyk are super helpful to scan and address those issues.
Using docker-slim alongside regular vulnerability scans is a great combo for both smaller and safer images! 🚀
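A quick sketch of that combination (the image name myapp:latest is hypothetical, and the exact output tag produced by docker-slim may differ by version):

```sh
# Minify the image with docker-slim (disable the HTTP probe if the app isn't an HTTP service)
docker-slim build --http-probe=false myapp:latest   # typically produces myapp.slim

# Scan both the original and the slimmed image for known CVEs with Trivy
trivy image myapp:latest
trivy image myapp.slim
```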
Very helpful! My trainer didn't teach me about these; he just skipped over them. Thanks, bro!
Glad you found it helpful! Sometimes trainers skip over these finer details, but exploring them yourself makes the learning even more rewarding. Happy optimizing, bro! 🚀
All of this is in the docker init command.
Multi-stage builds are also awesome for optimizing Docker stuff: docs.docker.com/build/building/mul...
Useful. Thanks.
The article does a great job of listing best practices for building Docker images. It's neat and precise. Keep up the great work.