In the ancient times of containers (really more like 4 years ago), Docker was the only player in the container game. That's not the case anymore, though: Docker is no longer the only option, but rather just another container engine on the landscape. Docker lets us build, run, pull, push and inspect container images, but for each of these tasks there are alternative tools that might just do a better job. So, let's explore the landscape and (just maybe) uninstall and forget about Docker altogether...
Why Not Use Docker, Though?
If you've been a Docker user for a long time, I think it will take some persuading for you to even consider switching to different tooling. So, here goes:
First of all, Docker is a monolithic tool. It's a tool that tries to do everything, which generally is not the best approach. Most of the time it's better to choose a specialized tool that does just one thing, but does it really well.
If you are scared of switching to a different set of tools because you would have to learn a different CLI, a different API or different concepts in general, that won't be a problem. Choosing any of the tools shown in this article can be completely seamless, as they all (including Docker) adhere to the same specifications under the OCI, which is short for Open Container Initiative. The OCI maintains specifications for the container runtime, container distribution and container images, which cover all the features needed for working with containers.
Thanks to the OCI you can choose a set of tools that best suit your needs and at the same time you can still enjoy using the same APIs and same CLI commands as with Docker.
So, if you're open to trying out new tools, let's compare the advantages, disadvantages and features of Docker and its competitors to see whether it actually makes sense to consider ditching Docker for some shiny new tool.
Container Engines
When comparing Docker with any other tool, we need to break it down by its components, and the first thing we should talk about is container engines. A container engine is a tool that provides a user interface for working with images and containers, so that you don't have to mess with things like SECCOMP rules or SELinux policies. Its job is also to pull images from remote repositories and expand them onto your disk. From the user's point of view it also runs the containers, but in reality its job is to create a container manifest and a directory with image layers, which it then passes to a container runtime such as runc or crun (which we will talk about a little later).
There are many container engines available, but the most prominent competitor to Docker is Podman, developed by Red Hat. Unlike Docker, Podman doesn't need a daemon to run and doesn't need root privileges, which has been a long-standing concern with Docker. As the name suggests, Podman can not only run containers but also pods. In case you are not familiar with the concept: a pod is the smallest compute unit in Kubernetes. It consists of one or more containers - the main one and so-called sidecars that perform supporting tasks. This makes it easier for Podman users to later migrate their workloads to Kubernetes. So, as a simple demonstration, this is how you would run two containers in a single pod:
~ $ podman pod create --name mypod
~ $ podman pod list
POD ID        NAME   STATUS   CREATED        # OF CONTAINERS  INFRA ID
211eaecd307b  mypod  Running  2 minutes ago  1                a901868616a5
~ $ podman run -d --pod mypod nginx # First container
~ $ podman run -d --pod mypod nginx # Second container
~ $ podman ps -a --pod
CONTAINER ID  IMAGE                           COMMAND               CREATED        STATUS            PORTS  NAMES               POD           POD NAME
3b27d9eaa35c  docker.io/library/nginx:latest  nginx -g daemon o...  2 seconds ago  Up 1 second ago          brave_ritchie       211eaecd307b  mypod
d638ac011412  docker.io/library/nginx:latest  nginx -g daemon o...  5 minutes ago  Up 5 minutes ago         cool_albattani      211eaecd307b  mypod
a901868616a5  k8s.gcr.io/pause:3.2                                  6 minutes ago  Up 5 minutes ago         211eaecd307b-infra  211eaecd307b  mypod
Finally, Podman provides the exact same CLI commands as Docker, so you can just run alias docker=podman and pretend that nothing changed.
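To see what that looks like in practice, here is a hypothetical rootless session using familiar Docker commands through the alias - the image name and port mapping are just examples:

```shell
# Point the familiar command at Podman (no daemon, no root needed)
alias docker=podman

# Plain Docker muscle memory, now running through Podman
docker pull nginx:latest
docker run -d --name web -p 8080:80 nginx:latest
docker ps
docker stop web && docker rm web
```

Note that tools talking directly to the Docker daemon socket (rather than the CLI) are a separate story, so the alias trick covers interactive use, not every integration.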
There are other container engines besides Docker and Podman, but I would consider all of them either dead-end tech or not a suitable option for local development and usage. But to have a complete picture, let's at least mention what's out there:
LXD - LXD is a container manager (daemon) for LXC (Linux Containers). This tool offers the ability to run system containers, which provide a container environment that is closer to a VM. It sits in a very narrow niche and doesn't have many users, so unless you have a very specific use case, you're probably better off using Docker or Podman.
CRI-O - When you google what CRI-O is, you might find it described as a container engine. It really is a container runtime, though. Apart from not actually being an engine, it also isn't suitable for "normal" use. By that I mean that it was built specifically to be used as a Kubernetes runtime (CRI), not for end-user usage.
rkt - rkt ("rocket") is a container engine developed by CoreOS. This project is mentioned here really just for completeness, because the project has ended and its development was halted - therefore it should not be used.
Building Images
With container engines there was really only one alternative to Docker. When it comes to building images though, we have many more options to choose from.
First, let me introduce Buildah. Buildah is another tool developed by Red Hat, and it plays very nicely with Podman. If you have already installed Podman, you might have even noticed the podman build subcommand, which is really just Buildah in disguise, as its binary is included in Podman.
As for its features, it follows the same route as Podman: it's daemonless and rootless and produces OCI-compliant images, so it's guaranteed that your images will run the same way as the ones built with Docker. It's also able to build images from a Dockerfile or a (more suitably named) Containerfile, which is the same thing under a different name. Apart from that, Buildah provides finer control over image layers, allowing you to commit many changes into a single layer. One unexpected but (in my opinion) nice difference from Docker is that images built by Buildah are user-specific, so you will only see the images you built yourself.
Now, considering that Buildah is already included in the Podman CLI, you might be asking why even use the separate buildah CLI? Well, the buildah CLI is a superset of the commands included in podman build, so you might never need to touch it, but by using it you might also discover some extra useful features (for specifics about the differences between podman build and buildah, see the following article).
With that said, let's see a little demonstration:
~ $ buildah bud -f Dockerfile .  # Build using a Dockerfile
~ $ buildah from alpine:latest # Create starting container - equivalent to "FROM alpine:latest"
Getting image source signatures
Copying blob df20fa9351a1 done
Copying config a24bb40132 done
Writing manifest to image destination
Storing signatures
alpine-working-container # Name of the temporary container
~ $ buildah run alpine-working-container -- apk add --update --no-cache python3 # equivalent to "RUN apk add --update --no-cache python3"
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
...
~ $ buildah commit alpine-working-container my-final-image # Create final image
Getting image source signatures
Copying blob 50644c29ef5a skipped: already exists
Copying blob 362b9ae56246 done
Copying config 1ff90ec2e2 done
Writing manifest to image destination
Storing signatures
1ff90ec2e26e7c0a6b45b2c62901956d0eda138fa6093d8cbb29a88f6b95124c
~ $ buildah images
REPOSITORY                TAG     IMAGE ID      CREATED         SIZE
localhost/my-final-image  latest  1ff90ec2e26e  22 seconds ago  51.4 MB
From the script above you can see that you can build images simply using buildah bud, where bud stands for "build using Dockerfile", but you can also take a more scripted approach using Buildah's from, run and copy subcommands, which are equivalent to the FROM image, RUN ... and COPY ... instructions in a Dockerfile.
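To round out that mapping, here is a sketch of how COPY and the image metadata instructions could look with Buildah's copy and config subcommands, continuing from the working container created above - the application file name is made up for illustration:

```shell
# ~ COPY app.py /opt/app.py
buildah copy alpine-working-container ./app.py /opt/app.py

# ~ WORKDIR /opt
buildah config --workingdir /opt alpine-working-container

# ~ ENTRYPOINT ["python3", "/opt/app.py"]
buildah config --entrypoint '["python3", "/opt/app.py"]' alpine-working-container

# Bake everything into a final image
buildah commit alpine-working-container my-python-app
```

The nice part of this approach is that you can interleave these commands with arbitrary shell logic, which a Dockerfile can't do.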
Next up is Google's Kaniko. Kaniko also builds container images from Dockerfile and similarly to Buildah, it also doesn't need a daemon. The major difference from Buildah is that Kaniko is more focused on building images in Kubernetes.
Kaniko is meant to be run as an image, using gcr.io/kaniko-project/executor, which makes sense for Kubernetes but isn't very convenient for local builds and kind of defeats the purpose, as you would need to use Docker to run the Kaniko image to build your images. That being said, if you are looking for a tool for building images in your Kubernetes cluster (e.g. in a CI/CD pipeline), then Kaniko might be a good option, considering that it's daemonless and (maybe) more secure.
From my personal experience though - I used both Kaniko and Buildah to build images in Kubernetes/OpenShift clusters and I think both will do the job just fine, but with Kaniko I've seen some random build crashes and failures when pushing images to the registry.
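For illustration, this is roughly how a local Kaniko build could look - ironically via docker run, which is exactly why it makes more sense inside a cluster. The registry and image names are placeholders:

```shell
# Run the Kaniko executor image, mounting the build context
# and registry credentials into the container
docker run \
  -v "$PWD":/workspace \
  -v "$HOME/.docker/config.json":/kaniko/.docker/config.json:ro \
  gcr.io/kaniko-project/executor:latest \
  --dockerfile /workspace/Dockerfile \
  --context dir:///workspace \
  --destination registry.example.com/myteam/myapp:latest
```

In Kubernetes, the same executor image runs as a pod or job, with the context typically coming from a git URL or object storage instead of a bind mount.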
The third contender here is buildkit, which could also be called the next-generation docker build. It's part of the Moby project (as is Docker) and can be enabled with Docker as an experimental feature using DOCKER_BUILDKIT=1 docker build .... So, what exactly will this bring you? It introduces a bunch of improvements and cool features, including parallel build steps, skipping unused stages, better incremental builds and rootless builds. On the other hand, it still requires a daemon to run (buildkitd). So, if you don't want to get rid of Docker but want some new features and nice improvements, then using buildkit might be the way to go.
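Enabling it is just an environment variable away; a quick sketch (the tag is arbitrary):

```shell
# One-off build with BuildKit enabled
DOCKER_BUILDKIT=1 docker build -t myapp:latest .
```

To enable it permanently, you can set the feature flag in the daemon configuration instead - this is the documented daemon.json knob:

```shell
# /etc/docker/daemon.json
# {
#   "features": { "buildkit": true }
# }
```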
As in the previous section, here we also have a few "honorable mentions" which fill some very specific use cases and wouldn't be one of my top choices:
Source-To-Image (S2I) is a toolkit for building images directly from source code, without a Dockerfile. This tool works well for simple, expected scenarios and workflows, but quickly becomes annoying and clumsy if you need a little too much customization or if your project doesn't have the expected layout. You might consider using S2I if you are not very confident with Docker yet, or if you build your images on an OpenShift cluster, where builds with S2I are a built-in feature.
Jib is another tool by Google, specifically for building Java images. It includes Maven and Gradle plugins, which can make it easy for you to build images without messing with Dockerfiles.
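Assuming the Jib plugin is already configured in your pom.xml, building and pushing an image can be a single Maven invocation - the image name here is a placeholder:

```shell
# Build and push directly to a registry,
# no Dockerfile and no Docker daemon required
mvn compile jib:build -Dimage=registry.example.com/myteam/my-java-app

# Or build into a local Docker daemon instead
mvn compile jib:dockerBuild
```

Gradle users get equivalent jib and jibDockerBuild tasks from the Gradle plugin.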
Last but not least is Bazel, which is anoooother tool by Google. This one is not just for building container images, but rather a complete build system. If you just want to build an image, then diving into Bazel might be a bit of an overkill, but definitely a good learning experience, so if you're up for it, then rules_docker section is a good starting point for you.
Container Runtime
The last big piece of the puzzle is the container runtime, which is responsible for, well, running containers. The container runtime is one part of the container lifecycle/stack that you will most likely not mess with, unless you have some very specific requirements for speed, security, etc. So, if you're tired of me already, you might want to skip this section. If, on the other hand, you just want to know what the options are, then here goes:
runc is the most popular container runtime, created based on the OCI container runtime specification. It's used by Docker (through containerd), Podman and CRI-O, so pretty much everything except LXD (which uses LXC). There's not much else I can add. It's the default for (almost) everything, so even if you ditch Docker after reading this article, you will most likely still be using runc.
One alternative to runc is the similarly (and confusingly) named crun. This is a tool developed by Red Hat and fully written in C (runc is written in Go), which makes it much faster and more memory-efficient than runc. Considering that it's also an OCI-compliant runtime, you should be able to switch to it easily enough if you want to check it out for yourself. Even though it's not very popular right now, it will be in tech preview as an alternative OCI runtime as of the RHEL 8.3 release, and considering that it's a Red Hat product, we might eventually see it become the default for Podman or CRI-O.
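If you want to experiment, swapping runtimes in Podman can be as simple as a flag - a sketch, assuming the crun binary is installed at the usual path:

```shell
# Run a single container with crun instead of the default runtime
podman run --runtime /usr/bin/crun -d nginx

# Check which OCI runtime an existing container is using
podman inspect --format '{{.OCIRuntime}}' <container-id>
```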
Speaking of CRI-O: earlier I said that CRI-O isn't really a container engine, but rather a container runtime. That's because CRI-O doesn't include features like pushing images, which is what you would expect from a container engine. As a runtime, CRI-O uses runc internally to run containers. It's not a runtime you should try using on your machine, as it's built to be used as the runtime on Kubernetes nodes - you can see it described as "all the runtime Kubernetes needs and nothing more". So, unless you are setting up a Kubernetes cluster (or an OpenShift cluster, where CRI-O is already the default), you probably should not touch this one.
The last one for this section is containerd, a CNCF graduated project. It's a daemon that acts as an API facade for various container runtimes and operating systems. In the background it relies on runc, and it's the default runtime for the Docker engine. It's also used by Google Kubernetes Engine (GKE) and IBM Kubernetes Service (IKS). It's an implementation of the Kubernetes Container Runtime Interface (same as CRI-O), which makes it a good candidate for the runtime of your Kubernetes cluster.
Image Inspection and Distribution
The last part of the container stack is image inspection and distribution. This effectively replaces docker inspect and also (optionally) adds the ability to copy/mirror images between remote registries.
The only tool I will mention here that can do these tasks is Skopeo. It's made by Red Hat and is an accompanying tool for Buildah, Podman and CRI-O. Apart from the basic skopeo inspect, which we all know from Docker, Skopeo is also able to copy images using skopeo copy, which lets you mirror images between remote registries without first pulling them down locally. This feature can also act as pull/push if you use a local registry.
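A couple of sketches of both capabilities - the second registry name is a placeholder:

```shell
# Inspect a remote image (manifest, layers, labels) without pulling it
skopeo inspect docker://docker.io/library/nginx:latest

# Mirror an image between two remote registries, no local pull needed
skopeo copy \
  docker://docker.io/library/nginx:latest \
  docker://registry.example.com/mirror/nginx:latest
```

The docker:// prefix is one of Skopeo's transports; others (e.g. containers-storage:, dir:) let you copy to and from local storage as well.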
As a little bonus, I also want to mention Dive, which is a tool for inspecting, exploring and analyzing images. It's a little more user-friendly, provides more readable output, and can dig (or dive, I guess) a bit deeper into your image to analyze and measure its efficiency. It's also suitable for use in CI pipelines, where it can check whether your image is "efficient enough" - in other words, whether it wastes too much space.
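For example, exploring an image interactively or gating a CI pipeline on efficiency might look like this - the threshold flag reflects my reading of Dive's docs, so treat the exact name and value as assumptions:

```shell
# Interactive exploration of layers and wasted space
dive nginx:latest

# CI mode: exit non-zero if the image is not "efficient enough"
CI=true dive nginx:latest --lowestEfficiency=0.9
```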
Conclusion
This article wasn't meant to persuade you to completely ditch Docker; rather, its goal was to show you the whole landscape and all the options for building, running, managing and distributing containers and their images. Each of these tools, including Docker, has its pros and cons, and it's important to evaluate which set of tools suits your workflow and use case best. I hope this article helps you with that.
Top comments (53)
Docker is very similar to npm, as it's basically a package manager and package-hosting platform. Monolithic systems seem complex and annoying, but they usually win out as they have an integrated and smooth workflow for developers... I think npm and Docker are great tools and aren't going away any time soon...
I have so many questions about the first two paragraphs... 😕
Happy to answer any questions, was it unclear?
A package manager or package-management system is a collection of software tools that automates the process of installing, upgrading, configuring, and removing computer programs for a computer's operating system in a consistent manner.[1]
A package manager deals with packages, distributions of software and data in archive files. Packages contain metadata, such as the software's name, description of its purpose, version number, vendor, checksum (preferably a cryptographic hash function), and a list of dependencies necessary for the software to run properly.
Dependencies... install in OS... hmmmm
Containerd is probably not going anywhere, but if docker isn't able to find a way to turn a profit I don't think they'll be staying around...
how does npm profit?
They don't afaik. They got acquired by microsoft, before that they were using investor money to keep the lights up.
Npm have enterprise plans (private registries and other tools), but I'm not sure if they actually make a profit
NPM has not only enterprise plans but also personal plans.
npmjs.com/products
The fact they have paid plans does not mean they're making a profit. I've never worked at a company which pays for their products - the offering is not worthwhile since you can just use ssh packages.
The company I worked at subscribed to a private plan; they didn't want to share their techniques and skills.
I don't understand... if these tools are literally drop-in replacements for docker to the point where you can alias them on the CLI to docker... how is this signaling the end of docker? Especially since these replacements lack the foundational tooling, like 'compose' that actually make docker worth doing in the first place... I am not only unconvinced but not sure what you're even getting at.
The main thing that was missing was docker-compose. I honestly rarely, if ever, use docker without a compose file. It's what is used to run a stack and link containers together, define networks, volumes etc.
UPDATE: I did run into this project it might be of use to others: github.com/containers/podman-compose podman's implementation of docker-compose.
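For anyone who wants to try it, podman-compose is meant to mirror docker-compose usage - a sketch, assuming it's installed via pip:

```shell
# Install the compose shim for Podman
pip3 install --user podman-compose

# Reuses your existing docker-compose.yml
podman-compose up -d
podman-compose down
```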
Everything in this article provides alternatives but I really don't see any of them proving the point the article is trying to convey.
Sure, you can use a different engine. Podman would be cool to not require root. Doing sidecar patterns is very useful, but if you don't have a compose equivalent there's little appeal to it either.
Alternatives that build a docker image? Sure but why when docker does that already? I mean if your ecosystem is in language X and there are tools that integrate better in that ecosystem you may benefit from using different tools, but for the most part, I think i'll stick to docker and their build files.
Splitting the ecosystem so much just makes it IMO overly complicated for no reason. I remember trying to use hbase/hadoop ages ago and their components were so dependent on one another and so overly complicated that they started shipping 'distros' where they had all of their stack bundled together.
I would never use docker if it wasn't a single install that does all of these things and if I had to figure out how to build images, how to pick my container engine and so on. I appreciate having the choices but I'll stick to docker personally.
Till some alternative comes out that is a true drop in replacement for docker which includes (API, engine, build, runtime AND service orchestration ie. docker-compose) none of these seem like legitimate contenders.
Also... the BIGGEST win is that it's simple and 'just works' across every ecosystem: Linux, Windows and Mac. There are several container alternatives, but many of them are Linux-only, which is basically worthless for the problem I'm trying to solve, from dev workflow to production.
Why not use Terraform rather than Docker Compose?
There are multiple use cases for docker. Keep in mind that the developer path is just as important. I would never expect a developer to run terraform to setup their dev environment. That seems a bit overkill.
thanks for this. haven't had the need for containers yet in my work. but I'll surely skip docker and try out podman. 🙏🙏🙏
I didn't see a strong case against Docker here. Just that it can do a lot and requires root privileges. Is that really a reason to abandon the industry standard and go with a lesser-known alternative? What is your main reason you think podman is better just from reading the article?
I don't really pay much attention to what the herd is saying (industry standards). I try things out and if they work good enough for my purpose with good enough stability, security, performance and ease of use I use them for my projects.
some stuff I've chosen over the herd recommendations would be: svelte over vue/react, .net core over node, mongodb over postgres/mysql, vertical slice architecture over layered, servicestack over web api, monoliths with good caching over microservices, etc. and I'm extremely happy with my choices.
personally I like bare metal deployments compared to containers because currently my ci/cd pipelines takes care of pushing my builds out to my servers without much hassle. in the future when theres a need to manage clusters with hundreds of nodes, I'll start using k8s or something. needing root privileges is a huge no-no for docker in my book. so I'll be looking for alternatives.
The ’herd’ = industry standards? We have and need industry standards for a reason. The term ’herd’ has negative connotations and is not an effective or positive term to describe standardization, IMO.
okaaay, let me try and rephrase then... "the widely accepted popular choices/ beliefs/ patterns, etc." basically what i'm trying to say is: question and evaluate everything for yourself. don't just blindly follow what the masses are doing. i believe that's the herd mentality, yes? i have no problems with the industry coming up with standards so that everybody's on the same page. hope i've explained my intentions clearly.
I think one of the main parts here is the word standard. Docker is not the standard, standard is OCI. Docker complies with OCI, Podman complies with OCI. Both are just some of the tools that implement the current container standards. And the main problem, in my opinion, is that we currently view one particular technology as a standard. It's similar if instead of HTTP requests we would be talking about "Focus-Pocus requests", simply because "Focus-Pocus" would be the very popular tool that implements the HTTP standard.
yes 👍
The reason most experienced devs will ONLY go with things that are widely used, is because we've lived long enough to get burned by using some less popular tool or framework where if you try to google some specific problem or issue you find "zero search results".
If you go with the standards all the bugs will be worked out, you'll interoperate with the rest of the world better, you'll find much more resources, and others will want to join you in whatever you're doing. If you go with the oddball framework, you'll have more trouble, less quality, less support, less interoperability, higher maintenance costs, as a 'general rule'.
Just like most products (from guns to cars) if you buy the oddball product, you're just asking for difficulties that otherwise are easy to avoid.
Like any rule, there are exceptions. What I just stated is all "on average", "rule of thumb" type of advice. Take it or leave it. :)
thank you for your opinion ♥️🙏
It's an important point about lack of support, community, and Google/Stack Exchange results. A long time ago, I worked in a department where we used Borland (later Embarcadero) C++ Builder and Ingres database, and an ancient version of RedHat Linux on the backend. There were many problems that we had to overcome that existed because of that particular combination of technologies, for which we could find no outside help because I don't think anyone else used that combination of technologies. Whilst trying to figure out how to deal with one of these problems one day, I wondered out loud: "I wonder how many people in the world use C++ Builder, Ingres and RedHat together?". I began to count the number of people in the office: "1, 2, 3, 4..." and everyone laughed.
I've been coding 30 years (in my 50s) and used Borland C++ a lot too. Your example sounds familiar to a lot of what I've seen many times. I've seen junior developers download random libraries from the old sourceforge and put it directly into a commercial product with no permission asked for, no discussions had, etc. And it was a name-brand company you'd know.
I think the fact that Docker requires root is a big enough concern to switch. It does have experimental support for rootless, but it's very limited and has performance issues (looking at you, vfs). Podman provides a solution to precisely what has been my biggest gripe about Docker.
another gripe I have with docker is the loss of performance. when I benchmarked a docker container on my local dev machine the RPS for a REST app dropped ~30% compared to running it bare metal. maybe I did it wrong. but I'd like to stay away from containers for as long as I humanly can. I feel all the extra work I need to do managing docker is not worth my while (yet).
There are several factors that could be at play. If you aren't running docker natively on Linux that will have some overhead (virtualization, proxying, sync files in bind mount to vm, etc). If you forgot to add a volume somewhere and there's some IO-heavy operations being done inside the container there will be some overhead. If you're on a RHEL-based distro and didn't configure devicemapper to use a proper thin pool there will be some overhead. Et cetera.
If you get things right though there shouldn't be any measurable performance difference with running containers. Also worth noting that orchestrators such as Kubernetes have their own overhead.
yeah I've been reading about those. maybe I'll have another crack at it with podman sometime soon. thanks for the input 🙏🙏🙏
I'd also note that rootless is going to have a greater overhead since there's a couple of extra things which need to run in userspace (e.g., slirp4netns). It isn't something specific to podman though.
yeah nothing is perfect in our world and I guess I'll have to evaluate the cost/benefits of using or not using containers. hopefully it's many months or years in the future for me 😜
runc is an OCI industry standard; Podman and libpod are implementations of the OCI standard. It is prudent to know about these alternatives, as the industry is moving toward such open-source options.
One of the most complete and helpful articles on alternatives to Docker. I have been using Docker for some time now, I do not like it, and I'm happy to see so many alternatives.
You didn't really provide us with any pros/cons for the other tools compared to Docker.
Your article mentions that for each feature Docker has there is an alternative (engine/image/runtime), but you didn't really provide us with any information on why we should use any of the other tools (except for the root privileges that Docker requires).
This article feels a bit like "fanboy wars": "iPhone is better than Android", "Xbox is better than PlayStation"...
My biggest issue with podman has been the lack of decent replacement for compose. Compose is able to automatically build images and re-create containers if the configuration was changed for example. The podman-compose project has quite a few issues last time I tried it.
Also checkout the Dockerless series here (including deep dive into runc): mkdev.me/en/posts/dockerless-part-... and some of the ways to live without Docker in production: youtube.com/watch?v=aViKsSEGwOc&li...
Very nice article 👏👏👏. I want to share two articles from June. Hope this helps someone.
Why hackers 'first love' a docker container? Hacking Docker - manish srivastava ・ Jun 4 ・ 21 min read
TIME TO SAY BYE BYE DOCKER !!! Era of Docker is over... - manish srivastava ・ May 28 ・ 6 min read
This is a very nice article. I'm convinced there is going to be more than one way to build the exact same dockerfile.
I wish I knew how to pick exactly one image build tool. IMHO the least chubby image possibly wins. Just kidding. What I'm trying to say is that having more than one way is going to confuse both junior and senior engineers. Why? Selecting a build tool is going to come down to opinionated architectures unless we all start to see the selection the same way.
I'm starting to use rootless Podman and managed to deploy a simple Kafka Pub/Sub example if you want to take a look.
The only issue I'm having with rootless Podman is the inter-pod communication. Currently I have to create a bridged network to make pods able to communicate with each other.
How about Cloud Native Buildpacks?
buildpacks.io/
I used buildpacks in my project and while it takes away some control from the developers, it does liberate them from understanding how to write good Dockerfiles.