Maxim Orlov

Process Signals Inside Docker Containers

This article was originally published a day earlier at https://maximorlov.com/process-signals-inside-docker-containers/

Recently, I had a strange issue with my Node.js application running inside a Docker container — it wasn’t shutting down gracefully.

But when I ran the app outside a container, the issue was gone.

Why did it behave unexpectedly inside a container?

I added logs to the SIGTERM handler and tried again. No logs. Tried other signals and… nothing.

For some reason, process signals were not going all the way through to the application.

I also noticed the container took a little while before it stopped. Docker must’ve instructed the application to shut down. After a grace period, when it saw it didn’t, Docker forcefully killed my app.

I set out to solve this mystery and find out exactly what was happening behind the scenes. I wanted to get rid of the unexpected behaviour and have my application shut down gracefully in production environments.

So I started doing some research.

One article led to another, and before I knew it, I was reading about the Linux kernel, zombies and orphans.

I’m not kidding.

If you want to know what the three have in common, keep reading.

By the end of this article, you will learn:

  • The difference between the exec and shell forms of the CMD instruction
  • Why executing a containerised application with npm start is not a good idea
  • How the Linux kernel treats the process with PID 1 in a unique way
  • The role of process managers
  • How to execute your Node.js application inside Docker containers

Knowledge assumption
To be able to follow along, you should have some basic knowledge of Docker. You should know how to build a Docker image and how to run a container.

The issue explained

Without containers, stopping an application is straightforward. You grab the process ID and run kill <pid>. That will send a SIGTERM signal to your app and allow it to shut down gracefully.
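As a sketch, this is what a graceful shutdown handler can look like in Node.js. The cleanup body is illustrative; in a real app you would close HTTP servers and database connections there. Instead of an external `kill <pid>`, the snippet signals its own process to show the handler firing:

```javascript
// Illustrative graceful shutdown handler; the cleanup logic is a placeholder.
let cleanedUp = false;

process.on('SIGTERM', () => {
  // Close HTTP servers, drain connections, flush logs here.
  cleanedUp = true;
  console.log('Received SIGTERM, shutting down gracefully');
});

// Outside a container, `kill <pid>` would trigger the handler above.
// We can simulate that by signalling our own process:
process.kill(process.pid, 'SIGTERM');
```

Note that signal handlers run asynchronously on the event loop, so the cleanup happens shortly after `process.kill` returns, not inside it.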

When your application runs in a container, you can’t directly send signals from outside because it’s in an isolated namespace. You have to use the Docker CLI to shut down your application and stop the container.

When you stop a container, Docker sends a SIGTERM signal to the process with PID 1. After a timeout period, if your application doesn’t shut down gracefully, Docker will forcefully terminate it with a SIGKILL signal. This signal goes directly to the Linux kernel, and your app cannot detect or handle it. SIGKILL is a last resort measure to close an application, and we all know that’s a pretty harsh thing to do.

If your application is not PID 1, or if the process running as PID 1 doesn’t forward signals to your app, it won’t know when to shut down gracefully. It’s easy to end up in this situation when you’re working with containers if you don’t know what’s going on.

The exec and shell forms of CMD

The CMD instruction has two forms: exec and shell.

Exec form (recommended)

```dockerfile
CMD ["node", "index.js"]
```

When you run an image that uses the exec form, Docker will run the command as is, without a wrapper process. Your Node.js application will be the first and only running process with PID 1.

Shell form

```dockerfile
CMD node index.js
```

With the shell form, Docker will invoke a command shell before starting your application. It will do so with /bin/sh -c prepended to the command. Therefore, the exec form equivalent of this is:

```dockerfile
CMD ["/bin/sh", "-c", "node index.js"]
```

Shell will take up PID 1, and your Node.js application will be its child process. There are now two processes running in the container.

Shell doesn’t relay process signals to its children. Therefore, your application will be unaware of any SIGTERM and SIGINT signals sent to the container from outside. You also don’t want shell to be the parent of your Node.js application when you have the Node.js runtime and can run your app standalone.
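If you’re unsure which form your image uses, a quick way to check is to log the PID your app was given. Assuming nothing else runs in the container, the exec form should report 1, while the shell form reports a higher number:

```javascript
// Log the PID this Node.js process received.
// As PID 1 (exec form), your app is the container's root process;
// under the shell form it will be a child of /bin/sh instead.
console.log(`node is running as PID ${process.pid}`);
```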

NPM is not a process manager

So now you’re a good citizen, and you’re using the exec form of CMD in your Dockerfile. You might have thought about doing the following:

```dockerfile
CMD ["npm", "start"]
```

Surely this can’t be wrong? Using npm start is a standardised way of starting a Node.js app. Projects specify their entry point in package.json, and whoever clones the repository doesn’t have to poke inside and figure out whether the main file is index.js, app.js, server.js, or main.js.
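For reference, that convention relies on a start script in package.json; the entry point name here is just an example:

```json
{
  "scripts": {
    "start": "node index.js"
  }
}
```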

In the containerisation world, this is no longer relevant. Docker images adhere to a standard that defines the same structure for all images, regardless of the application they host inside. It’s the responsibility of the image creator to specify how the application inside that image should start. This is what the CMD instruction is for, and how Docker knows how to handle images.

NPM is also not a process manager, and it won’t pass any signals to your application. Therefore, npm start doesn’t belong in a Dockerfile.

Moreover, the above CMD instruction results in a container with, not 2, but 3 (!) running processes: npm, the shell npm spawns to run the start script, and your Node.js application.

PID 1 has a special status

Your application is PID 1 and is shutting down gracefully. Great, we’re done! Or are we? Everybody knows that with great power (in this case, PID 1) comes great responsibility. Let me explain.

Traditionally, in a non-containerised environment, the Linux kernel starts an init process during boot and assigns it Process ID 1. Init is a process manager that’s responsible for, among other things, the removal of zombie orphaned processes. (Yes, that’s a technical term. Who comes up with these names?!)

A zombie process is a process that has stopped and is waiting to be removed from the kernel process table by its parent. A process is labelled as an orphan after its parent terminates. Therefore, a zombie orphaned process is a stopped process that has lost its initial parent.

When the Linux kernel sees an orphaned process, it assigns PID 1 as the parent. This process is now responsible for cleaning up the adopted child process after it exits. That’s the responsibility of a process manager and not something you want to do in your application.

The Linux kernel also protects the PID 1 process from signals that would otherwise kill other processes. Unless you explicitly handle SIGTERM in your code, your application won’t quit when it’s running as PID 1.

Why don’t we want to have zombie processes lying around? Because they take up a slot in the kernel process table. When the table is full, the kernel won’t be able to spawn new processes.

Having said that, the kernel process table is pretty big. To find out how many slots it has, you can run this command inside your Linux container: cat /proc/sys/kernel/pid_max. A full process table isn’t likely to happen in a containerised environment with a single application.

If your application is running as PID 1 and is configured to handle SIGTERM, then it’s probably fine. As long as you understand the intricacies around PID 1.

A Tini process manager

If we don’t want to run our application as PID 1, and Docker sends all signals to PID 1, how do we make sure our application knows when to shut down gracefully?

That’s where Tini comes into the picture. Tini is a slim process manager designed to run as PID 1 inside containers. It will forward signals to your application and will clean up zombie processes. It does that transparently, so you don’t have to make any changes to your application.

In recent versions, Docker added Tini to its CLI, and you can enable it with the --init flag when you start a container:

```bash
docker run --init my-image
```

Alternatively, you can add Tini to your image and define it as the ENTRYPOINT in your Dockerfile. Refer to the using Tini section in the repository README on how to accomplish that.
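A sketch of that approach, based on the pattern shown in the Tini README (check the README for the current release version):

```dockerfile
ENV TINI_VERSION v0.19.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]
CMD ["node", "index.js"]
```

Tini becomes PID 1, runs your app as a child process, and forwards signals to it.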

I prefer the former option and use the built-in Tini provided by Docker. It keeps my Dockerfile free of clutter, and my builds are faster since Docker doesn’t have to fetch Tini from GitHub. The downside is that whoever runs the image has to remember to add the --init flag on each run. Both approaches have their pros and cons, so choose what works for you.

How to execute Node.js apps inside Docker containers

To conclude — unless you run your containerised application as PID 1 or through a process manager — your app won’t be able to shut down gracefully.

Avoid using the shell form of the CMD instruction and always use the exec form. Your application will be the primary process instead of running as a child process under a shell.

Don’t use npm start in your Dockerfile. NPM is not a process manager and won’t relay signals to your application. The benefit it brings is less relevant in the context of Docker.

Know that when your Node.js application is running as PID 1, it’s treated differently by the Linux kernel. If it doesn’t explicitly handle termination signals, it won’t shut down like it usually would.

Use a process manager, like Tini, as PID 1 to clean up zombie processes if you’re concerned about that. It’s specifically designed to run inside containers, with minimal overhead and no changes to your application.

Write clean code. Stay ahead of the curve.

Every other Tuesday, I share tips on how to build robust Node.js applications. Join a community of developers committed to advancing their careers and gain the knowledge & skills you need to succeed.
