If you have been working with Docker on your local computer for a while, it will be clear to you that it is very easy to let resource consumption get out of control. How? By:
- Forgetting to delete unused images.
- Building a lot of images, leaving previous versions behind as untagged.
- Creating networks but not deleting them.
- Forgetting to delete old volumes.
- Keeping containers alive that are no longer needed.
- Forgetting to delete stopped containers.
And the worst part can be trying to delete all those resources one by one using
docker rm <container-id>
docker rmi <image-id>
...
Especially if the number of forgotten resources is high.
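A good example of this is untagged images: every time you rebuild an image with the same tag, the previous version is left behind as a "dangling" image. As a small sketch, you can list them first and then remove them (review the output of the first command before running the second, which deletes them):
docker images -f "dangling=true"
docker rmi $(docker images -q -f "dangling=true")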
Another example can be found with running containers: if you do not limit their consumption, a single container might take all the memory assigned to Docker and leave other containers without enough resources to start.
Example:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
ec4e50fd0c7c app 0.00% 18.54MiB / 1GiB 1.81% 1.18kB / 0B 0B / 0B 7
619ebfc20ea5 docker_app_2 0.00% 15.06MiB / 3.841GiB 0.38% 1.08kB / 0B 0B / 0B 7
f71e74b6b434 docker_app_1 0.00% 23.46MiB / 3.841GiB 0.60% 1.49kB / 0B 40.8MB / 0B 7
7b73271e3b88 docker_db_1 0.41% 181.5MiB / 3.841GiB 4.61% 1.6kB / 0B 76.7MB / 4.19MB 33
This is the output of the docker stats
command. Here we see that 3 of the containers can use almost 4 GB of RAM each, even though their actual consumption is low.
Suppose one of these containers has a sustained peak in memory consumption: it might affect new containers. It is easier to limit that container than to let it affect the others.
Here I will present some commands that can help you with these scenarios.
Delete stopped containers
NOTE: these commands are for macOS using zsh.
First, you can remove all stopped containers with one command by running:
docker rm -f $(docker ps -aq -f "status=exited")
Let's look at the following scenario:
☁ docker [master] ⚡ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
78ecc186fcd0 nginx "/docker-entrypoint.…" 5 seconds ago Up 4 seconds 80/tcp trusting_moore
☁ docker [master] ⚡ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
78ecc186fcd0 nginx "/docker-entrypoint.…" 8 seconds ago Up 7 seconds 80/tcp trusting_moore
8aed52668f12 nginx "/docker-entrypoint.…" 44 seconds ago Exited (0) 21 seconds ago confident_yonath
4179cfaae9a3 nginx "/docker-entrypoint.…" 45 seconds ago Exited (0) 21 seconds ago objective_leavitt
079353bc453c nginx "/docker-entrypoint.…" 46 seconds ago Exited (0) 21 seconds ago admiring_payne
f671b0464a65 nginx "/docker-entrypoint.…" 48 seconds ago Exited (0) 21 seconds ago intelligent_grothendieck
☁ docker [master] ⚡
Here there is one container running, but there are 4 stopped containers.
You can run the command from above and observe what happens:
☁ docker [master] ⚡ docker rm -f $(docker ps -aq -f "status=exited")
8aed52668f12
4179cfaae9a3
079353bc453c
f671b0464a65
☁ docker [master] ⚡ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
78ecc186fcd0 nginx "/docker-entrypoint.…" 2 minutes ago Up About a minute 80/tcp trusting_moore
☁ docker [master] ⚡ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
78ecc186fcd0 nginx "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 80/tcp trusting_moore
☁ docker [master] ⚡
The command docker ps -aq -f "status=exited"
inside the $( ) returns the list of container IDs of the stopped containers.
- The -a flag shows all containers, in any state.
- The -q flag returns only the container IDs.
- The -f flag is for filter expressions.
Notice that docker rm -f
also has a -f flag, but in this case it stands for force.
The list of IDs is then passed to docker rm -f,
which deletes the 4 stopped containers.
As a reminder, the $( ) command substitution syntax works in both zsh and bash, so the same command can be used in either shell.
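If you want to review what is going to be deleted before running the destructive command, you can execute the inner part on its own first:
docker ps -a -f "status=exited"
docker ps -aq -f "status=exited"
docker rm -f $(docker ps -aq -f "status=exited")
The first command shows the full details of the stopped containers, the second only their IDs, and the third removes them.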
Delete unused resources as a whole
NOTE: these commands are for macOS using zsh.
The previous approach can also be used for volumes and networks, with small variations.
docker volume rm -f $(docker volume ls -q -f "dangling=true")
Dangling volumes are volumes that exist but are no longer attached to any container, and the -q flag makes docker volume ls return only their names. In the run below the -q flag was omitted, so the column headers (DRIVER, VOLUME NAME) and the driver column were passed to docker volume rm as well; the -f (force) flag makes it ignore the words that do not correspond to an existing volume, and the second listing confirms that no dangling volumes remain.
☁ docker [master] ⚡ docker volume rm -f $(docker volume ls -f "dangling=true")
DRIVER
VOLUME
NAME
local
9c08f79b2c407d605fd5a59bed78bb936055e064562f74e0f991ee86acf146e9
local
28f5e546b4216720ebdbce3addec7684fe2595a9dd46fc06e980c337bf94fad0
local
48c50ca9e4fc58b4f5c0e011bf56ae72f6cb8ca6d394d12a331c354925159d20
☁ docker [master] ⚡ docker volume ls -f "dangling=true"
DRIVER VOLUME NAME
☁ docker [master] ⚡
And for networks:
docker network rm <network-id>
This option deletes a specific network.
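For example, you can list the networks, check whether a network still has containers attached, and then remove it (the network name here is just a placeholder):
docker network ls
docker network inspect my_app_network --format '{{ range .Containers }}{{ .Name }} {{ end }}'
docker network rm my_app_network
If the inspect command prints no container names, the network is safe to delete.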
There are also other options to delete unused resources:
docker container prune
docker volume prune
docker network prune
docker image prune
These remove all stopped containers, all unused volumes, all unused networks, and all dangling images, BUT even this requires executing the commands one by one.
There is an easier way with a single command:
☁ docker [master] ⚡ docker system prune
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all dangling build cache
Are you sure you want to continue? [y/N] y
This removes all unused containers, all unused networks, dangling images (or all unused images if you add the -a flag), and, optionally, volumes.
By default, volumes are not removed, to prevent important data from being deleted if no container is currently using them. Use the --volumes flag to remove volumes as well.
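For example (the -a flag also removes images that are not referenced by any container, not only the dangling ones):
docker system prune --volumes
docker system prune -a --volumes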
Limit container resource consumption
Previously we stated that a container can get out of control with resources; there are certain flags you can use to avoid this.
NOTE: these commands are for macOS using zsh.
docker run -d --name app --memory 1g nginx
This command will run an nginx container with a maximum memory limit of 1 GB.
Note that the minimum amount of memory you can assign is 4 MB.
Then, by running docker stats
you can see this limit:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
ec4e50fd0c7c app 0.00% 18.54MiB / 1GiB 1.81% 1.11kB / 0B 0B / 0B 7
Other runtime constraints can be found in the official documentation:
Runtime constraints on resources
You can limit aspects such as the number of CPUs, the swap limit, and the I/O weight.
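As a sketch combining some of those constraints (the container name and the values are arbitrary examples):
docker run -d --name app_limited --memory 1g --memory-swap 2g --cpus 1.5 nginx
This starts an nginx container limited to 1 GB of RAM, 2 GB of memory plus swap in total, and 1.5 CPUs.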
Final recommendations
If you are going to work with Docker on your local computer, try to use docker-compose. Some of the benefits I have found with Compose are:
- Deployment of several containers at once:
docker-compose up -d
- Scaling the number of containers with one command:
docker-compose up -d --scale app=2
Note: if you define ports inside the docker-compose.yml file, remember to assign a range of host ports instead of a 1:1 mapping; otherwise you will receive an error, because two containers cannot bind to the same host port (see the sketch after this list).
- Easier to work in teams: team members do not need to change the shared Compose file. Each member can create a docker-compose.override.yml file on their local machine and add only the changes they want to test.
Note: you can add this .override.yml file to the .gitignore
file to avoid pushing unwanted changes to your repo.
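As a minimal sketch of the port-range idea mentioned above (the service name, image, and range are just examples), a docker-compose.yml could look like this:
version: "3.8"
services:
  app:
    image: nginx
    ports:
      - "8080-8081:80"
With this range, docker-compose up -d --scale app=2 can give each of the two containers its own host port instead of failing with a port conflict.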
More about override files can be found in the official Docker Compose documentation.