So you figured out that the pids of your Kubernetes (or Docker) containers show up on your host:
$ ps auxw | fgrep python
root 11282 0.0 0.0 94628 24372 ? Ss 04:53 0:00 python app.py
root 11439 0.0 0.0 41968 13072 ? Ss 04:53 0:00 python event-simulator.py
root 12527 0.0 0.0 6796 2292 pts/0 S+ 04:57 0:00 grep -F --color=auto python
But what Kubernetes pod owns these processes?
As far as I know, there is no single native command that answers that question. You have to put the pieces of the puzzle together yourself.
The usual "hack" I see most people do is:
$ nsenter -t 11282 -u hostname
webapp-color
$ nsenter -t 11439 -u hostname
e-com-1123
But that does not give me the Kubernetes namespace.
Some googling didn't turn up anyone else as annoyed by this as I am, so I decided to write this blog post.
TL;DR edition:
$ for i in 11282 11439; do crictl inspect --output go-template --template '{{index .status.labels "io.kubernetes.pod.namespace" }}:{{index .status.labels "io.kubernetes.pod.name" }}' $( head -n1 /proc/$i/cgroup | cut -f5 -d':' ) ; done
default:webapp-color
e-commerce:e-com-1123
If you want to learn how and why it works, keep reading.
You probably read somewhere that what makes containers real are two Linux resources: cgroups and namespaces.
Let's go back to the "hack":
$ nsenter -t 11439 -u hostname
e-com-1123
The Linux command nsenter allows you to run a command inside the namespace of another process:
$ nsenter -h
Run a program with namespaces of other processes.
...
-u, --uts[=<file>] enter UTS namespace (hostname etc)
...
So, the hack uses nsenter -u, which runs a command inside the target process's Linux UTS namespace: the namespace that lets a container set its hostname and domain name without affecting the rest of the system.
The UTS namespace is hardly the most recognizable one; when people think of namespaces, they usually remember the PID namespace, which provides an isolated process ID space for the container, or the network namespace, which provides an isolated network stack. But there are a bunch of others, as you can see in the links in this paragraph. Or, in case you're lazy, here is a longer excerpt of the nsenter -h output:
$ nsenter -h
Usage:
nsenter [options] [<program> [<argument>...]]
Run a program with namespaces of other processes.
...
-a, --all enter all namespaces
-t, --target <pid> target process to get namespaces from
-m, --mount[=<file>] enter mount namespace
-u, --uts[=<file>] enter UTS namespace (hostname etc)
-i, --ipc[=<file>] enter System V IPC namespace
-n, --net[=<file>] enter network namespace
-p, --pid[=<file>] enter pid namespace
-C, --cgroup[=<file>] enter cgroup namespace
-U, --user[=<file>] enter user namespace
-T, --time[=<file>] enter time namespace
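By the way, you don't even need nsenter just to see which namespaces a process belongs to. Each one is exposed as a symlink under /proc/&lt;pid&gt;/ns, and two processes share a namespace exactly when their links resolve to the same inode. A minimal sketch:

```shell
# Every namespace a process belongs to is exposed as a symlink under
# /proc/<pid>/ns. Two processes share a namespace exactly when the links
# resolve to the same inode, e.g. uts:[4026531838].
ls -l "/proc/$$/ns"

# A child process inherits our namespaces, so its UTS link matches ours:
readlink "/proc/$$/ns/uts"
sleep 2 &
readlink "/proc/$!/ns/uts"   # same value as the line above
wait
```

A container's processes, by contrast, would show different inode numbers for the namespaces the runtime unshared.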
That was a fun one! But what about cgroups?
Well, cgroups allow us to limit a process's use of system resources like CPU, memory, and network bandwidth.
You can see the cgroups of a process with a simple ps invocation:
$ ps -wwo cgroup= 11439
12:hugetlb:/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a,11:cpu,cpuacct:/system.slice/containerd.service/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a,10:memory:/system.slice/containerd.service/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a,9:perf_event:/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a,8:net_cls,net_prio:/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a,7:cpuset:/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a,6:pids:/system.slice/containerd.service/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a,5:blkio:/system.slice/containerd.service/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a,4:devices:/system.slice/containerd.service/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a,3:rdma:/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a,2:freezer:/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a,1:name=systemd:/system.slice/containerd.service/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f
6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
Ouch, that was ugly. Let's prettify the output:
$ ps -wwo cgroup= 11439 | tr , '\n'
12:hugetlb:/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
11:cpu
cpuacct:/system.slice/containerd.service/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
10:memory:/system.slice/containerd.service/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
9:perf_event:/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
8:net_cls
net_prio:/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
7:cpuset:/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
6:pids:/system.slice/containerd.service/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
5:blkio:/system.slice/containerd.service/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
4:devices:/system.slice/containerd.service/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
3:rdma:/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
2:freezer:/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
1:name=systemd:/system.slice/containerd.service/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
Much better.
But truth be told, it is basically the same output as this simple cat command:
$ cat /proc/11439/cgroup
12:hugetlb:/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
11:cpu,cpuacct:/system.slice/containerd.service/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
10:memory:/system.slice/containerd.service/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
9:perf_event:/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
8:net_cls,net_prio:/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
7:cpuset:/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
6:pids:/system.slice/containerd.service/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
5:blkio:/system.slice/containerd.service/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
4:devices:/system.slice/containerd.service/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
3:rdma:/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
2:freezer:/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
1:name=systemd:/system.slice/containerd.service/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
0::/
What does each of them do? Well, you can read about that in the Linux kernel documentation itself; it's not the focus of today's post.
We need to figure out which Kubernetes pod (and its namespace!) owns that process!
If you pay attention, you'll notice an interesting pattern on the cgroups naming:
$ cat /proc/11439/cgroup | cut -f3- -d':' | sort | uniq -c
1 /
6 /kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
6 /system.slice/containerd.service/kubepods-besteffort-pod86811ae3_b633_4eb8_a508_e3eae190f6ce.slice:cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
Let's discard the part of the hierarchy we don't care much about:
$ cat /proc/11439/cgroup | cut -f4- -d':' | sort | uniq -c
1
12 cri-containerd:da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
That character sequence after the ':' seems awfully familiar:
$ crictl ps --id da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
da1bdd84c8b25 ee4be8f9dfd10 7 minutes ago Running simple-webapp 0 48ee11c897fa8 e-com-1123
Here it is! The pod name!
But what about its namespace?
I'm not sure there is an easy way to make crictl return this directly. I know the annoying one: formatting its output with a go-template!
Let's see what we have:
$ crictl inspect da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a | jq '. | keys'
[
"info",
"status"
]
Let's go deeper:
$ crictl inspect da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a | jq '.info | keys'
[
"config",
"pid",
"removing",
"runtimeOptions",
"runtimeSpec",
"runtimeType",
"sandboxID",
"snapshotKey",
"snapshotter"
]
$ crictl inspect da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a | jq '.status | keys'
[
"annotations",
"createdAt",
"exitCode",
"finishedAt",
"id",
"image",
"imageId",
"imageRef",
"labels",
"logPath",
"message",
"metadata",
"mounts",
"reason",
"resources",
"startedAt",
"state"
]
I have a hunch we want to see what is stored in .status.labels:
$ crictl inspect da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a | jq '.status.labels'
{
"io.kubernetes.container.name": "simple-webapp",
"io.kubernetes.pod.name": "e-com-1123",
"io.kubernetes.pod.namespace": "e-commerce",
"io.kubernetes.pod.uid": "86811ae3-b633-4eb8-a508-e3eae190f6ce"
}
Oh yeah! That is what we want! We have a pod called e-com-1123 running the process. It is in the Kubernetes namespace e-commerce!
Let's create a go-template from it. This is how you access the members of a map:
$ crictl inspect --output go-template --template '{{.status.labels}}' da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
map[io.kubernetes.container.name:simple-webapp io.kubernetes.pod.name:e-com-1123 io.kubernetes.pod.namespace:e-commerce io.kubernetes.pod.uid:86811ae3-b633-4eb8-a508-e3eae190f6ce]
As you can imagine, if a map key has a '.' in its name, you'll run into problems:
$ crictl inspect --output go-template --template '{{.status.labels.io.kubernetes.pod.name}}' da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
FATA[0000] getting the status of the container "da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a": failed to template data: template: tmplExecuteRawJSON:1:9: executing "tmplExecuteRawJSON" at <.status.labels.io.kubernetes.pod.name>: map has no entry for key "io"
No worries, just use index, one of the few built-in go-template functions:
$ crictl inspect --output go-template --template '{{index .status.labels "io.kubernetes.pod.name" }}' da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
e-com-1123
We are halfway there:
$ crictl inspect --output go-template --template '{{index .status.labels "io.kubernetes.pod.namespace" }}:{{index .status.labels "io.kubernetes.pod.name" }}' da1bdd84c8b25938081afe48da7075e2a211d2b1a62e01c894b4e5f3ffab670a
e-commerce:e-com-1123
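As an aside, since we already used jq to explore the output, the same pair can be extracted with a jq filter instead of a go-template. The echo below is a trimmed sample of crictl's JSON, used only so the filter can run without a live container runtime; on a real host you would pipe crictl inspect &lt;container-id&gt; into it instead:

```shell
# jq equivalent of the go-template: join the two labels with a ':'.
# The echoed JSON is a trimmed sample of `crictl inspect` output.
echo '{"status":{"labels":{"io.kubernetes.pod.name":"e-com-1123","io.kubernetes.pod.namespace":"e-commerce"}}}' |
  jq -r '.status.labels["io.kubernetes.pod.namespace"] + ":" + .status.labels["io.kubernetes.pod.name"]'
```

This prints e-commerce:e-com-1123, same as the go-template, with arguably friendlier syntax for dotted keys.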
And now, turning into a one-liner:
$ for i in $( pgrep python -d ' ' ); do crictl inspect --output go-template --template '{{index .status.labels "io.kubernetes.pod.namespace" }}:{{index .status.labels "io.kubernetes.pod.name" }}' $( head -n1 /proc/$i/cgroup | cut -f5 -d':' ) ; done
default:webapp-color
e-commerce:e-com-1123
Here is the answer I'd been looking for: which Kubernetes namespace and pod a process on my host belongs to.
I admit the one-liner didn't help much with legibility, so here is a script that hopefully makes the technique easier to follow:
$ cat which-pod-am-i.sh
#!/bin/bash
# This is NOT a production script!
# This was made just to make the techniques legible!
# Pieces of go templates.
# We need to retrieve two fields from .status.labels:
# .status.labels."io.kubernetes.pod.namespace"
NAMESPACE_TMPL='{{index .status.labels "io.kubernetes.pod.namespace" }}'
# .status.labels."io.kubernetes.pod.name"
POD_TMPL='{{index .status.labels "io.kubernetes.pod.name" }}'
# I'm passing the process name from command line:
# I'm passing the process name from the command line:
for i in $( pgrep "${1?Parameter not passed}" -d ' ' ); do
  # Get the container ID from the cgroups:
  CONTAINER_ID=$( head -n1 "/proc/$i/cgroup" | cut -f5 -d':' )
  # Inspect with go templates:
  crictl inspect --output go-template \
    --template "$NAMESPACE_TMPL:$POD_TMPL" \
    "${CONTAINER_ID:?Container not found}"
done
$ ./which-pod-am-i.sh python
default:webapp-color
e-commerce:e-com-1123
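One caveat: the cut -f5 -d':' trick assumes the cgroup v1 layout shown earlier. On a host using the unified cgroup v2 hierarchy, /proc/&lt;pid&gt;/cgroup collapses to a single 0::/... line, the field positions change, and the cut returns nothing. A more layout-tolerant sketch (my own helper, not part of any tool) is to grab the first 64-hex-character run, which is how container runtimes format container IDs:

```shell
# Extract the 64-hex-digit container ID from a cgroup file, regardless of
# whether the host uses the cgroup v1 layout or the unified v2 one.
# Usage: container_id_from_cgroup_file /proc/<pid>/cgroup
container_id_from_cgroup_file() {
  grep -oE '[0-9a-f]{64}' "$1" | head -n1
}
```

This works for both layouts because the pod UID segments use underscores and hyphens, so the only 64-character hex run in the file is the container ID itself.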
By the way, all these commands were run on the KodeKloud sandbox provided with their Udemy CKAD course. Shout out to them for providing an incredibly affordable, non-expiring hands-on resource for studying for the CNCF CKA/CKAD certifications.