Log analytics is a challenge that interests many people. We can see this directly in the growing number of startups and initiatives in the space, and big companies build their own solutions too.
Since Kubernetes has no native log storage solution, we will look at three approaches:
Basic logging in Kubernetes
Have the application write its logs to the standard output stream. The container and the command it runs are configured from the pod's YAML file.
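A minimal sketch of this approach, using a hypothetical pod named counter whose only job is to echo a line to stdout every second:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter            # hypothetical name, for illustration only
spec:
  containers:
  - name: count
    image: busybox
    # Write one line per second to stdout; the runtime captures the stream.
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
```

The output can then be read with `kubectl logs counter`.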
Logging at the node level
Have stdout and stderr redirected to a log file on the node by the container engine.
This has pros and cons. From the Kubernetes documentation:
"if a container restarts, the kubelet keeps one terminated container with its logs. If a pod is evicted from the node, all corresponding containers are also evicted, along with their logs."
The responsibility for these log files is on us, so we need to define log rotation. The log rotation mechanism is in charge of rotating, compressing, and shipping the files; this is usually a daemon scheduled to run every X minutes.
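As a minimal sketch of the rotate part: when a CRI container runtime is used, the kubelet itself can be configured to rotate container log files (the 10Mi / 5-file values below are just illustrative, and compressing or shipping the rotated files is typically left to a separate agent or scheduled daemon):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Rotate a container's log file once it reaches 10 MiB ...
containerLogMaxSize: "10Mi"
# ... and keep at most 5 rotated files per container.
containerLogMaxFiles: 5
```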
Cluster-level logging architectures
Because pods, containers, and nodes are ephemeral in distributed systems, we should consider cluster-level logging.
Cluster-level logging requires a separate backend to store, analyze, and query the logs.
Some options are (from kubernetes.io):
- Use a node-level logging agent that runs on every node.
- Include a dedicated sidecar container for logging in an application pod (see the sketch after this list).
- Push logs directly to a backend from within an application.
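To make the sidecar option concrete, here is a rough sketch (the pod, container, and file names are hypothetical): the application writes to a file on a shared emptyDir volume, and a sidecar streams that file to its own stdout so a node-level agent or `kubectl logs` can pick it up.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar   # hypothetical name
spec:
  volumes:
  - name: logs
    emptyDir: {}                   # scratch volume shared by both containers
  containers:
  - name: app
    image: busybox
    # Stand-in for a real application that writes to a file instead of stdout.
    args: [/bin/sh, -c, 'while true; do echo "$(date) app event" >> /var/log/app.log; sleep 1; done']
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: log-sidecar
    image: busybox
    # The sidecar tails the shared file and re-emits it on its own stdout.
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
```

Running `kubectl logs app-with-logging-sidecar -c log-sidecar` would then show the application's log stream.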
And that was Kubernetes logs in under 2 minutes!