Concept
Applications running in Kubernetes usually log to the standard output (stdout) and standard error (stderr) streams instead of writing their logs to files. This lets users view the logs of different applications in a simple, standard way.
The kubelet, the Kubernetes node agent, takes care of this: behind the scenes it collects these streams and writes them to a local file so that you can easily access the logs through Kubernetes. The kubelet writes a pod's logs on the same node on which the pod is running.
Note that you can only retrieve the container logs of pods that still exist. When a pod is deleted, its logs are deleted with it. To keep a pod's logs available even after the pod is deleted, you need to set up centralised, cluster-wide logging, which ships all the logs to a central store. Tools like Fluentd help with this.
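As a rough sketch of how such a tool works (not a production setup), a node-level agent like Fluentd tails the files the kubelet writes and forwards them to a central store. A minimal, illustrative source section might look like this, assuming the default log path and JSON-formatted log lines:

```
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/run/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type json
  </parse>
</source>
```

A matching output section would then send the tailed records to whatever central store you choose, such as Elasticsearch or an object store.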
Let’s try to understand this using a real-life example.
Assume a big singing contest is going on. On stage you’ll see the singers who perform, along with equipment such as a camera to record the show. Everything the camera captures is stored on a backend storage device attached somewhere near the stage.
To map this to the concept, consider the singer as an application, the voice and music as a continuous stream of logs, the camera as the kubelet, and the stage as a node dedicated solely to this performance. When the singer begins to sing, the camera captures the live footage and sends it to the storage device attached to the contest stage.
There is one limitation in this workflow: the moment a singer completes his or her performance and leaves the stage, all the data captured by the camera is deleted.
To tackle this limitation, we have to copy the data in real time and keep it on another storage device or in another location so that we can use it in the future. This is where real-time centralized logging enters the scene: its role is to copy the logs from the main storage to a separate central store.
Question-Answer
Where does Kubernetes store the logs?
Kubernetes stores the logs on the node where the pod runs.
On which path do the logs reside?
Logs reside under the /var/log/containers path on the node.
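Each file under that path follows a naming convention that encodes the pod, the namespace, and the container. The names below are made up purely for illustration; this sketch shows how the file name is assembled:

```shell
# Naming convention: <pod-name>_<namespace>_<container-name>-<container-id>.log
pod="nginx-7c5ddbdf54-abcde"   # hypothetical pod name
namespace="default"
container="nginx"
container_id="0123abcd"        # container ID, truncated here for readability

echo "/var/log/containers/${pod}_${namespace}_${container}-${container_id}.log"
# → /var/log/containers/nginx-7c5ddbdf54-abcde_default_nginx-0123abcd.log
```

On a real node the container ID is the full runtime ID, and these entries are symlinks into the runtime's own log directory.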
Who redirects the logs from stdout/stderr to the file?
The kubelet handles this redirection.
Who deletes the logs after pod termination?
The kubelet deletes the logs once the pod is terminated.
Which command is used to fetch the logs?
kubectl logs <podname>
What actually happens in the background when a user executes the “kubectl logs <podname>” command?
kubectl sends the request to the Kubernetes API server, which in turn asks the kubelet on the pod’s node to read the log file and stream it back.
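Under the hood, the request goes through the API server’s log subresource for the pod, and the API server proxies it to the kubelet on the pod’s node. The REST path kubectl requests can be sketched like this (“default” and “mypod” are placeholder values):

```shell
namespace="default"   # placeholder namespace
pod="mypod"           # placeholder pod name

# The REST path kubectl asks the API server for:
echo "/api/v1/namespaces/${namespace}/pods/${pod}/log"
# → /api/v1/namespaces/default/pods/mypod/log
```

You can watch this happen on a live cluster by raising kubectl’s verbosity, e.g. kubectl logs mypod -v=8, which prints the HTTP requests it makes.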
Conclusion
We’ve discussed the fundamentals of log management in Kubernetes: where logs live, who writes and deletes them, and how to fetch them. You should now have a working understanding of how logging works in Kubernetes.
The MtroView team is doing its best and working hard to provide content that helps you understand things in simple terms. We are heartily open to suggestions for improving our content so it can help more people. Please write to us in a comment, or send an email to mtroview@gmail.com.