30 Aug 2016
Developers, system administrators, and other stakeholders need to access system logs on a daily (sometimes even hourly) basis.
Logs from a couple of servers are easy to generate and handle, but imagine a Kubernetes cluster with many pods running: collecting and storing their logs becomes a huge task in itself.
How does the system administrator collect, manage, and query the logs of the system pods? How does a user query the logs of their application, which is composed of many pods, all of which may be restarted or automatically created by Kubernetes?
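For a single pod, logs can be pulled directly with `kubectl`. As a rough sketch (the pod name `my-app-pod` is hypothetical), the basic commands look like this:

```shell
# Fetch the logs of a single pod
kubectl logs my-app-pod

# Stream the logs as they are written
kubectl logs -f my-app-pod

# Fetch logs from the previous container instance, e.g. after a crash/restart
kubectl logs --previous my-app-pod
```

This works fine for one pod, but it doesn't scale to an application spread across many pods, and the logs are lost once a pod is deleted, which is exactly the gap that cluster-level logging fills.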
Thankfully, Kubernetes supports cluster-level logging.
Cluster-level logging collects the logs of all the pods in the cluster at a centralized location, with options to search and view them.
There are two ways to use cluster-level logging: via Google Cloud Logging, or via Elasticsearch and Kibana.
It is important to note here that the default logging components may vary based on the Kubernetes distribution you are using. For example, if you are running Kubernetes on a Google Container Engine (GKE) cluster, you'll have Google Cloud Logging. But if you're running your Kubernetes cluster on AWS EC2, you'll have Elasticsearch and Kibana available by default.
In this post we'll focus on Kubernetes running on AWS EC2. We'll learn how to set up a Kubernetes cluster on AWS EC2, ingest logs into Elasticsearch, and view them using Kibana.
But first, let's make some introductions.