When you want your logs shipped to Elasticsearch and browsable in Kibana, you should add an EFK (Elasticsearch, Fluentd, Kibana) stack to your k8s cluster.
My first attempt to install EFK used https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
However, fluentd was failing to send logs to Elasticsearch.
Fortunately, there is a blog post on the DigitalOcean website describing how to do it, and it happens to work on a dind cluster: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-elasticsearch-fluentd-and-kibana-efk-logging-stack-on-kubernetes
All the config posted there is valid except for a few caveats related to how dind stores logs. To give the fluentd pods access to the logs, add in volumeMounts in fluentd.yaml:

```yaml
- name: dind-docker-containers
  mountPath: /dind/docker/containers
```
And in volumes:

```yaml
- name: dind-docker-containers
  hostPath:
    path: /dind/docker/containers
```
Also, in elasticsearch-statefulset.yaml, decrease the required storage space and set storageClassName to one that exists in the cluster, in volumeClaimTemplates:

```yaml
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
```
Here the local-path provisioner has been used and storage is set to a low value like 1Gi.
The local-path provisioner (https://github.com/rancher/local-path-provisioner) has been added to the cluster with:
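To find a storageClassName valid for your cluster, you can list the storage classes it already has (assuming kubectl is pointed at that cluster):

```shell
# List available storage classes; use one of these names as storageClassName.
kubectl get storageclass
```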
```shell
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
```
Now EFK can be onboarded onto a cluster as follows:
```shell
kubectl apply -f kube-logging.yaml
kubectl apply -f elasticsearch-svc.yaml
kubectl apply -f elasticsearch-statefulset.yaml
kubectl apply -f kibana.yaml
kubectl apply -f fluentd.yaml
```
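Elasticsearch takes a while to come up. Before moving on, you can wait for the rollout to complete (assuming the statefulset is named es-cluster, as in the DigitalOcean tutorial):

```shell
# Block until all Elasticsearch replicas are rolled out and ready.
kubectl rollout status statefulset/es-cluster --namespace=kube-logging
```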
Check whether you can access Elasticsearch and Kibana:
```shell
kubectl port-forward es-cluster-0 9200:9200 --namespace=kube-logging
```
```shell
kubectl port-forward kibana-598dc944d9-plsbv 5601:5601 --namespace=kube-logging
```
After port-forward, provide the pod name relevant to your cluster. Then check access from the host running kubectl port-forward:
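To find the pod names to use in the port-forward commands above (the kibana pod's hash suffix will differ in your cluster):

```shell
# List all EFK pods; copy the exact pod names into kubectl port-forward.
kubectl get pods --namespace=kube-logging
```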
```
http://localhost:9200/_cluster/state?pretty
```
```
http://localhost:5601/
```
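The Elasticsearch check can also be done from a terminal on the same host, for example with curl against the health endpoint (status should be green or yellow once the cluster is up):

```shell
# Query Elasticsearch cluster health through the active port-forward.
curl -s 'http://localhost:9200/_cluster/health?pretty'
```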
Now your system logs (those not related to user pods, e.g. kube-apiserver) should be fed by fluentd to Elasticsearch, and Kibana can read them from the es backend. You can have workload pod logs flowing into Elasticsearch as well, or build a dedicated instance of EFK just for that purpose, to segregate system vs workload logging.
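One way to sketch that segregation, assuming a fluentd v1.x daemonset where Kubernetes metadata has been attached to each record: a grep filter can drop (or, inverted, keep only) records from system namespaces before they reach Elasticsearch. The namespace list here is illustrative; adjust it to your cluster.

```
# Hypothetical fluentd filter: exclude kube-system records so this
# Elasticsearch instance receives only workload logs.
<filter kubernetes.**>
  @type grep
  <exclude>
    key $.kubernetes.namespace_name
    pattern /^kube-system$/
  </exclude>
</filter>
```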