In this section, we will explore how to set up and use the EFK stack (Elasticsearch, Fluentd, and Kibana) for logging in Kubernetes. The EFK stack is a popular choice for centralized logging, allowing you to collect, store, and visualize logs from your Kubernetes cluster.
Overview of EFK Stack
Elasticsearch
- Purpose: A distributed search and analytics engine used for storing and querying log data.
- Key Features:
  - Scalability: Can handle large volumes of data.
  - Full-text search: Allows complex queries on log data.
  - Real-time analytics: Provides near real-time insights.
Fluentd
- Purpose: An open-source data collector that helps unify data collection and consumption.
- Key Features:
  - Flexible: Supports a wide range of input and output plugins.
  - Reliable: Buffers and retries delivery, so logs are not lost during transient failures.
  - Lightweight: Minimal resource usage.
Kibana
- Purpose: A data visualization and exploration tool used for visualizing Elasticsearch data.
- Key Features:
  - Interactive: Allows creating dynamic dashboards.
  - User-friendly: Intuitive interface for exploring data.
  - Real-time: Provides near real-time data visualization.
Setting Up EFK Stack in Kubernetes
Step 1: Deploy Elasticsearch
- Create a namespace for logging:
kubectl create namespace logging
- Deploy Elasticsearch:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: logging
spec:
  serviceName: "elasticsearch"
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
          ports:
            - containerPort: 9200
              name: http
            - containerPort: 9300
              name: transport
          env:
            - name: discovery.type
              value: single-node
- Create a service for Elasticsearch:
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: logging
spec:
  ports:
    - port: 9200
      targetPort: 9200
  selector:
    app: elasticsearch
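Before moving on, it helps to confirm that Elasticsearch is actually up. A minimal check, assuming both manifests above have been applied and that you temporarily port-forward the service to your workstation:
# Wait until the single Elasticsearch pod reports Ready
kubectl rollout status statefulset/elasticsearch -n logging
# Forward the service locally and query the cluster health endpoint
kubectl port-forward service/elasticsearch 9200:9200 -n logging &
curl -s "http://localhost:9200/_cluster/health?pretty"   # expect "status" of green or yellow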
Step 2: Deploy Fluentd
- Create a ConfigMap for the Fluentd configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: logging
data:
  fluent.conf: |
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      <parse>
        @type json
        time_key time
        time_format %Y-%m-%dT%H:%M:%S.%N%z
      </parse>
    </source>
    <match kubernetes.**>
      @type elasticsearch
      host elasticsearch.logging.svc.cluster.local
      port 9200
      logstash_format true
      logstash_prefix kubernetes
      logstash_dateformat %Y.%m.%d
    </match>
- Deploy Fluentd as a DaemonSet. Note that the plain fluent/fluentd image does not include the Elasticsearch output plugin used in fluent.conf, so this manifest uses the fluentd-kubernetes-daemonset image, which bundles it:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          # Bundles the fluent-plugin-elasticsearch output required by fluent.conf
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: config-volume
              # Mount only fluent.conf so the rest of /fluentd/etc stays intact
              mountPath: /fluentd/etc/fluent.conf
              subPath: fluent.conf
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: config-volume
          configMap:
            name: fluentd-config
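Once the ConfigMap and DaemonSet are applied, check that a Fluentd pod is running on each node and that it is not logging connection errors; a quick sketch:
# Expect one Fluentd pod per schedulable node
kubectl get pods -n logging -l app=fluentd -o wide
# Scan recent Fluentd output for connection or buffer problems
kubectl logs -n logging daemonset/fluentd --tail=100 | grep -iE "error|warn"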
Step 3: Deploy Kibana
- Deploy Kibana:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.10.1
          ports:
            - containerPort: 5601
          env:
            - name: ELASTICSEARCH_HOSTS
              value: "http://elasticsearch.logging.svc.cluster.local:9200"
- Create a service for Kibana:
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
spec:
  ports:
    - port: 5601
      targetPort: 5601
  selector:
    app: kibana
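Kibana can take a minute or two to connect to Elasticsearch on first start. One way to confirm it is ready before opening the UI, assuming the manifests above were applied:
# Wait for the Kibana deployment to finish rolling out
kubectl rollout status deployment/kibana -n logging
# The logs should eventually report that Kibana's HTTP server is listening on port 5601
kubectl logs -n logging deployment/kibana --tail=20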
Accessing Kibana
- Port-forward the Kibana service:
kubectl port-forward service/kibana 5601:5601 -n logging
- Open Kibana in your browser: navigate to http://localhost:5601.
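Before logs appear in Kibana's Discover view, you need to create an index pattern (for example kubernetes-*, matching the logstash_prefix set in fluent.conf). It can help to confirm first that Fluentd has actually created indices in Elasticsearch; a minimal check, assuming the Elasticsearch port-forward from earlier is still running:
# List indices created by Fluentd; names follow the kubernetes-YYYY.MM.DD pattern
curl -s "http://localhost:9200/_cat/indices/kubernetes-*?v"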
Practical Exercise
Exercise: Set Up EFK Stack
- Objective: Deploy the EFK stack in your Kubernetes cluster and visualize logs.
- Steps:
  - Follow the steps outlined above to deploy Elasticsearch, Fluentd, and Kibana.
  - Generate some logs by deploying a sample application.
  - Access Kibana and create a dashboard to visualize the logs.
Solution
- Deploy a sample application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - name: sample-app
          image: busybox
          command: ['sh', '-c', 'while true; do echo "Hello, Kubernetes!"; sleep 5; done']
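Apply the manifest and confirm the pod is emitting log lines before looking for them in Kibana; a quick sketch, assuming the manifest above is saved as sample-app.yaml (a placeholder filename):
kubectl apply -f sample-app.yaml
# The container should print "Hello, Kubernetes!" every 5 seconds
kubectl logs -n default deployment/sample-app --tail=5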
- Check logs in Kibana:
  - Access Kibana using the port-forward command.
  - Navigate to the "Discover" tab and search for logs from the sample-app.
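If you prefer to verify the end-to-end flow outside the Kibana UI, the same search can be run directly against Elasticsearch; a sketch using the port-forwarded endpoint from earlier:
# Phrase search across the Fluentd-created indices for the sample app's log message
curl -s 'http://localhost:9200/kubernetes-*/_search?q=%22Hello,+Kubernetes%22&size=3&pretty'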
Common Mistakes and Tips
- Elasticsearch not starting: Ensure you have sufficient resources and the correct image version.
- Fluentd not collecting logs: Verify the Fluentd configuration and ensure the paths are correct.
- Kibana not connecting to Elasticsearch: Check the ELASTICSEARCH_HOSTS environment variable and network connectivity.
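A few commands that help narrow these problems down, shown as a sketch (pod names and exact output will differ in your cluster):
# Why did a pod fail to start? Check events and restart counts
kubectl describe pods -n logging -l app=elasticsearch
kubectl get pods -n logging
# Is the Fluentd config being read, and can it reach Elasticsearch?
kubectl logs -n logging daemonset/fluentd --tail=50
# Does the Elasticsearch service resolve from inside the cluster?
kubectl run dns-check --rm -it --restart=Never -n logging --image=busybox -- nslookup elasticsearch.logging.svc.cluster.local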
Conclusion
In this section, we covered the setup and usage of the EFK stack for logging in Kubernetes. You learned how to deploy Elasticsearch, Fluentd, and Kibana, and how to visualize logs from your Kubernetes cluster. This knowledge is crucial for monitoring and troubleshooting applications running in Kubernetes.