EFK is a popular log collection and processing stack made up of three main components: Elasticsearch, Fluentd, and Kibana. Deploying EFK in a Kubernetes cluster makes it easier to manage and analyze log data. This article walks through the steps for deploying EFK on Kubernetes, explains what each step does, and provides the corresponding code examples.
Overall workflow
----------
Deploying EFK breaks down into five steps:
1. Create the Namespace and ServiceAccount
2. Deploy Elasticsearch
3. Deploy Fluentd
4. Deploy Kibana
5. Configure application logging
Detailed steps
----------
Each step is described below, along with the corresponding code examples:
##### Step 1: Create the Namespace and ServiceAccount
Create a Namespace in the Kubernetes cluster to hold the EFK components, and a ServiceAccount that Fluentd will use to access the Kubernetes API.
```yaml
# Create the Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
# Create the ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: logging
  namespace: logging
```
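The original manifests stop at the ServiceAccount, but on an RBAC-enabled cluster Fluentd also needs permission to read Pod and Namespace metadata (for example when the Kubernetes metadata filter is enabled). The following is a minimal sketch of such a grant; the `fluentd-logging` resource names are illustrative assumptions, not part of the original setup.
```yaml
# Hypothetical RBAC sketch: grants the logging ServiceAccount read-only
# access to the Pod and Namespace metadata that Fluentd may query.
# Adjust or omit if RBAC is disabled in your cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd-logging
rules:
- apiGroups: [""]
  resources: ["pods", "namespaces"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentd-logging
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluentd-logging
subjects:
- kind: ServiceAccount
  name: logging
  namespace: logging
```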
##### Step 2: Deploy Elasticsearch
Deploy a single-node Elasticsearch instance and expose it to the rest of the cluster through a Service.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: logging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
        env:
        - name: discovery.type
          value: single-node
        ports:
        - containerPort: 9200
          name: http
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: logging
spec:
  selector:
    app: elasticsearch
  ports:
  - port: 9200
    targetPort: 9200
```
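Note that without a volume, this single-node Elasticsearch loses all indexed logs whenever its Pod restarts. If the data should survive restarts, one option is to back the data directory with a PersistentVolumeClaim; the sketch below is an assumption (the claim name and size are illustrative, and your cluster needs a default StorageClass). To use it, add a volume referencing the claim to the Deployment above and mount it at `/usr/share/elasticsearch/data`.
```yaml
# Hypothetical persistence sketch: a PVC for the Elasticsearch data directory.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data
  namespace: logging
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```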
##### Step 3: Deploy Fluentd
Deploy Fluentd as a DaemonSet so that one instance runs on every node, tails the logs of all Pods in the cluster, and forwards them to Elasticsearch.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: logging
  labels:
    k8s-app: fluentd-logging
data:
  fluent.conf: |
    # Tail the container log files on each node
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      read_from_head true
      <parse>
        @type json
      </parse>
    </source>
    # Forward everything to Elasticsearch
    <match **>
      @type elasticsearch
      host elasticsearch.logging.svc.cluster.local
      port 9200
      logstash_format true
      flush_interval 5s
    </match>
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: logging
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
    spec:
      serviceAccountName: logging
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.12-debian-elasticsearch7-1
        env:
        - name: FLUENTD_ARGS
          value: -q
        volumeMounts:
        - name: config-volume
          mountPath: /fluentd/etc
        # Host log directories must be mounted so Fluentd can tail them
        - name: varlog
          mountPath: /var/log
        - name: dockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config-volume
        configMap:
          name: fluentd-config
      - name: varlog
        hostPath:
          path: /var/log
      - name: dockercontainers
        hostPath:
          path: /var/lib/docker/containers
```
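By default this DaemonSet will not be scheduled onto control-plane nodes that carry the standard master taint, so logs from those nodes would be missed. If you want them collected too, a toleration can be added to the Pod spec above; this sketch assumes the conventional `node-role.kubernetes.io/master` taint key used by many distributions.
```yaml
# Hypothetical addition under spec.template.spec of the DaemonSet:
# lets Fluentd run on tainted control-plane nodes as well.
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
```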
##### Step 4: Deploy Kibana
Deploy a Kibana instance, point it at the Elasticsearch Service, and expose it through a Service of its own.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.6.2
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch.logging.svc.cluster.local:9200
        ports:
        - containerPort: 5601
          name: http
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
spec:
  selector:
    app: kibana
  ports:
  - port: 5601
    targetPort: 5601
```
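The Service above uses the default ClusterIP type, so Kibana is only reachable from inside the cluster (for a quick look, `kubectl port-forward` works). For lasting external access, one common option is a NodePort Service; the sketch below assumes NodePort is acceptable in your environment, and the port number is an arbitrary choice within the default range (30000-32767).
```yaml
# Hypothetical NodePort variant: exposes the Kibana UI on port 30601
# of every node in the cluster.
apiVersion: v1
kind: Service
metadata:
  name: kibana-external
  namespace: logging
spec:
  type: NodePort
  selector:
    app: kibana
  ports:
  - port: 5601
    targetPort: 5601
    nodePort: 30601
```
An Ingress in front of the ClusterIP Service is an equally valid choice if your cluster runs an ingress controller.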
##### Step 5: Configure application logging
Finally, make sure your applications actually get their logs to Fluentd. Containers that log to stdout/stderr are picked up automatically by the DaemonSet from step 3. For an application that writes log files inside its container, mount a volume at its log directory, as in the example below:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: logging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app-image:latest
        ports:
        - containerPort: 8080
        # Mount a volume at the application's log directory
        volumeMounts:
        - name: var-log
          mountPath: /var/log/my-app
      volumes:
      - name: var-log
        emptyDir: {}
```
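An emptyDir volume by itself is not visible to the node-level Fluentd DaemonSet, which only tails `/var/log/containers`. One way to bridge the gap, sketched below under that assumption, is a small sidecar container that streams the log file to its own stdout, where the DaemonSet picks it up like any other container log; the `busybox` image and the `app.log` file name are illustrative.
```yaml
# Hypothetical sidecar for the containers list of the Pod spec above:
# tails the shared log file and writes it to stdout so the node-level
# Fluentd DaemonSet collects it automatically.
- name: log-streamer
  image: busybox:1.36
  args: [/bin/sh, -c, 'tail -n+1 -F /var/log/my-app/app.log']
  volumeMounts:
  - name: var-log
    mountPath: /var/log/my-app
```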
That covers the full EFK deployment workflow, what each step does, and the corresponding code examples. Hopefully this article helps you get EFK running on Kubernetes quickly and start collecting and analyzing your logs.