The steps to deploy ELK on a Kubernetes (K8s) cluster are as follows:

Step | Description
---- | ------
1. Create the Elasticsearch service | Deploy Elasticsearch to store and index log data
2. Deploy the Kibana service | Deploy Kibana to visualize log data
3. Deploy the Logstash service | Deploy Logstash to collect, filter, and forward log data
4. Configure the Logstash pipeline | Define the data collection and processing rules in a Logstash pipeline
5. Create a Deployment | Create a Deployment that runs Elasticsearch, Kibana, and Logstash
6. Create a Service | Create a Service that exposes Elasticsearch, Kibana, and Logstash

Below are the manifests and notes for each step:

### 1. Create the Elasticsearch service
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.3.2
          resources:
            limits:
              memory: "2Gi"
            requests:
              memory: "1Gi"
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - containerPort: 9300
              name: inter-node
              protocol: TCP
```
Notes:
- Creates a Deployment named elasticsearch using the official Elasticsearch 7.3.2 image
- Sets memory requests to 1Gi and limits to 2Gi
- Exposes port 9200 for the REST API and port 9300 for inter-node communication (see the sketch below for two additions this manifest typically needs)
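
On its own, this Deployment is usually not enough: a single-replica Elasticsearch 7.x fails its discovery bootstrap check with the default settings, and the Logstash pipeline in step 4 addresses the cluster by the hostname `elasticsearch`, which requires a matching Service. The following is a minimal sketch of both additions; the Service name and the single-node discovery setting are assumptions based on this tutorial, not part of the original manifests:

```
# A ClusterIP Service named "elasticsearch", so other Pods (for example Logstash
# in step 4) can reach the cluster at elasticsearch:9200.
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  selector:
    app: elasticsearch
  ports:
    - name: rest
      port: 9200
      targetPort: 9200
    - name: inter-node
      port: 9300
      targetPort: 9300
---
# Also add this env entry to the elasticsearch container in the Deployment above,
# so a single-replica Elasticsearch 7.x passes its discovery bootstrap check:
#
#          env:
#            - name: discovery.type
#              value: single-node
```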

### 2. Deploy the Kibana service
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.3.2
          resources:
            limits:
              memory: "1Gi"
            requests:
              memory: "512Mi"
          ports:
            - containerPort: 5601
              name: http
              protocol: TCP
```
Notes:
- Creates a Deployment named kibana using the official Kibana 7.3.2 image
- Sets memory requests to 512Mi and limits to 1Gi
- Exposes port 5601 for HTTP access to Kibana
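
The official Kibana image defaults to looking for Elasticsearch at http://elasticsearch:9200. If your Elasticsearch Service uses a different name or namespace, point Kibana at it explicitly with the ELASTICSEARCH_HOSTS environment variable. A minimal sketch, assuming the Service name used in this tutorial:

```
# Hypothetical addition to the kibana container spec above: set the Elasticsearch
# URL explicitly instead of relying on the image default.
          env:
            - name: ELASTICSEARCH_HOSTS
              value: "http://elasticsearch:9200"
```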

### 3. Deploy the Logstash service
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  labels:
    app: logstash
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:7.3.2
          volumeMounts:
            - mountPath: /usr/share/logstash/pipeline
              name: logstash-pipeline
          ports:
            - containerPort: 5044
              name: beats
              protocol: TCP
      volumes:
        - name: logstash-pipeline
          configMap:
            name: logstash-pipeline-config
            items:
              - key: logstash.conf
                path: logstash.conf
```
Notes:
- Creates a Deployment named logstash using the official Logstash 7.3.2 image
- Mounts the logstash.conf configuration file from a ConfigMap volume into the container at /usr/share/logstash/pipeline
- Exposes port 5044 to receive log data over the Beats protocol (see the Service sketch below for how shippers reach this port)
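
Beats shippers running in the cluster need a stable address for port 5044. A minimal ClusterIP Service sketch for that; the name logstash is an assumption, so adjust it to whatever hostname your Filebeat (or other Beats) output points at:

```
# Hypothetical Service so in-cluster Beats shippers can send events to logstash:5044.
apiVersion: v1
kind: Service
metadata:
  name: logstash
spec:
  selector:
    app: logstash
  ports:
    - name: beats
      port: 5044
      targetPort: 5044
```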

### 4. Configure the Logstash pipeline
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-pipeline-config
data:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }

    filter {
      # Add filter rules here
    }

    output {
      elasticsearch {
        hosts => ["elasticsearch:9200"]
        index => "logs-%{+YYYY.MM.dd}"
      }
    }
```
Notes:
- Creates a ConfigMap named logstash-pipeline-config that holds the Logstash pipeline configuration
- In logstash.conf, the beats input plugin listens on port 5044 for log data shipped over the Beats protocol
- The filter block is where processing rules go (an illustrative example follows below)
- The elasticsearch output plugin sends the processed events to Elasticsearch on port 9200, writing to a daily index named logs-YYYY.MM.dd
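
The filter block above is intentionally left empty. Purely as an illustration, and assuming your applications emit Apache/Nginx-style access logs (which may not match your workloads), a grok + date filter could look like this:

```
filter {
  grok {
    # Parse access-log lines into structured fields (assumes the
    # COMBINEDAPACHELOG format; replace with a pattern that matches your logs).
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    # Use the timestamp parsed from the log line as the event's @timestamp.
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
```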

### 5. Create a Deployment
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elk
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elk
  template:
    metadata:
      labels:
        app: elk
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.3.2
          resources:
            limits:
              memory: "2Gi"
            requests:
              memory: "1Gi"
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.3.2
          resources:
            limits:
              memory: "1Gi"
            requests:
              memory: "512Mi"
          ports:
            - containerPort: 5601
              name: http
              protocol: TCP
        - name: logstash
          image: docker.elastic.co/logstash/logstash:7.3.2
          volumeMounts:
            - mountPath: /usr/share/logstash/pipeline
              name: logstash-pipeline
          ports:
            - containerPort: 5044
              name: beats
              protocol: TCP
      volumes:
        - name: logstash-pipeline
          configMap:
            name: logstash-pipeline-config
            items:
              - key: logstash.conf
                path: logstash.conf
```
Notes:
- Creates a Deployment named elk that runs Elasticsearch, Kibana, and Logstash together in a single Pod; this is an all-in-one alternative to the separate Deployments from steps 1–3
- Sets memory requests and limits for each container
- Exposes the same ports as before (9200, 5601, 5044); see the sketch after this list for how the containers can reach each other over localhost
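
Because the three containers live in one Pod, they share a network namespace, so Kibana and the Logstash output can address Elasticsearch as localhost:9200 instead of going through a Service. A hedged sketch of the Kibana container in this all-in-one variant; the env entry is an assumption, not part of the original manifest:

```
# Hypothetical tweak for the all-in-one Pod: containers in the same Pod share
# localhost, so Kibana can talk to the elasticsearch container directly.
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.3.2
          env:
            - name: ELASTICSEARCH_HOSTS
              value: "http://localhost:9200"
```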

### 6. Create a Service
```
apiVersion: v1
kind: Service
metadata:
  name: elk
spec:
  selector:
    app: elk
  ports:
    - name: elasticsearch
      port: 9200
      targetPort: 9200
    - name: kibana
      port: 5601
      targetPort: 5601
    - name: logstash
      port: 5044
      targetPort: 5044
```
Notes:
- Creates a Service named elk that provides access to Elasticsearch, Kibana, and Logstash
- The selector app: elk associates the Service with the Pods of the combined Deployment from step 5
- Exposes ports 9200, 5601, and 5044 for Elasticsearch, Kibana, and Logstash respectively
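
The Service above defaults to type ClusterIP, so it is only reachable from inside the cluster. If you want to open Kibana in a browser, one option is to change the Service type; the NodePort value below is an assumption, and the right exposure method (NodePort, LoadBalancer, Ingress) depends on your environment:

```
# Hypothetical variant: expose the elk Service as a NodePort so Kibana is
# reachable at <node-ip>:30601 from outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: elk
spec:
  type: NodePort
  selector:
    app: elk
  ports:
    - name: kibana
      port: 5601
      targetPort: 5601
      nodePort: 30601
```

For quick local access without changing the Service, `kubectl port-forward svc/elk 5601:5601` also works.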

With the steps above, you can deploy ELK on a Kubernetes cluster and collect, store, and visualize logs. Feel free to modify and extend these manifests to fit your own log analysis and monitoring needs.