1. After helm upgrade, Kibana stops collecting data; the release has to be deleted and reinstalled, then retested. The externalAccess parameter is enabled here.

# helm upgrade kafka -n logging .
# helm -n logging delete kafka      // Kibana cannot collect container data after the upgrade
# vim /root/EFK/k8s/efk-7.10.2/kafka/kafka/values.yaml
//bitnami/kafka:2.8.0-debian-10-r30
externalAccess:
  ## Enable Kubernetes external cluster access to Kafka brokers
  enabled: true
  service:
    ## Service type. Allowed values: LoadBalancer or NodePort
    type: NodePort
    port: 9094
    ## Array of node ports used for each Kafka broker. Length must be the same as replicaCount
    ## Example:
    ## nodePorts:
    ##   - 30001
    ##   - 30002
    nodePorts: [30001]
    ## Use worker host ips
    useHostIPs: true
externalZookeeper:
  ## Server or list of external zookeeper servers to use.
  servers: zookeeper
# helm -n logging install kafka .      // right after this deployment, Kibana can collect container data
# helm -n logging list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
kafka logging 1 2022-12-05 22:33:59.787183138 +0800 CST deployed kafka-12.20.0 2.8.0
[root@k8s-master01 kafka]# k -n logging get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kafka ClusterIP 10.16.35.19 <none> 9092/TCP 19m
kafka-0-external NodePort 10.16.102.31 <none> 9094:30001/TCP 19m
kafka-headless ClusterIP None <none> 9092/TCP,9093/TCP 19m

***: With a filebeat process running on an in-cluster host such as node04, hosts: set to 192.168.31.217:30001 or 10.16.102.31:9094 works, but 10.16.35.19:9092 does not. In other words, once externalAccess is configured, the kafka service and its port 9092 are unusable from the node, while the kafka-0-external service and its port 9094 work.
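A plausible explanation, not verified in these notes: a Kafka client first bootstraps against the address in hosts:, then reconnects to whatever address the broker advertises. With externalAccess enabled, the INTERNAL listener is advertised as kafka-0.kafka-headless.logging.svc.cluster.local:9092, resolvable only through cluster DNS, so a host process can bootstrap over 9092 but fails on the follow-up connection; the EXTERNAL listener advertises the node IP (useHostIPs: true), which the host can reach. The advertised addresses can be dumped with kcat (kafkacat), if it is installed on the node:
# kcat -L -b 192.168.31.217:30001      // metadata via the external listener: brokers advertise nodeIP:3000x
# kcat -L -b 10.16.35.19:9092          // metadata via 9092: brokers advertise the kafka-0.kafka-headless... names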


***: Sometimes Kibana collects no data; running helm delete kafka and then install again fixes it!

Most likely, the burst of log volume collected when filebeat first starts blocks Kafka; it is unclear whether it would recover on its own given time. Tested: deleting and recreating the pod kafka-0 restores collection.
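A minimal recovery sketch, assuming the broker is managed by the chart's StatefulSet so the controller recreates the deleted pod automatically:
# k -n logging delete pod kafka-0      // StatefulSet recreates kafka-0; collection resumed after the restart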

2. Collect logs with filebeat on AirNet-FDP1; hosts outside the cluster use the kafka-0-external service (hostIP:30001).

filebeat.inputs:
- input_type: log
  paths:
    - /home/cdatc/AirNet/bin/log/FDP1_fdp_*.log      # ---OK: wildcard * matches a class of log files
    # - /home/cdatc/AirNet/bin/log/FDP1_scc_20221205_03.log      # ---OK: a single log file
    # - /home/cdatc/AirNet/bin/log/*      # ---OK: every file in the directory, including archived tar.gz files
  fields:
    # to_test: "kafka-clusterIP: 10.16.35.19:9092"      # ---filebeat process on in-cluster node04: NOK
    to_test: "kafka-0-external NodePort: 192.168.31.217:30001"      # ---filebeat process on out-of-cluster AirNet-FDP1: OK
output.kafka:
  hosts: ["192.168.31.217:30001"]
  topic: "filebeat-mi"
  codec.json:
    pretty: false
  keep_alive: 30s
# ./filebeat -e -d "*"      # ---Kibana collects the logs normally
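To confirm messages actually reach the topic independently of Kibana, a console consumer can read it back through the same NodePort (a sketch; kafka-console-consumer.sh is assumed available, e.g. inside the kafka-0 pod or any Kafka client image):
# kafka-console-consumer.sh --bootstrap-server 192.168.31.217:30001 --topic filebeat-mi --from-beginning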

3. To delete ES data, run commands in the Kibana console's Dev Tools: Console | Kibana Guide | Elastic

DELETE /logstash*                    # delete all data in the logstash* indices
POST /filebeat-*/_delete_by_query    # delete the documents matched by the query
{
  "query": {
    "range": {
      "@timestamp": {
        "lt": "now-1d",
        "format": "epoch_millis"
      }
    }
  }
}
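To preview how many documents the range query would remove before running the delete, the same body can be sent to the standard _count API first:
GET /filebeat-*/_count
{
  "query": {
    "range": {
      "@timestamp": { "lt": "now-1d", "format": "epoch_millis" }
    }
  }
}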

4. Test: use helm to deploy a Kafka cluster in the public-service namespace, and create a Kafka client pod in the same namespace for testing.

# helm -n public-service upgrade kafka .
Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:
kafka.public-service.svc.cluster.local
Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:
kafka-0.kafka-headless.public-service.svc.cluster.local:9092
kafka-1.kafka-headless.public-service.svc.cluster.local:9092
kafka-2.kafka-headless.public-service.svc.cluster.local:9092
To connect to your Kafka server from outside the cluster, follow the instructions below:
Kafka brokers domain: Use your provided hostname to reach Kafka brokers, kafka.atc.com
Kafka brokers port: You will have a different port for each Kafka broker starting at 909
# k -n public-service get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kafka ClusterIP 10.16.192.38 <none> 9092/TCP 30d
kafka-0-external ClusterIP 10.16.135.56 <none> 9094/TCP 4h26m
kafka-1-external ClusterIP 10.16.171.62 <none> 9094/TCP 4h26m
kafka-2-external ClusterIP 10.16.82.238 <none> 9094/TCP 4h26m
kafka-headless ClusterIP None <none> 9092/TCP,9093/TCP 30d
# kubectl run kafka-client --restart='Never' --image core.harbor.domain/mizy/kafka:3.3.1-debian-11-r1 --namespace public-service --command -- sleep infinity
# kubectl exec --tty -i kafka-client --namespace public-service -- bash
kafka-client:/$ kafka-console-producer.sh --broker-list kafka-0.kafka-headless.public-service.svc.cluster.local:9092 --topic test
// The above works with kafka, kafka-headless, and their corresponding IPs; kafka-0(1/2)-external and their corresponding IPs do not.
kafka-client:/$ kafka-console-consumer.sh --bootstrap-server kafka.public-service.svc.cluster.local:9092 --topic test --from-beginning
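The same client pod can also list and describe topics to confirm broker reachability and that the test topic exists (standard Kafka CLI tools shipped in the client image):
kafka-client:/$ kafka-topics.sh --bootstrap-server kafka.public-service.svc.cluster.local:9092 --list
kafka-client:/$ kafka-topics.sh --bootstrap-server kafka.public-service.svc.cluster.local:9092 --describe --topic test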

***: Using a pod in the logging namespace (sidecar: filebeat), changing hosts in the configmap's filebeat.yml to the kafka in the public-service namespace ["kafka.public-service.svc.cluster.local:9092"] tests OK.

***: But from an in-cluster node such as node04, a filebeat process cannot use the CLUSTER-IP of the kafka service in the public-service namespace (even though node04 can ping it).
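A plausible cause, consistent with the behavior in item 1 and not verified here: the host can open the bootstrap connection to the ClusterIP, but the broker then advertises kafka-N.kafka-headless.public-service.svc.cluster.local:9092, which the host cannot resolve since it does not use the cluster DNS. A quick check from node04 with standard tools:
# nc -zv 10.16.192.38 9092      // TCP connect to the ClusterIP succeeds
# nslookup kafka-0.kafka-headless.public-service.svc.cluster.local      // fails on the host: the name only exists in cluster DNS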

# k -n logging edit cm logstash-configmap      // first switch logstash to the kafka in the public-service namespace
bootstrap_servers => "kafka:9092"
      -----> change "kafka:9092" to "kafka.public-service.svc.cluster.local:9092"
# k -n logging edit cm filebeatconf
apiVersion: v1
data:
  filebeat.yml: |-
    filebeat.inputs:
    - input_type: log
      paths:
        - /data/log/*/*.log
      tail_files: true
    output.kafka:
      hosts: ["kafka.public-service.svc.cluster.local:9092"]      // pods using this configmap test OK

5. After switching the kafka cluster in the public-service namespace to type: NodePort via helm upgrade, testing shows that filebeat processes both inside and outside the cluster collect logs OK!

# k -n public-service get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kafka ClusterIP 10.16.192.38 <none> 9092/TCP 30d
kafka-0-external NodePort 10.16.135.56 <none> 9094:31001/TCP 5h19m
kafka-1-external NodePort 10.16.171.62 <none> 9094:31002/TCP 5h19m
kafka-2-external NodePort 10.16.82.238 <none> 9094:31003/TCP 5h19m
kafka-headless ClusterIP None <none> 9092/TCP,9093/TCP 30d
Setting the filebeat.yml hosts entry to ["192.168.31.216:31002"], ["192.168.31.218:31001"], or ["192.168.31.216:31003"] all tested OK!
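Before starting filebeat against a new NodePort, the config syntax and port reachability can be checked from the host first (a sketch with standard tooling):
# ./filebeat test config -c filebeat.yml      # ---validates filebeat.yml
# nc -zv 192.168.31.216 31002                 # ---confirms the NodePort answers from this host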

6. The Kibana index name is miflie-2022.12.06; the index prefix "miflie-" is defined in logstash.conf in the Config Maps: index => \"miflie-%{+YYYY.MM.dd}\"

  "logstash.conf": "# all input will come from filebeat, no local logs  
output {
stdout{ codec=>rubydebug}
if [type] == \"filebeat-mi\"{
elasticsearch {
hosts => [\"elasticsearch-logging-0.elasticsearch-logging:9200\",\"elasticsearch-logging-1.elasticsearch-logging:9200\"]
index => \"miflie-%{+YYYY.MM.dd}\"
}
} else{
elasticsearch {
hosts => [\"elasticsearch-logging-0.elasticsearch-logging:9200\",\"elasticsearch-logging-1.elasticsearch-logging:9200\"]
index => \"other-input-%{+YYYY.MM.dd}\"
}
}
}
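The input side is elided above ("all input will come from filebeat"); for the [type] conditional to match, the kafka input would have to tag events accordingly. A hypothetical minimal input block, using standard logstash kafka-input options (the actual block is not shown in these notes):
input {
  kafka {
    bootstrap_servers => "kafka.public-service.svc.cluster.local:9092"
    topics => ["filebeat-mi"]
    codec => json
    type => "filebeat-mi"      # sets [type] checked by the output conditional above
  }
}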
filebeat sets topic: \"filebeat-mi\" and, as the message producer, sends to kafka; logstash consumes the messages, routes to elasticsearch based on the topic, and defines the index name (i.e., the Kibana index name).
{
  "filebeat.yml": "filebeat.inputs:
    - input_type: log
      paths:
        - /data/*.log
      tail_files: true
      fields:
        to_test: \"kafka:9092\"
    output.kafka:
      hosts: [\"kafka:9092\"]
      topic: \"filebeat-mi\"
      codec.json:
        pretty: false
      keep_alive: 30s"
}
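Once events flow end to end, the new index can be confirmed in Dev Tools with the standard _cat API:
GET _cat/indices/miflie-*?v      # should list miflie-2022.12.06 after logstash writes the first events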