I. Requirement: under Kubernetes, every application update deletes the Pod, and its log files are deleted with it. To make logs visible, persistent, and monitorable, we need to build a logging system.
II. Solution
log-pilot + elasticsearch + kibana
log-pilot: collects the logs
elasticsearch: stores the logs
kibana: views the logs
III. Deployment
1. Deploy log-pilot
Reference: https://github.com/FarseerNet/log-pilot
Note: this deployment does not use the official log-pilot image, because the official image only supports ES 6, which has no account authentication; this third-party image supports both ES 7 and ES 8. It has one pitfall: the author writes the ES connection details into the config file in encrypted form, and with that approach the connection to ES sometimes fails. When I went into the log-pilot container, the filebeat.yaml config file contained no output.elasticsearch section, so I switched the ES details to environment variables instead.
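If the connection fails again, you can exec into a running log-pilot container and inspect the Filebeat config it generated. The exact config path varies by image version, so this sketch locates it first rather than assuming a path; the file should contain an output.elasticsearch section pointing at your cluster:

# locate the generated Filebeat config, then print it
kubectl exec -it <log-pilot-pod-name> -- sh -c "find / -name 'filebeat.y*ml' 2>/dev/null"
kubectl exec -it <log-pilot-pod-name> -- cat <path-found-above>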
log-pilot.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-pilot
  labels:
    app: log-pilot
    k8s.kuboard.cn/layer: cloud
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: log-pilot
  template:
    metadata:
      name: log-pilot
      labels:
        app: log-pilot
    spec:
      # Uncomment to allow scheduling onto master nodes
      # tolerations:
      #   - key: node-role.kubernetes.io/master
      #     effect: NoSchedule
      containers:
        - name: log-pilot
          image: farseernet/log-pilot:7.x
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 200m
              memory: 200Mi
          securityContext:
            capabilities:
              add:
                - SYS_ADMIN
          env:
            - name: "LOGGING_OUTPUT"
              value: "elasticsearch"
            - name: "ELASTICSEARCH_HOSTS"
              value: "es-es-http:9200"
            - name: "ELASTICSEARCH_USER"
              value: "admin"
            - name: "ELASTICSEARCH_PASSWORD"
              value: "123456"
            - name: "NODE_NAME"
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: sock
              mountPath: /var/run/docker.sock
            - name: root
              mountPath: /host
              readOnly: true
            - name: varlib
              mountPath: /var/lib/filebeat
            - name: varlog
              mountPath: /var/log/filebeat
            - name: localtime
              mountPath: /etc/localtime
              readOnly: true
      volumes:
        - name: sock
          hostPath:
            path: /var/run/docker.sock
        - name: root
          hostPath:
            path: /
        - name: varlib
          hostPath:
            path: /var/lib/filebeat
            type: DirectoryOrCreate
        - name: varlog
          hostPath:
            path: /var/log/filebeat
            type: DirectoryOrCreate
        - name: localtime
          hostPath:
            path: /etc/localtime
kubectl apply -f log-pilot.yaml
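After applying, confirm that a log-pilot Pod is running on each node and that its logs show a successful ES connection:

kubectl get pods -l app=log-pilot -o wide
kubectl logs -l app=log-pilot --tail=20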
2. Deploy elasticsearch and kibana
Reference: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html
Note: ES and Kibana are deployed here the official way, inside the k8s cluster. For production, deploying them separately with docker-compose is recommended so they are isolated from the production cluster, because ES consumes a lot of resources; if you go the docker-compose route, update the ES connection details in log-pilot.yaml accordingly.
kubectl create -f https://download.elastic.co/downloads/eck/2.4.0/crds.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/2.4.0/operator.yaml
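The ECK operator runs in the elastic-system namespace; check that it is up before creating any resources:

kubectl -n elastic-system get pods
kubectl -n elastic-system logs statefulset/elastic-operator --tail=20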
es.yaml (this uses NFS for data persistence; see the NFS article):
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: es
spec:
  version: 8.3.3
  http:
    tls:
      selfSignedCertificate:
        disabled: true
    service:
      spec:
        type: NodePort
        ports:
          - name: http
            port: 9200
            targetPort: 9200
  nodeSets:
    - name: default
      count: 1
      config:
        node.store.allow_mmap: false
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
            storageClassName: nfs-client
kubectl apply -f es.yaml
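ECK reports cluster status through the custom resource; wait until HEALTH is green and PHASE is Ready before moving on:

kubectl get elasticsearch es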
kb.yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kb
spec:
  version: 8.3.3
  http:
    tls:
      selfSignedCertificate:
        disabled: true
    service:
      spec:
        type: ClusterIP
        ports:
          - port: 5601
            targetPort: 5601
  count: 1
  elasticsearchRef:
    name: es
kubectl apply -f kb.yaml
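kb.yaml uses a ClusterIP service, so reach Kibana from your workstation with a port-forward; ECK names the service <name>-kb-http, which here is kb-kb-http:

kubectl get kibana kb
kubectl port-forward service/kb-kb-http 5601

Then open http://localhost:5601 (plain http, since TLS is disabled in kb.yaml above).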
3. Modify nginx.yaml so that log-pilot can collect the nginx logs
nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          volumeMounts:
            # Mount the nginx log directory
            - mountPath: /var/log/nginx
              name: nginxlogs
          env:
            # Sets the index name to "nginx"
            - name: aliyun_logs_nginx
              value: "/var/log/nginx/*.log"
      volumes:
        # Back the log mount with an emptyDir
        - name: nginxlogs
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
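kubectl apply -f nginx.yaml

The env var name follows log-pilot's convention: any variable prefixed aliyun_logs_ is picked up, and the suffix (here nginx) becomes the index name. For a container that logs to stdout instead of files, log-pilot's documented value is "stdout", e.g.:

env:
  # collect the container's stdout instead of a file path
  - name: aliyun_logs_nginx
    value: "stdout"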
IV. Create the admin account and verify that indices are generated
1. Log in to Kibana and create the admin account (log-pilot is configured with an admin account in ES that does not exist yet)
Default user: elastic; retrieve its password with:
kubectl get secret es-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'
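The account can be created in the Kibana UI (Stack Management > Users), or directly through the ES security API in Dev Tools. Granting superuser below is my assumption of the simplest role that satisfies log-pilot, not a hardening recommendation; the password must match ELASTICSEARCH_PASSWORD in log-pilot.yaml:

POST /_security/user/admin
{
  "password": "123456",
  "roles": ["superuser"]
}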
2. Access the nginx service and verify that an index has been generated
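In Kibana Dev Tools (or with curl against the ES NodePort), check for the new index; the nginx* pattern assumes log-pilot names indices after the env var suffix:

GET _cat/indices/nginx*?v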
V. Create an index lifecycle policy that keeps indices for 90 days
1. Create the lifecycle policy, configured to retain indices for 90 days (note min_age below: without it, the delete phase would run immediately):
PUT _ilm/policy/my_policy
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
2. Create an index template and associate the lifecycle policy with it, as in the sketch below.
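The template name and index pattern here are assumptions; adjust the pattern to whatever indices log-pilot actually creates:

PUT _index_template/nginx_template
{
  "index_patterns": ["nginx*"],
  "template": {
    "settings": {
      "index.lifecycle.name": "my_policy"
    }
  }
}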
From here on, indices are kept for 90 days and then deleted automatically.
Finally, if you want alerts on error messages in the logs, route the logs collected by log-pilot through Logstash instead: change the ES connection settings in log-pilot to point at Logstash, configure a filter in Logstash that uses grok to split each log line into fields, write the result to ES, import the ES data into Grafana, and configure alert rules there. That gives you log monitoring and alerting.
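A minimal Logstash pipeline sketch under those assumptions: log-pilot's LOGGING_OUTPUT is switched to logstash so Filebeat ships to port 5044, and the grok pattern below is only an illustration for the default nginx access-log format:

input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    # split each combined-format access-log line into fields
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => ["es-es-http:9200"]
    user => "admin"
    password => "123456"
    index => "nginx-%{+YYYY.MM.dd}"
  }
}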