Table of Contents

  • Plan
  • Deploying FileBeat
  • Helm deployment
  • YAML deployment
  • Deploying Elastic
  • Installing ECK
  • Installing Elasticsearch
  • Deploying Kibana


Plan

  1. Deploy FileBeat to collect logs from every node in the cluster
  2. FileBeat ships the logs to ElasticSearch, which stores them
  3. Deploy Kibana to visualize the ElasticSearch data

Each node runs a FileBeat that collects its logs and sends them to Elastic; Kibana displays the data:

Node ── FileBeat ─┐
Node ── FileBeat ─┼─ collect / send ──> Elastic ──> Kibana (display)
Node ── FileBeat ─┘

Deploying FileBeat

Helm deployment

Official docs: https://github.com/elastic/helm-charts/tree/master/filebeat

Deployment commands

helm repo add elastic https://helm.elastic.co
helm install fb elastic/filebeat -f values.yaml -n kube-system

The latest chart version is currently 8.5.1, but this article uses 7.17.
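Since the article targets the 7.17 chart rather than the latest, the install can be pinned to an explicit chart version; the commands below are a sketch (7.17.3 is just an example release — list the published versions first and pick one):

```shell
# List the published chart versions, then install a pinned 7.17.x release.
helm search repo elastic/filebeat --versions
helm install fb elastic/filebeat --version 7.17.3 -f values.yaml -n kube-system
```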

(screenshot)

The values.yaml file

Configuration reference: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-container.html

input.paths: glob patterns matching the log files to collect
input.fields: custom fields attached to the collected log events
input.fields_under_root: set to true so the fields are stored at the top level of the output document; see the article on writing Filebeat output to multiple Elasticsearch indices
setup.template: customizes the index template
setup.ilm.enabled: set to false; if ILM is enabled, the output.elasticsearch.index settings are ignored
output.elasticsearch.indices.index: the Elasticsearch index to write to
output.elasticsearch.indices.when.contains: matches the fields defined on the input

filebeatConfig:
  filebeat.yml: |
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*dev_permission-service*.log
      fields:
        app_id: "dev-permission"
      fields_under_root: true
        
    - type: container
      paths:
        - /var/log/containers/*dev_converter-service*.log
      fields:
        app_id: "dev-converter"
      fields_under_root: true
      
    setup.template.enabled: true
    setup.template.fields: fields.yml
    setup.template.name: "k8s"
    setup.template.pattern: "k8s-*"
    setup.ilm.enabled: false

    output.elasticsearch:
      index: "k8s-other"
      username: 'elastic'
      password: 'xxxxx'
      protocol: http
      hosts: ["http://elsmy.saas.api.gd-njc.com:80"]
      indices:
      - index: "k8s-dev-permission-%{+yyyy.MM.dd}"
        when.contains:
          app_id: "dev-permission"
      - index: "k8s-dev-converter-%{+yyyy.MM.dd}"
        when.contains:
          app_id: "dev-converter"
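The %{+yyyy.MM.dd} suffix in the index settings above makes Filebeat roll to a new index every day. As a rough local preview of the resulting name (Beats uses Joda-style date formatting; GNU date's closest equivalent is %Y.%m.%d):

```shell
# Preview today's index name for the dev-permission input defined above.
echo "k8s-dev-permission-$(date +%Y.%m.%d)"
```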

 

YAML deployment

Official docs: https://www.elastic.co/guide/en/beats/filebeat/current/running-on-kubernetes.html

Edit filebeat-kubernetes.yaml

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*dev_permission-service*.log
      fields:
        app_id: "dev-permission"
      fields_under_root: true
        
    - type: container
      paths:
        - /var/log/containers/*dev_converter-service*.log
      fields:
        app_id: "dev-converter"
      fields_under_root: true
      
    setup.template.enabled: true
    setup.template.fields: fields.yml
    setup.template.name: "k8s"
    setup.template.pattern: "k8s-*"
    setup.ilm.enabled: false

    output.elasticsearch:
      index: "k8s-other"
      username: 'elastic'
      password: 'xxxxx'
      protocol: http
      hosts: ["http://elsmy.saas.api.gd-njc.com:80"]
      indices:
      - index: "k8s-dev-permission-%{+yyyy.MM.dd}"
        when.contains:
          app_id: "dev-permission"
      - index: "k8s-dev-converter-%{+yyyy.MM.dd}"
        when.contains:
          app_id: "dev-converter"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:8.4.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0640
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: kube-system
roleRef:
  kind: Role
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat-kubeadm-config
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: kube-system
roleRef:
  kind: Role
  name: filebeat-kubeadm-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list
- apiGroups: ["apps"]
  resources:
    - replicasets
  verbs: ["get", "list", "watch"]
- apiGroups: ["batch"]
  resources:
    - jobs
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat
  # should be the namespace where filebeat is running
  namespace: kube-system
  labels:
    k8s-app: filebeat
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat-kubeadm-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""]
    resources:
      - configmaps
    resourceNames:
      - kubeadm-config
    verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---

Deployment command

kubectl apply -f filebeat-kubernetes.yaml
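After applying, the DaemonSet should schedule one Filebeat pod per node; a quick sanity check (assuming the kube-system namespace and labels used above):

```shell
kubectl -n kube-system get daemonset filebeat
kubectl -n kube-system get pods -l k8s-app=filebeat -o wide
```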

 

Deploying Elastic

Official docs: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html

Installing ECK

Official docs: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html

Install the CRDs and the operator

kubectl create -f https://download.elastic.co/downloads/eck/2.5.0/crds.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/2.5.0/operator.yaml

View the operator logs

kubectl -n elastic-system logs -f statefulset.apps/elastic-operator

(screenshot)


 

Installing Elasticsearch

Deployment command

kubectl apply -f deployment.yaml

The deployment.yaml file

namespace: the namespace to deploy into
tls: disables the TLS protocol; see: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-transport-settings.html
storageClassName: the storage class backing the data volumes (see the nfs-client article)
storage: the requested volume size, preset here to 10Gi; see: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-volume-claim-templates.html
resources: the minimum and maximum memory; see: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-managing-compute-resources.html
ES_JAVA_OPTS: the minimum and maximum JVM heap size; see: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-jvm-heap-dumps.html

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch # requires the ES operator (ECK) to be installed first
metadata:
  name: es
  namespace: els-test
spec:
  version: 7.11.2
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  nodeSets:
    - name: es
      count: 3
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi
            storageClassName: nfs-client
      podTemplate:
        spec:           
          containers:
            - name: elasticsearch
              env:
                - name: ES_JAVA_OPTS
                  value: "-Xms2g -Xmx2g"
              resources:
                requests:
                  memory: 3Gi
                limits:
                  memory: 3Gi

Deployment result

(screenshot)

Fixing the error

Error message:

ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Fix:

  1. Edit the config: vi /etc/sysctl.conf
  2. Add the setting: vm.max_map_count=655360
  3. Apply it: sysctl -p
  4. Verify: sysctl -a | grep vm.max_map_count
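The steps above change the setting by hand on each host. A common alternative, documented by ECK and sketched here on the assumption that the cluster allows privileged init containers, is to set the sysctl from the Elasticsearch podTemplate so every node that hosts a data pod gets it automatically:

```yaml
# Sketch: add under spec.nodeSets[].podTemplate.spec in deployment.yaml
initContainers:
  - name: sysctl
    securityContext:
      privileged: true
      runAsUser: 0
    command: ["sh", "-c", "sysctl -w vm.max_map_count=262144"]
```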

Log in
Username: elastic
Password:

kubectl  get secret es-es-elastic-user -n els-test -o=jsonpath='{.data.elastic}' | base64 --decode
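The command works because Secret values come back base64-encoded; a minimal local illustration with a made-up password (not the real secret):

```shell
# Encode a sample value the way it would appear in the Secret, then decode it back.
encoded=$(printf 'example-password' | base64)
printf '%s\n' "$encoded" | base64 --decode
```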

(screenshot)

Configure an Ingress or a NodePort; see the article on Ingress deployment.
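As a sketch (the host is just this article's example, and networking.k8s.io/v1 assumes Kubernetes 1.19+), an Ingress can point at the es-es-http Service that ECK creates ({cluster-name}-es-http, port 9200):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: es-ingress
  namespace: els-test
spec:
  rules:
    - host: elsmy.saas.api.gd-njc.com   # example host used elsewhere in this article
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: es-es-http   # ECK Service name: {cluster-name}-es-http
                port:
                  number: 9200
```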

(screenshot)

A response like the following means Elasticsearch is deployed successfully; the cluster_name is es.

(screenshot)

View the cluster's index information: /_cat/indices?v
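For example, through the exposed endpoint (host and credentials as configured earlier in this article; the password is a placeholder):

```shell
curl -u elastic:xxxxx 'http://elsmy.saas.api.gd-njc.com:80/_cat/indices?v'
```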

(screenshot)

 

Deploying Kibana

Deployment command

kubectl apply -f kibana.yaml

The kibana.yaml file

namespace: the same namespace as Elasticsearch
tls: disables the TLS protocol
elasticsearchRef: references the Elasticsearch resource by its name (es above)

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: els-test
spec:
  version: 7.11.2
  count: 1
  elasticsearchRef:
    name: es
  http:
    tls:
      selfSignedCertificate:
        disabled: true

Deployment result

(screenshot)

Configure an Ingress or a NodePort; see the article on Ingress deployment.
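The same pattern applies to Kibana, which ECK exposes through the kibana-kb-http Service ({kibana-name}-kb-http, port 5601); a minimal sketch with a hypothetical host:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: els-test
spec:
  rules:
    - host: kibana.example.com   # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kibana-kb-http   # ECK Service name: {kibana-name}-kb-http
                port:
                  number: 5601
```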

(screenshot)

Configure an index pattern: [Manage space] -> [Index patterns]

(screenshot)


Create the index pattern

(screenshot)

In Discover, you can select the configured index pattern

(screenshot)

Select the message field to view only the log message

(screenshot)