For any enterprise, the importance of logs is self-evident; the real question is which log collection, analysis, and display platform to choose. Here are a few reasons to choose ELK: it is a very mature stack whose architecture fits a Kubernetes cluster well, the official Kubernetes documentation itself uses Elasticsearch in its sample, and the Kubernetes release downloaded from GitHub already ships with the corresponding .yaml files. The case for using ELK to collect logs is therefore quite strong.


Logs are critically important for any infrastructure or backend service system. Kubernetes, a project inspired by Google's internal container management system Borg, naturally includes support for logging. In "Logging Overview", the official documentation outlines the logging options available at several levels in Kubernetes and gives a reference architecture for cluster-level logging:


[Figure: cluster-level logging reference architecture, Fluentd + Elasticsearch + Kibana]


Kubernetes also provides a reference implementation:


– Logging backend: Elasticsearch stack (including Kibana)

– Logging agent: fluentd


01

Introduction


1. Fluentd is an open-source event and log collection system, used here to collect and process log data from each node.


2. Elasticsearch is an open-source search server based on Lucene. It provides a distributed, multi-tenant full-text search engine behind a RESTful web interface (a quick example follows this list).


3. Kibana is an open-source web UI for data visualization; it makes it easy to search, visualize, and analyze the collected logs.
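As a small, hypothetical illustration of Elasticsearch's RESTful interface: the queries below assume the elasticsearch-logging Service created in step 3, the logstash-* index naming that fluentd's Elasticsearch output commonly uses, and the kubernetes.namespace_name field added by fluentd's Kubernetes metadata enrichment.

# Run from a pod inside the cluster; the Service lives in kube-system
curl 'http://elasticsearch-logging.kube-system:9200/_cat/indices?v'

# Full-text search over collected logs
curl 'http://elasticsearch-logging.kube-system:9200/logstash-*/_search?q=kubernetes.namespace_name:kube-system&pretty'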


02

Workflow


The fluentd instance on each node monitors and collects that node's system and container logs, processes them, and sends the results to Elasticsearch. Elasticsearch aggregates the log data from all nodes, and Kibana then presents it through a web UI.
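To make the collection side concrete: with Docker's json-file logging driver (the default at the time), each container's stdout/stderr is written to a JSON log file on the node, and the kubelet symlinks these files into /var/log/containers. That is exactly why the fluentd DaemonSet below mounts both /var/log and /var/lib/docker/containers. A quick look on any node (file names are illustrative):

# Per-container symlinks maintained by the kubelet
ls -l /var/log/containers/
# Each entry resolves to a Docker-managed file like:
#   /var/lib/docker/containers/<container-id>/<container-id>-json.log
tail -n 5 /var/log/containers/*kube-system*.log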


03

Installation


1. Make sure the K8S cluster is working properly (a prerequisite, of course...), for example:
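A few standard health checks; nothing here is specific to this tutorial, and any healthy cluster should pass them:

kubectl get nodes                # all nodes should be Ready
kubectl get componentstatuses    # scheduler / controller-manager / etcd healthy
kubectl get pods -n kube-system  # system pods running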


2. Write fluentd.yaml. To get a fluentd instance running on every node, it is enough to set kind to DaemonSet.


apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: gcr.io/google-containers/fluentd-elasticsearch:1.20
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
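Once the manifests are applied in step 5, a quick way to confirm that exactly one fluentd pod landed on every node, using the name label from the pod template above:

kubectl get daemonset fluentd-elasticsearch -n kube-system
kubectl get pods -n kube-system -l name=fluentd-elasticsearch -o wide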


3. elasticsearch-rc.yaml & elasticsearch-svc.yaml


apiVersion: v1
kind: ReplicationController
metadata:
  name: elasticsearch-logging-v1
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 2
  selector:
    k8s-app: elasticsearch-logging
    version: v1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - image: gcr.io/google-containers/elasticsearch:v2.4.1
        name: elasticsearch-logging
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: es-persistent-storage
          mountPath: /data
      volumes:
      - name: es-persistent-storage
        emptyDir: {}


apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging
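Once created, Elasticsearch can be reached through the API server proxy, using the same /api/v1/proxy path style that the Kibana manifest below relies on. A minimal health check might look like:

kubectl get pods -n kube-system -l k8s-app=elasticsearch-logging
kubectl proxy &   # serves the API on localhost:8001 by default
curl 'http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_cluster/health?pretty'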


4. kibana-rc.yaml & kibana-svc.yaml (note: despite the "rc" in the file name, this manifest defines a Deployment)



apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
    spec:
      containers:
      - name: kibana-logging
        image: gcr.io/google-containers/kibana:v4.6.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
          requests:
            cpu: 100m
        env:
          - name: "ELASTICSEARCH_URL"
            value: "http://elasticsearch-logging:9200"
          - name: "KIBANA_BASE_URL"
            value: "/api/v1/proxy/namespaces/kube-system/services/kibana-logging"
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP



apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "Kibana"
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging
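With both Services in place, the Kibana UI should be reachable through the same API server proxy path that KIBANA_BASE_URL above points at:

kubectl proxy &
# then open in a browser:
#   http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging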


5. Run kubectl create -f ****** for each manifest; the exact order is up to you.
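One possible sequence, using the file names from steps 2 to 4 (adjust paths to wherever you saved them):

kubectl create -f fluentd.yaml
kubectl create -f elasticsearch-rc.yaml
kubectl create -f elasticsearch-svc.yaml
kubectl create -f kibana-rc.yaml
kubectl create -f kibana-svc.yaml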