1. Introduction

kube-prometheus ships with monitoring for the cluster's own nodes out of the box. In a real project, however, we also need to monitor resources outside the cluster, including the basic resources of external nodes. Standing up a separate, traditional Prometheus just for out-of-cluster monitoring is clearly not what we want.

2. Implementation

There is very little documentation online about monitoring Linux nodes outside the cluster with kube-prometheus, possibly none at all. I don't know whether that is because it doesn't matter or because nobody has the need, but since the need exists here, let's get it done. Once you understand how kube-prometheus ServiceMonitor discovery works, the rest falls into place.

2.1 Deploying node_exporter

Install node_exporter on the target server and extract it into its working directory:

# tar zxvf node_exporter-1.0.1.linux-amd64.tar.gz -C /usr/local/
# cd /usr/local/
# mv node_exporter-1.0.1.linux-amd64 node_exporter
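
Optionally, confirm the binary runs before wiring it into systemd (a quick sanity check; --version just prints build information):

# /usr/local/node_exporter/node_exporter --version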

Configure a systemd unit so it starts as a service:

# vim  /usr/lib/systemd/system/node_exporter.service 
[Unit]
Description=node_exporter
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/node_exporter/node_exporter

[Install]
WantedBy=multi-user.target

Start it and verify:

# systemctl daemon-reload
# systemctl start node_exporter
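
A quick check that the exporter is up and serving metrics on its default port 9100 (run on the node itself):

# systemctl status node_exporter
# curl -s http://localhost:9100/metrics | head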

[Screenshot: Kube-prometheus monitoring an external Linux host with node_exporter]

2.2 Writing the Service YAML

Define a Service that fronts the IP of the node to be monitored and exposes it inside the cluster, so that the Prometheus instance running in Kubernetes can reach the node's metrics endpoint. In short, the Service proxies access to the node's metrics endpoint. Note that the Service deliberately has no selector: Kubernetes binds it to the Endpoints object that shares its name and namespace, which is why the Endpoints resource is defined first below.

#vim  nodeExporter-service.yaml 
apiVersion: v1
kind: Endpoints
metadata:
  labels:
    app.kubernetes.io/name: external-node-exporter    # labels on the Endpoints resource
  name: external-node-exporter # Endpoints name; must match the Service name below
  namespace: monitoring   # namespace it belongs to
subsets:  # one or more addresses of the backend instances
- addresses:
  - ip: x.x.x.x # IP of the node running node_exporter; to monitor more nodes, add more IPs here
  ports:  # ports used by the addresses above
  - name: http-metrics
    port: 9100
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: external-node-exporter # Service labels; the ServiceMonitor selects on this label
  name: external-node-exporter # Service name; must match the Endpoints name above
  namespace: monitoring   # namespace it belongs to
spec:
  ports: # ports exposed by the Service and the backend ports they map to
  - name: http-metrics   # port name, referenced by the ServiceMonitor
    port: 9100  # port exposed by the Service
    protocol: TCP  # protocol used by the port
    targetPort: 9100  # backend port, i.e. the node_exporter port on the external node
  sessionAffinity: None  # no session affinity; requests may be routed to any backend
  type: ClusterIP   # cluster-internal access only; suitable as an in-cluster entry point to the backend

Deploy the Service YAML:

#kubectl apply -f nodeExporter-service.yaml

[Screenshot: Kube-prometheus monitoring an external Linux host with node_exporter]
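
Because the Service has no selector, it is worth confirming that it has been paired with the manually defined Endpoints of the same name:

# kubectl -n monitoring get svc,endpoints external-node-exporter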

Test that the metrics endpoint returns data correctly.
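
A rough sketch of that test from inside the cluster (replace the ClusterIP placeholder with the address reported by kubectl; any pod or cluster node with curl will do):

# kubectl -n monitoring get svc external-node-exporter
# curl -s http://<CLUSTER-IP>:9100/metrics | head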

[Screenshot: Kube-prometheus monitoring an external Linux host with node_exporter]

2.3 Writing the ServiceMonitor YAML

The previous step made the metrics reachable through the Service ClusterIP; the next question is how to have Prometheus discover the target automatically. For that we define a ServiceMonitor and point its selector at the Service's label.

#vim  nodeExporter-serviceMonitor.yaml 
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor # CRD defined by prometheus-operator
metadata:
  name: external-node-exporter 
  namespace: monitoring
  labels:
    k8s-apps: external-node-exporter
spec:
  jobLabel: app.kubernetes.io/name 
  selector:
    matchLabels:
      app.kubernetes.io/name: external-node-exporter
  namespaceSelector:
    matchNames:  # namespaces in which to discover the Service; more than one can be listed
    - monitoring
  endpoints:
  - port: http-metrics # port to scrape; this is the Service port name, i.e. spec.ports[].name in the Service YAML
    interval: 15s # scrape interval

Create the ServiceMonitor resource:

#kubectl apply -f nodeExporter-serviceMonitor.yaml
#kubectl -n  monitoring  get servicemonitor -l k8s-apps=external-node-exporter

[Screenshot: Kube-prometheus monitoring an external Linux host with node_exporter]

2.4 Verifying the Targets

Open the Prometheus UI and verify that the new target is up and being scraped.
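
If you prefer the command line, a rough check via the Prometheus HTTP API works as well (this assumes the default kube-prometheus Service name prometheus-k8s; adjust if yours differs):

# kubectl -n monitoring port-forward svc/prometheus-k8s 9090:9090 &
# curl -s http://localhost:9090/api/v1/targets | grep external-node-exporter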

[Screenshot: Kube-prometheus monitoring an external Linux host with node_exporter]

2.5 Defining the PrometheusRule

These rules are simply copied from the node-exporter rules that kube-prometheus already ships for in-cluster nodes.

PS: when reusing the rules, the job selector must be changed to the external node job name. Prometheus derives the job name from the ServiceMonitor's jobLabel field, which here resolves to the Service's app.kubernetes.io/name label value, i.e. external-node-exporter. The rules below therefore filter on job="external-node-exporter"; without this change they would never match the external targets.
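
One way to produce the file below is to copy the stock rule file from the kube-prometheus manifests and rewrite the job selector in bulk; the source path and the original job name node-exporter are assumptions based on a standard kube-prometheus checkout, so adjust them to your tree. The metadata (name, labels) and group names still need to be edited by hand, as in the file that follows.

# sed 's/job="node-exporter"/job="external-node-exporter"/g' manifests/nodeExporter-prometheusRule.yaml > external-nodeExporter-prometheusRule.yaml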

[root@k8s-master01 external-nodeExporter]# vim  external-nodeExporter-prometheusRule.yaml 
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app.kubernetes.io/component: exporter
    app.kubernetes.io/name: external-node-exporter
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 1.3.1
    prometheus: k8s
    role: alert-rules
  name: external-node-exporter-rules
  namespace: monitoring
spec:
  groups:
  - name: external-node-exporter
    rules:
    - alert: NodeFilesystemSpaceFillingUp
      annotations:
        description: Filesystem on {{ $labels.device }} at {{ $labels.instance }}
          has only {{ printf "%.2f" $value }}% available space left and is filling
          up.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/nodefilesystemspacefillingup
        summary: Filesystem is predicted to run out of space within the next 24 hours.
      expr: |
        (
          node_filesystem_avail_bytes{job="external-node-exporter",fstype!=""} / node_filesystem_size_bytes{job="external-node-exporter",fstype!=""} * 100 < 20
        and
          predict_linear(node_filesystem_avail_bytes{job="external-node-exporter",fstype!=""}[6h], 24*60*60) < 0
        and
          node_filesystem_readonly{job="external-node-exporter",fstype!=""} == 0
        )
      for: 1h
      labels:
        severity: warning
    - alert: NodeFilesystemSpaceFillingUp
      annotations:
        description: Filesystem on {{ $labels.device }} at {{ $labels.instance }}
          has only {{ printf "%.2f" $value }}% available space left and is filling
          up fast.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/nodefilesystemspacefillingup
        summary: Filesystem is predicted to run out of space within the next 4 hours.
      expr: |
        (
          node_filesystem_avail_bytes{job="external-node-exporter",fstype!=""} / node_filesystem_size_bytes{job="external-node-exporter",fstype!=""} * 100 < 15
        and
          predict_linear(node_filesystem_avail_bytes{job="external-node-exporter",fstype!=""}[6h], 4*60*60) < 0
        and
          node_filesystem_readonly{job="external-node-exporter",fstype!=""} == 0
        )
      for: 1h
      labels:
        severity: critical
    - alert: NodeFilesystemAlmostOutOfSpace
      annotations:
        description: Filesystem on {{ $labels.device }} at {{ $labels.instance }}
          has only {{ printf "%.2f" $value }}% available space left.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/nodefilesystemalmostoutofspace
        summary: Filesystem has less than 5% space left.
      expr: |
        (
          node_filesystem_avail_bytes{job="external-node-exporter",fstype!=""} / node_filesystem_size_bytes{job="external-node-exporter",fstype!=""} * 100 < 5
        and
          node_filesystem_readonly{job="external-node-exporter",fstype!=""} == 0
        )
      for: 30m
      labels:
        severity: warning
    - alert: NodeFilesystemAlmostOutOfSpace
      annotations:
        description: Filesystem on {{ $labels.device }} at {{ $labels.instance }}
          has only {{ printf "%.2f" $value }}% available space left.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/nodefilesystemalmostoutofspace
        summary: Filesystem has less than 3% space left.
      expr: |
        (
          node_filesystem_avail_bytes{job="external-node-exporter",fstype!=""} / node_filesystem_size_bytes{job="external-node-exporter",fstype!=""} * 100 < 3
        and
          node_filesystem_readonly{job="external-node-exporter",fstype!=""} == 0
        )
      for: 30m
      labels:
        severity: critical
    - alert: NodeFilesystemFilesFillingUp
      annotations:
        description: Filesystem on {{ $labels.device }} at {{ $labels.instance }}
          has only {{ printf "%.2f" $value }}% available inodes left and is filling
          up.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/nodefilesystemfilesfillingup
        summary: Filesystem is predicted to run out of inodes within the next 24 hours.
      expr: |
        (
          node_filesystem_files_free{job="external-node-exporter",fstype!=""} / node_filesystem_files{job="external-node-exporter",fstype!=""} * 100 < 40
        and
          predict_linear(node_filesystem_files_free{job="external-node-exporter",fstype!=""}[6h], 24*60*60) < 0
        and
          node_filesystem_readonly{job="external-node-exporter",fstype!=""} == 0
        )
      for: 1h
      labels:
        severity: warning
    - alert: NodeFilesystemFilesFillingUp
      annotations:
        description: Filesystem on {{ $labels.device }} at {{ $labels.instance }}
          has only {{ printf "%.2f" $value }}% available inodes left and is filling
          up fast.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/nodefilesystemfilesfillingup
        summary: Filesystem is predicted to run out of inodes within the next 4 hours.
      expr: |
        (
          node_filesystem_files_free{job="external-node-exporter",fstype!=""} / node_filesystem_files{job="external-node-exporter",fstype!=""} * 100 < 20
        and
          predict_linear(node_filesystem_files_free{job="external-node-exporter",fstype!=""}[6h], 4*60*60) < 0
        and
          node_filesystem_readonly{job="external-node-exporter",fstype!=""} == 0
        )
      for: 1h
      labels:
        severity: critical
    - alert: NodeFilesystemAlmostOutOfFiles
      annotations:
        description: Filesystem on {{ $labels.device }} at {{ $labels.instance }}
          has only {{ printf "%.2f" $value }}% available inodes left.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/nodefilesystemalmostoutoffiles
        summary: Filesystem has less than 5% inodes left.
      expr: |
        (
          node_filesystem_files_free{job="external-node-exporter",fstype!=""} / node_filesystem_files{job="external-node-exporter",fstype!=""} * 100 < 5
        and
          node_filesystem_readonly{job="external-node-exporter",fstype!=""} == 0
        )
      for: 1h
      labels:
        severity: warning
    - alert: NodeFilesystemAlmostOutOfFiles
      annotations:
        description: Filesystem on {{ $labels.device }} at {{ $labels.instance }}
          has only {{ printf "%.2f" $value }}% available inodes left.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/nodefilesystemalmostoutoffiles
        summary: Filesystem has less than 3% inodes left.
      expr: |
        (
          node_filesystem_files_free{job="external-node-exporter",fstype!=""} / node_filesystem_files{job="external-node-exporter",fstype!=""} * 100 < 3
        and
          node_filesystem_readonly{job="external-node-exporter",fstype!=""} == 0
        )
      for: 1h
      labels:
        severity: critical
    - alert: NodeNetworkReceiveErrs
      annotations:
        description: '{{ $labels.instance }} interface {{ $labels.device }} has encountered
          {{ printf "%.0f" $value }} receive errors in the last two minutes.'
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/nodenetworkreceiveerrs
        summary: Network interface is reporting many receive errors.
      expr: |
        rate(node_network_receive_errs_total[2m]) / rate(node_network_receive_packets_total[2m]) > 0.01
      for: 1h
      labels:
        severity: warning
    - alert: NodeNetworkTransmitErrs   # fires when a network interface reports transmit errors
      annotations:
        description: '{{ $labels.instance }} interface {{ $labels.device }} has encountered
          {{ printf "%.0f" $value }} transmit errors in the last two minutes.'
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/nodenetworktransmiterrs
        summary: Network interface is reporting many transmit errors.
      expr: |
        rate(node_network_transmit_errs_total[2m]) / rate(node_network_transmit_packets_total[2m]) > 0.01
      for: 5m
      labels:
        severity: warning
    - alert: NodeHighNumberConntrackEntriesUsed  # monitors conntrack table usage on the node
      annotations:
        description: '{{ $value | humanizePercentage }} of conntrack entries are used.'
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/nodehighnumberconntrackentriesused
        summary: Number of conntrack are getting close to the limit.
      expr: |
        (node_nf_conntrack_entries / node_nf_conntrack_entries_limit) > 0.75
      labels:
        severity: warning
    - alert: NodeTextFileCollectorScrapeError
      annotations:
        description: Node Exporter text file collector failed to scrape.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/nodetextfilecollectorscrapeerror
        summary: Node Exporter text file collector failed to scrape.
      expr: |
        node_textfile_scrape_error{job="external-node-exporter"} == 1
      labels:
        severity: warning
    - alert: NodeClockSkewDetected
      annotations:
        description: Clock on {{ $labels.instance }} is out of sync by more than 300s.
          Ensure NTP is configured correctly on this host.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/nodeclockskewdetected
        summary: Clock skew detected.
      expr: |
        (
          node_timex_offset_seconds > 0.05
        and
          deriv(node_timex_offset_seconds[5m]) >= 0
        )
        or
        (
          node_timex_offset_seconds < -0.05
        and
          deriv(node_timex_offset_seconds[5m]) <= 0
        )
      for: 10m
      labels:
        severity: warning
    - alert: NodeClockNotSynchronising
      annotations:
        description: Clock on {{ $labels.instance }} is not synchronising. Ensure
          NTP is configured on this host.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/nodeclocknotsynchronising
        summary: Clock not synchronising.
      expr: |
        min_over_time(node_timex_sync_status[5m]) == 0
        and
        node_timex_maxerror_seconds >= 16
      for: 10m
      labels:
        severity: warning
    - alert: NodeRAIDDegraded
      annotations:
        description: RAID array '{{ $labels.device }}' on {{ $labels.instance }} is
          in degraded state due to one or more disks failures. Number of spare drives
          is insufficient to fix issue automatically.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/noderaiddegraded
        summary: RAID Array is degraded
      expr: |
        node_md_disks_required - ignoring (state) (node_md_disks{state="active"}) > 0
      for: 15m
      labels:
        severity: critical
    - alert: NodeRAIDDiskFailure
      annotations:
        description: At least one device in RAID array on {{ $labels.instance }} failed.
          Array '{{ $labels.device }}' needs attention and possibly a disk swap.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/noderaiddiskfailure
        summary: Failed device in RAID array
      expr: |
        node_md_disks{state="failed"} > 0
      labels:
        severity: warning
    - alert: NodeFileDescriptorLimit
      annotations:
        description: File descriptors limit at {{ $labels.instance }} is currently
          at {{ printf "%.2f" $value }}%.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/nodefiledescriptorlimit
        summary: Kernel is predicted to exhaust file descriptors limit soon.
      expr: |
        (
          node_filefd_allocated{job="external-node-exporter"} * 100 / node_filefd_maximum{job="external-node-exporter"} > 70
        )
      for: 15m
      labels:
        severity: warning
    - alert: NodeFileDescriptorLimit  # critical variant: allocated file descriptors exceed 90% of the kernel limit
      annotations:
        description: File descriptors limit at {{ $labels.instance }} is currently
          at {{ printf "%.2f" $value }}%.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/nodefiledescriptorlimit
        summary: Kernel is predicted to exhaust file descriptors limit soon.
      expr: |
        (
          node_filefd_allocated{job="external-node-exporter"} * 100 / node_filefd_maximum{job="external-node-exporter"} > 90
        )
      for: 5m # must hold for 5 minutes before the alert fires
      labels:
        severity: critical
  - name: external-node-exporter.rules  # recording rules for the external nodes
    rules:
    - expr: |
        count without (cpu, mode) (
          node_cpu_seconds_total{job="external-node-exporter",mode="idle"}
        )
      record: instance:node_num_cpu:sum
    - expr: |
        1 - avg without (cpu) (
          sum without (mode) (rate(node_cpu_seconds_total{job="external-node-exporter", mode=~"idle|iowait|steal"}[5m]))
        )
      record: instance:node_cpu_utilisation:rate5m
    - expr: |
        (
          node_load1{job="external-node-exporter"}
        /
          instance:node_num_cpu:sum{job="external-node-exporter"}
        )
      record: instance:node_load1_per_cpu:ratio
    - expr: |
        1 - (
          (
            node_memory_MemAvailable_bytes{job="external-node-exporter"}
            or
            (
              node_memory_Buffers_bytes{job="external-node-exporter"}
              +
              node_memory_Cached_bytes{job="external-node-exporter"}
              +
              node_memory_MemFree_bytes{job="external-node-exporter"}
              +
              node_memory_Slab_bytes{job="external-node-exporter"}
            )
          )
        /
          node_memory_MemTotal_bytes{job="external-node-exporter"}
        )
      record: instance:node_memory_utilisation:ratio
    - expr: |
        rate(node_vmstat_pgmajfault{job="external-node-exporter"}[5m])
      record: instance:node_vmstat_pgmajfault:rate5m
    - expr: |
        rate(node_disk_io_time_seconds_total{job="external-node-exporter", device=~"mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+"}[5m])
      record: instance_device:node_disk_io_time_seconds:rate5m
    - expr: |
        rate(node_disk_io_time_weighted_seconds_total{job="external-node-exporter", device=~"mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+"}[5m])
      record: instance_device:node_disk_io_time_weighted_seconds:rate5m
    - expr: |
        sum without (device) (
          rate(node_network_receive_bytes_total{job="external-node-exporter", device!="lo"}[5m])
        )
      record: instance:node_network_receive_bytes_excluding_lo:rate5m
    - expr: |
        sum without (device) (
          rate(node_network_transmit_bytes_total{job="external-node-exporter", device!="lo"}[5m])
        )
      record: instance:node_network_transmit_bytes_excluding_lo:rate5m
    - expr: |
        sum without (device) (
          rate(node_network_receive_drop_total{job="external-node-exporter", device!="lo"}[5m])
        )
      record: instance:node_network_receive_drop_excluding_lo:rate5m
    - expr: |
        sum without (device) (
          rate(node_network_transmit_drop_total{job="external-node-exporter", device!="lo"}[5m])
        )
      record: instance:node_network_transmit_drop_excluding_lo:rate5m
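
Apply the rule file and confirm the operator has picked it up (the file name matches the one opened above):

# kubectl apply -f external-nodeExporter-prometheusRule.yaml
# kubectl -n monitoring get prometheusrule external-node-exporter-rules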

2.6 Grafana Dashboard

The dashboard used here is the standard community Node dashboard, ID 1860, which covers node metrics fairly comprehensively; use whichever panels you need. The import procedure is not repeated here.

[Screenshot: Kube-prometheus monitoring an external Linux host with node_exporter]

3. Troubleshooting

I ran into one blocking problem during the rollout. Prometheus alerting is simple in principle: when a target becomes unhealthy, the corresponding alert rules fire, and the resulting alerts can be viewed in the Alertmanager UI, which stays in sync with Prometheus. To simulate this, I added a new node target and stopped its node_exporter to trigger the alert, then started node_exporter again. At that point the Prometheus UI showed the target back to normal and the alert rule no longer firing, yet the Alertmanager UI still showed the alert as active, and notifications kept arriving intermittently.

The monitored target is healthy:

[Screenshot: Kube-prometheus monitoring an external Linux host with node_exporter]

The TargetDown alert rule is not firing, as shown below:

[Screenshot: Kube-prometheus monitoring an external Linux host with node_exporter]

Yet looking back at the Alertmanager UI:

[Screenshot: Kube-prometheus monitoring an external Linux host with node_exporter]

Notifications are still being delivered. Why do alerts keep arriving when the monitored target has already recovered?

[Screenshot: Kube-prometheus monitoring an external Linux host with node_exporter]

After turning it over for a while, the explanation I settled on is network latency. The target is healthy, yet intermittent alert notifications keep arriving. Given my environment, the most likely cause is latency between the Kubernetes cluster and the external node's metrics endpoint; the external host also sits on a different subnet from the cluster, which can contribute to the problem. To test this, I switched the target to a node on the same subnet as the cluster, and as expected the issue disappeared.
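
One rough way to probe the latency hypothesis is to look at how long scrapes of the external job take (again assuming the default prometheus-k8s Service name for the port-forward):

# kubectl -n monitoring port-forward svc/prometheus-k8s 9090:9090 &
# curl -sG http://localhost:9090/api/v1/query --data-urlencode 'query=scrape_duration_seconds{job="external-node-exporter"}'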