1 ZooKeeper monitoring options

Monitoring ZooKeeper via JMX exposes only a limited set of metrics, and the standalone Zookeeper Exporter is not much richer. Starting with version 3.6.0, ZooKeeper ships with a built-in metrics provider that, combined with Prometheus and Grafana, can produce rich monitoring dashboards. The rest of this article walks through this built-in ZooKeeper Monitor approach.

1.1 Install and configure ZooKeeper

1.1.1 Install ZooKeeper

tar -xf apache-zookeeper-3.6.3-bin.tar.gz -C /app/module/
mv /app/module/apache-zookeeper-3.6.3-bin/ /app/module/zookeeper
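
ZooKeeper 3.6.x needs a Java runtime (Java 8 or newer) on the host, so it is worth a quick sanity check before going further:

java -version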

1.1.2 Configuration file

cd /app/module/zookeeper/conf/
cp zoo_sample.cfg zoo.cfg

vim zoo.cfg
# Prometheus metrics provider; comment these lines out if you use a different monitoring solution
metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
metricsProvider.httpHost=0.0.0.0
# Port for the Prometheus metrics endpoint; as above, comment out if you use a different monitoring solution
metricsProvider.httpPort=7000
# If set to true, metrics about the JVM are exported as well; the default is true
metricsProvider.exportJvmInfo=true

These settings are already present (commented out) in the default configuration file; simply uncomment them.

(Screenshot: the uncommented metricsProvider settings in zoo.cfg)
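
For reference, a minimal zoo.cfg for a standalone node with the metrics provider enabled might look like the sketch below; dataDir is an assumption here, so adjust it to your own layout:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/app/data/zookeeper
clientPort=2181
metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
metricsProvider.httpHost=0.0.0.0
metricsProvider.httpPort=7000
metricsProvider.exportJvmInfo=true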

1.1.3 Start ZooKeeper

/app/module/zookeeper/bin/zkServer.sh start
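
To confirm the node came up cleanly, check its status; a single node should report Mode: standalone (leader or follower in an ensemble):

/app/module/zookeeper/bin/zkServer.sh status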

1.1.4 Check the metrics exposed by the metrics endpoint

(Screenshot: sample output from the /metrics endpoint on port 7000)
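
The same check can be done from the command line; the host IP below is the one used for the Prometheus scrape target later, so adjust it to your environment:

curl -s http://192.168.137.131:7000/metrics | head -n 20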

1.1.5 Test with sample data

/app/module/zookeeper/bin/zkCli.sh
create /sanguo "diaochan"
create /sanguo/shuguo "liubei"
get -s /sanguo
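
After these znodes are created, the change should also be visible in the metrics, for example in znode_count (which the alert rules below use as well):

curl -s http://192.168.137.131:7000/metrics | grep znode_count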

1.2 Configure Prometheus

1. Edit the Prometheus configuration file and add the ZooKeeper service as a scrape target (append the job below under scrape_configs):
  - job_name: "zookeeper_monitor"
    metrics_path: "/metrics"
    static_configs:
    - targets: ["192.168.137.131:7000"]

2. Reload the Prometheus configuration (the /-/reload endpoint requires Prometheus to be started with --web.enable-lifecycle):
curl -X POST http://192.168.137.131:9090/-/reload
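
After the reload, the new target should show as healthy and the up series should have the value 1. This can be checked in the web UI under Status -> Targets, or via the HTTP API:

curl -s http://192.168.137.131:9090/api/v1/query --data-urlencode 'query=up{job="zookeeper_monitor"}'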

1.3 ZooKeeper alerting rules

1.3.1 Alerting rules file

vim /app/module/prometheus/rules/zookeeper_rules.yml
groups:
- name: zookeeper alerting rules
  rules:
  - alert: ZooKeeper server is down
    expr:  sum(up{job="zookeeper_monitor"}) by(instance,job) == 0
    for: 1m
    labels:
      severity: critical
    annotations:
      summary: "Instance {{ $labels.instance }} ZooKeeper server is down"
      description: "{{ $labels.instance }} of job {{$labels.job}} ZooKeeper server is down: [{{ $value }}]."
  - alert: create too many znodes
    expr: znode_count > 1000000
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "Instance {{ $labels.instance }} create too many znodes"
      description: "{{ $labels.instance }} of job {{$labels.job}} create too many znodes: [{{ $value }}]."
  - alert: create too many connections
    expr: num_alive_connections > 50 # suppose we use the default maxClientCnxns: 60
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "Instance {{ $labels.instance }} create too many connections"
      description: "{{ $labels.instance }} of job {{$labels.job}} create too many connections: [{{ $value }}]."
  - alert: znode total occupied memory is too big
    expr: approximate_data_size /1024 /1024 > 1 * 1024 # more than 1024 MB(1 GB)
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "Instance {{ $labels.instance }} znode total occupied memory is too big"
      description: "{{ $labels.instance }} of job {{$labels.job}} znode total occupied memory is too big: [{{ $value }}] MB."
  - alert: set too many watch
    expr: watch_count > 10000
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "Instance {{ $labels.instance }} set too many watch"
      description: "{{ $labels.instance }} of job {{$labels.job}} set too many watch: [{{ $value }}]."
  - alert: a leader election happens
    expr: increase(election_time_count[5m]) > 0
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "Instance {{ $labels.instance }} a leader election happens"
      description: "{{ $labels.instance }} of job {{$labels.job}} a leader election happens: [{{ $value }}]."
  - alert: open too many files
    expr: open_file_descriptor_count > 300
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "Instance {{ $labels.instance }} open too many files"
      description: "{{ $labels.instance }} of job {{$labels.job}} open too many files: [{{ $value }}]."
  - alert: fsync time is too long
    expr: rate(fsynctime_sum[1m]) > 100
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "Instance {{ $labels.instance }} fsync time is too long"
      description: "{{ $labels.instance }} of job {{$labels.job}} fsync time is too long: [{{ $value }}]."
  - alert: take snapshot time is too long
    expr: rate(snapshottime_sum[5m]) > 100
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "Instance {{ $labels.instance }} take snapshot time is too long"
      description: "{{ $labels.instance }} of job {{$labels.job}} take snapshot time is too long: [{{ $value }}]."
  - alert: avg latency is too high
    expr: avg_latency > 100
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "Instance {{ $labels.instance }} avg latency is too high"
      description: "{{ $labels.instance }} of job {{$labels.job}} avg latency is too high: [{{ $value }}]."
  - alert: JvmMemoryFillingUp
    expr: jvm_memory_bytes_used / jvm_memory_bytes_max{area="heap"} > 0.8
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "JVM memory filling up (instance {{ $labels.instance }})"
      description: "JVM memory is filling up (> 80%)\n labels: {{ $labels }}  value = {{ $value }}."

1.3.2 Check the rules syntax

/app/module/prometheus/promtool check rules /app/module/prometheus/rules/zookeeper_rules.yml
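
For Prometheus to actually load these alerts, the rules directory must also be referenced from prometheus.yml; a minimal sketch, assuming the directory layout used above:

rule_files:
  - "/app/module/prometheus/rules/*.yml"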

1.3.3 Reload Prometheus

curl -X POST http://192.168.137.131:9090/-/reload

1.3.4 Verify the alert rules

(Screenshot: the ZooKeeper alert rules listed in the Prometheus web UI)
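
The loaded rules can be inspected in the Prometheus web UI under Status -> Rules, or via the HTTP API:

curl -s http://192.168.137.131:9090/api/v1/rules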

1.4 Import a ZooKeeper Grafana dashboard

In Grafana, import dashboard ID 10465 and select the Prometheus data source when prompted.

(Screenshot: the imported ZooKeeper dashboard in Grafana)