Original source: https://awesome-prometheus-alerts.grep.to/rules
# 1.1. Prometheus self-monitoring (26 rules)
# 1.1.1. Prometheus job missing
- A Prometheus job has disappeared
# 1.1.2. Prometheus target missing
- A Prometheus target has disappeared. An exporter might have crashed.
# 1.1.3. Prometheus all targets missing
- A Prometheus job no longer has any living targets.
# 1.1.4. Prometheus configuration reload failure
- Prometheus configuration reload error
# 1.1.5. Prometheus too many restarts
- Prometheus has restarted more than twice in the last 15 minutes. It might be crashlooping.
# 1.1.6. Prometheus AlertManager job missing
- A Prometheus AlertManager job has disappeared
# 1.1.7. Prometheus AlertManager configuration reload failure
- AlertManager configuration reload error
# 1.1.8. Prometheus AlertManager config not synced
- Configurations of AlertManager cluster instances are out of sync
# 1.1.9. Prometheus AlertManager E2E dead man switch
- Prometheus DeadManSwitch is an always-firing alert. It's used as an end-to-end test of Prometheus through the Alertmanager.
# 1.1.10. Prometheus not connected to alertmanager
- Prometheus cannot connect to the alertmanager
# 1.1.11. Prometheus rule evaluation failures
- Prometheus encountered {{ $value }} rule evaluation failures, leading to potentially ignored alerts.
# 1.1.12. Prometheus template text expansion failures
- Prometheus encountered {{ $value }} template text expansion failures
# 1.1.13. Prometheus rule evaluation slow
- Prometheus rule evaluation took more time than the scheduled interval. This indicates slower storage backend access or too complex a query.
# 1.1.14. Prometheus notifications backlog
- The Prometheus notification queue has not been empty for 10 minutes
# 1.1.15. Prometheus AlertManager notification failing
- Alertmanager is failing to send notifications
# 1.1.16. Prometheus target empty
- Prometheus has no target in service discovery
# 1.1.17. Prometheus target scraping slow
- Prometheus is scraping exporters slowly since scrapes exceed the requested interval time. Your Prometheus server is under-provisioned.
# 1.1.18. Prometheus large scrape
- Prometheus has many scrapes that exceed the sample limit
# 1.1.19. Prometheus target scrape duplicate
- Prometheus has many samples rejected due to duplicate timestamps but different values
# 1.1.20. Prometheus TSDB checkpoint creation failures
- Prometheus encountered {{ $value }} checkpoint creation failures
# 1.1.21. Prometheus TSDB checkpoint deletion failures
- Prometheus encountered {{ $value }} checkpoint deletion failures
# 1.1.22. Prometheus TSDB compactions failed
- Prometheus encountered {{ $value }} TSDB compaction failures
# 1.1.23. Prometheus TSDB head truncations failed
- Prometheus encountered {{ $value }} TSDB head truncation failures
# 1.1.24. Prometheus TSDB reload failures
- Prometheus encountered {{ $value }} TSDB reload failures
# 1.1.25. Prometheus TSDB WAL corruptions
- Prometheus encountered {{ $value }} TSDB WAL corruptions
# 1.1.26. Prometheus TSDB WAL truncations failed
- Prometheus encountered {{ $value }} TSDB WAL truncation failures
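For reference, here is a minimal sketch of how one rule from this family is usually written as a Prometheus rule file; the expression, timings and severity are illustrative assumptions, not the exact rule published on the upstream site.

```yaml
groups:
  - name: prometheus-self-monitoring
    rules:
      # "up" is set by Prometheus itself for every configured scrape target.
      - alert: PrometheusTargetMissing
        expr: up == 0
        for: 0m
        labels:
          severity: critical
        annotations:
          summary: "Prometheus target missing (instance {{ $labels.instance }})"
          description: "A Prometheus target has disappeared. An exporter might have crashed."
```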
# 1.2. Host and hardware : node-exporter (34 rules)
# 1.2.1. Host out of memory
- Node memory is filling up (< 10% left)
# 1.2.2. Host memory under memory pressure
- The node is under heavy memory pressure. High rate of major page faults.
# 1.2.3. Host unusual network throughput in
- Host network interfaces are probably receiving too much data (> 100 MB/s)
# 1.2.4. Host unusual network throughput out
- Host network interfaces are probably sending too much data (> 100 MB/s)
# 1.2.5. Host unusual disk read rate
- Disk is probably reading too much data (> 50 MB/s)
# 1.2.6. Host unusual disk write rate
- Disk is probably writing too much data (> 50 MB/s)
# 1.2.7. Host out of disk space
- Disk is almost full (< 10% left)
# 1.2.8. Host disk will fill in 24 hours
- Filesystem is predicted to run out of space within the next 24 hours at current write rate
# 1.2.9. Host out of inodes
- Disk is almost running out of available inodes (< 10% left)
# 1.2.10. Host inodes will fill in 24 hours
- Filesystem is predicted to run out of inodes within the next 24 hours at current write rate
# 1.2.11. Host unusual disk read latency
- Disk latency is growing (read operations > 100ms)
# 1.2.12. Host unusual disk write latency
- Disk latency is growing (write operations > 100ms)
# 1.2.13. Host high CPU load
- CPU load is > 80%
# 1.2.14. Host CPU steal noisy neighbor
- CPU steal is > 10%. A noisy neighbor is killing VM performance, or a spot instance may be out of credit.
# 1.2.15. Host CPU high iowait
- CPU iowait > 5%. A high iowait means that you are disk or network bound.
# 1.2.16. Host context switching
- Context switching is growing on the node (> 1000 / s)
# 1.2.17. Host swap is filling up
- Swap is filling up (> 80%)
# 1.2.18. Host systemd service crashed
- systemd service crashed
# 1.2.19. Host physical component too hot
- Physical hardware component too hot
# 1.2.20. Host node overtemperature alarm
- Physical node temperature alarm triggered
# 1.2.21. Host RAID array got inactive
- RAID array {{ $labels.device }} is in a degraded state due to one or more disk failures. The number of spare drives is insufficient to fix the issue automatically.
# 1.2.22. Host RAID disk failure
- At least one device in the RAID array on {{ $labels.instance }} has failed. Array {{ $labels.md_device }} needs attention and possibly a disk swap
# 1.2.23. Host kernel version deviations
- Different kernel versions are running
# 1.2.24. Host OOM kill detected
- OOM kill detected
# 1.2.25. Host EDAC Correctable Errors detected
- Host {{ $labels.instance }} has had {{ printf "%.0f" $value }} correctable memory errors reported by EDAC in the last 5 minutes.
# 1.2.26. Host EDAC Uncorrectable Errors detected
- Host {{ $labels.instance }} has had {{ printf "%.0f" $value }} uncorrectable memory errors reported by EDAC in the last 5 minutes.
# 1.2.27. Host Network Receive Errors
- Host {{ $labels.instance }} interface {{ $labels.device }} has encountered {{ printf "%.0f" $value }} receive errors in the last two minutes.
# 1.2.28. Host Network Transmit Errors
- Host {{ $labels.instance }} interface {{ $labels.device }} has encountered {{ printf "%.0f" $value }} transmit errors in the last two minutes.
# 1.2.29. Host Network Interface Saturated
- The network interface "{{ $labels.device }}" on "{{ $labels.instance }}" is getting overloaded.
# 1.2.30. Host Network Bond Degraded
- Bond "{{ $labels.device }}" degraded on "{{ $labels.instance }}".
# 1.2.31. Host conntrack limit
- The number of conntrack entries is approaching the limit
# 1.2.32. Host clock skew
- Clock skew detected. Clock is out of sync. Ensure NTP is configured correctly on this host.
# 1.2.33. Host clock not synchronising
- Clock not synchronising. Ensure NTP is configured on this host.
# 1.2.34. Host requires reboot
- {{ $labels.instance }} requires a reboot.
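A hedged sketch of the first rule in this group (Host out of memory), written against the standard node-exporter memory metrics; the 2-minute hold time and warning severity are assumptions.

```yaml
groups:
  - name: node-exporter
    rules:
      # Fires when less than 10% of RAM is still available on the node.
      - alert: HostOutOfMemory
        expr: node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes * 100 < 10
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "Host out of memory (instance {{ $labels.instance }})"
```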
# 1.3. Docker containers : google/cAdvisor (6 rules)
# 1.3.1. Container killed
- A container has disappeared
# 1.3.2. Container absent
- A container is absent for 5 min
# 1.3.3. Container CPU usage
- Container CPU usage is above 80%
# 1.3.4. Container Memory usage
- Container Memory usage is above 80%
# 1.3.5. Container Volume usage
- Container Volume usage is above 80%
# 1.3.6. Container high throttle rate
- Container is being throttled
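As an illustration of the memory rule above, a sketch against the usual cAdvisor metrics; the `name` grouping label and the 80% threshold are assumptions that may need adapting to your container runtime.

```yaml
groups:
  - name: cadvisor
    rules:
      # Working-set memory vs. the container's configured limit (containers
      # without a limit are filtered out by the "> 0" clause).
      - alert: ContainerMemoryUsage
        expr: |
          sum(container_memory_working_set_bytes{name!=""}) by (name)
            / sum(container_spec_memory_limit_bytes{name!=""} > 0) by (name) * 100 > 80
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "Container memory usage above 80% ({{ $labels.name }})"
```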
# 1.4. Blackbox : prometheus/blackbox_exporter (9 rules)
# 1.4.1. Blackbox probe failed
- Probe failed
# 1.4.2. Blackbox configuration reload failure
- Blackbox configuration reload failure
# 1.4.3. Blackbox slow probe
- Blackbox probe took more than 1s to complete
# 1.4.4. Blackbox probe HTTP failure
- HTTP status code is not 200-399
# 1.4.5. Blackbox SSL certificate will expire soon
- SSL certificate expires in 30 days
# 1.4.6. Blackbox SSL certificate will expire soon
- SSL certificate expires in 3 days
# 1.4.7. Blackbox SSL certificate expired
- SSL certificate has expired already
# 1.4.8. Blackbox probe slow HTTP
- HTTP request took more than 1s
# 1.4.9. Blackbox probe slow ping
- Blackbox ping took more than 1s
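Two of the rules above sketched against blackbox_exporter's `probe_success` and `probe_ssl_earliest_cert_expiry` metrics; thresholds and severities are illustrative, not the exact upstream values.

```yaml
groups:
  - name: blackbox-exporter
    rules:
      # probe_success is exported by every blackbox module.
      - alert: BlackboxProbeFailed
        expr: probe_success == 0
        for: 0m
        labels:
          severity: critical
        annotations:
          summary: "Blackbox probe failed (instance {{ $labels.instance }})"
      # Certificate expiring within 30 days.
      - alert: BlackboxSslCertificateWillExpireSoon
        expr: probe_ssl_earliest_cert_expiry - time() < 86400 * 30
        for: 0m
        labels:
          severity: warning
        annotations:
          summary: "SSL certificate expires in less than 30 days (instance {{ $labels.instance }})"
```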
# 1.5. Windows Server : prometheus-community/windows_exporter (5 rules)
# 1.5.1. Windows Server collector Error
- Collector {{ $labels.collector }} was not successful
# 1.5.2. Windows Server service Status
- Windows Service state is not OK
# 1.5.3. Windows Server CPU Usage
- CPU Usage is more than 80%
# 1.5.4. Windows Server memory Usage
- Memory usage is more than 90%
# 1.5.5. Windows Server disk Space Usage
- Disk usage is more than 80%
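A sketch of the CPU usage rule; it assumes a recent windows_exporter where the metric is named `windows_cpu_time_total` (older releases used the `wmi_` prefix).

```yaml
groups:
  - name: windows-exporter
    rules:
      # CPU usage = 100% minus time spent in "idle" mode, averaged per instance.
      - alert: WindowsServerCpuUsage
        expr: 100 - (avg by (instance) (rate(windows_cpu_time_total{mode="idle"}[2m])) * 100) > 80
        for: 0m
        labels:
          severity: warning
        annotations:
          summary: "Windows Server CPU usage above 80% (instance {{ $labels.instance }})"
```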
# 1.6. VMware : pryorda/vmware_exporter (4 rules)
# 1.6.1. Virtual Machine Memory Warning
- High memory usage on {{ $labels.instance }}: {{ $value | printf "%.2f"}}%
# 1.6.2. Virtual Machine Memory Critical
- High memory usage on {{ $labels.instance }}: {{ $value | printf "%.2f"}}%
# 1.6.3. High Number of Snapshots
- High number of snapshots on {{ $labels.instance }}: {{ $value }}
# 1.6.4. Outdated Snapshots
- Outdated snapshots on {{ $labels.instance }}: {{ $value | printf "%.0f"}} days
# 1.7. Netdata : Embedded exporter (9 rules)
# 1.7.1. Netdata high cpu usage
- Netdata high CPU usage (> 80%)
# 1.7.2. Host CPU steal noisy neighbor
- CPU steal is > 10%. A noisy neighbor is killing VM performance, or a spot instance may be out of credit.
# 1.7.3. Netdata high memory usage
- Netdata high memory usage (> 80%)
# 1.7.4. Netdata low disk space
- Netdata low disk space (> 80% used)
# 1.7.5. Netdata predicted disk full
- Netdata predicted disk full in 24 hours
# 1.7.6. Netdata MD mismatch cnt unsynchronized blocks
- RAID array has unsynchronized blocks
# 1.7.7. Netdata disk reallocated sectors
- Reallocated sectors on disk
# 1.7.8. Netdata disk current pending sector
- Disk current pending sector
# 1.7.9. Netdata reported uncorrectable disk sectors
- Reported uncorrectable disk sectors
# 2.1. MySQL : prometheus/mysqld_exporter (9 rules)
# 2.1.1. MySQL down
- MySQL instance is down on {{ $labels.instance }}
# 2.1.2. MySQL too many connections (> 80%)
- More than 80% of MySQL connections are in use on {{ $labels.instance }}
# 2.1.3. MySQL high threads running
- More than 60% of MySQL connections are in running state on {{ $labels.instance }}
# 2.1.4. MySQL Slave IO thread not running
- MySQL Slave IO thread not running on {{ $labels.instance }}
# 2.1.5. MySQL Slave SQL thread not running
- MySQL Slave SQL thread not running on {{ $labels.instance }}
# 2.1.6. MySQL Slave replication lag
- MySQL replication lag on {{ $labels.instance }}
# 2.1.7. MySQL slow queries
- The MySQL server has new slow queries.
# 2.1.8. MySQL InnoDB log waits
- MySQL InnoDB log writes are stalling
# 2.1.9. MySQL restarted
- MySQL has just been restarted, less than one minute ago on {{ $labels.instance }}.
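A sketch of the "too many connections" rule using the status and variable metrics exposed by mysqld_exporter; the 2-minute hold is an assumption.

```yaml
groups:
  - name: mysqld-exporter
    rules:
      # Open connections vs. the server's max_connections setting.
      - alert: MysqlTooManyConnections
        expr: |
          avg by (instance) (mysql_global_status_threads_connected)
            / avg by (instance) (mysql_global_variables_max_connections) * 100 > 80
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "MySQL too many connections (> 80%) on {{ $labels.instance }}"
```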
# 2.2. PostgreSQL : prometheus-community/postgres_exporter (23 rules)
# 2.2.1. Postgresql down
- Postgresql instance is down
# 2.2.2. Postgresql restarted
- Postgresql restarted
# 2.2.3. Postgresql exporter error
- Postgresql exporter is showing errors. A query may be buggy in query.yaml
# 2.2.4. Postgresql replication lag
- PostgreSQL replication lag is going up (> 30s)
# 2.2.5. Postgresql table not auto vacuumed
- Table {{ $labels.relname }} has not been auto vacuumed for 10 days
# 2.2.6. Postgresql table not auto analyzed
- Table {{ $labels.relname }} has not been auto analyzed for 10 days
# 2.2.7. Postgresql too many connections
- PostgreSQL instance has too many connections (> 80%).
# 2.2.8. Postgresql not enough connections
- PostgreSQL instance should have more connections (> 5)
# 2.2.9. Postgresql dead locks
- PostgreSQL has dead-locks
# 2.2.10. Postgresql high rollback rate
- Ratio of transactions being aborted compared to committed is > 2%
# 2.2.11. Postgresql commit rate low
- Postgresql seems to be processing very few transactions
# 2.2.12. Postgresql low XID consumption
- Postgresql seems to be consuming transaction IDs very slowly
# 2.2.13. Postgresql high rate statement timeout
- Postgres transactions showing high rate of statement timeouts
# 2.2.14. Postgresql high rate deadlock
- Postgres detected deadlocks
# 2.2.15. Postgresql unused replication slot
- Unused replication slots
# 2.2.16. Postgresql too many dead tuples
- The number of PostgreSQL dead tuples is too large
# 2.2.17. Postgresql split brain
- Split brain: too many primary PostgreSQL databases in read-write mode
# 2.2.18. Postgresql promoted node
- Postgresql standby server has been promoted as primary node
# 2.2.19. Postgresql configuration changed
- Postgres Database configuration change has occurred
# 2.2.20. Postgresql SSL compression active
- Database connections with SSL compression enabled. This may add significant jitter in replication delay. Replicas should turn off SSL compression via `sslcompression=0` in `recovery.conf`.
# 2.2.21. Postgresql too many locks acquired
- Too many locks acquired on the database. If this alert happens frequently, we may need to increase the postgres setting max_locks_per_transaction.
# 2.2.22. Postgresql bloat index high (> 80%)
- The index {{ $labels.idxname }} is bloated. You should execute `REINDEX INDEX CONCURRENTLY {{ $labels.idxname }};`
# 2.2.23. Postgresql bloat table high (> 80%)
- The table {{ $labels.relname }} is bloated. You should execute `VACUUM {{ $labels.relname }};`
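A minimal sketch of the first rule in this group, using the `pg_up` gauge that postgres_exporter sets when it can reach the database; the zero-minute hold and severity are illustrative.

```yaml
groups:
  - name: postgres-exporter
    rules:
      # pg_up is 0 when the exporter cannot reach the database.
      - alert: PostgresqlDown
        expr: pg_up == 0
        for: 0m
        labels:
          severity: critical
        annotations:
          summary: "Postgresql down (instance {{ $labels.instance }})"
```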
# 2.3. SQL Server : Ozarklake/prometheus-mssql-exporter (2 rules)
# 2.3.1. SQL Server down
- SQL Server instance is down
# 2.3.2. SQL Server deadlock
- SQL Server is encountering deadlocks.
# 2.4. PGBouncer : spreaker/prometheus-pgbouncer-exporter (3 rules)
# 2.4.1. PGBouncer active connections
- PGBouncer pools are filling up
# 2.4.2. PGBouncer errors
- PGBouncer is logging errors. This may be due to a server restart or an admin typing commands at the pgbouncer console.
# 2.4.3. PGBouncer max connections
- The number of PGBouncer client connections has reached max_client_conn.
# 2.5. Redis : oliver006/redis_exporter (12 rules)
# 2.5.1. Redis down
- Redis instance is down
# 2.5.2. Redis missing master
- Redis cluster has no node marked as master.
# 2.5.3. Redis too many masters
- Redis cluster has too many nodes marked as master.
# 2.5.4. Redis disconnected slaves
- Redis is not replicating to all slaves. Consider reviewing the Redis replication status.
# 2.5.5. Redis replication broken
- Redis instance lost a slave
# 2.5.6. Redis cluster flapping
- Changes have been detected in the Redis replica connection. This can occur when replica nodes lose connection to the master and reconnect (a.k.a flapping).
# 2.5.7. Redis missing backup
- Redis has not been backed up for 24 hours
# 2.5.8. Redis out of system memory
- Redis is running out of system memory (> 90%)
# 2.5.9. Redis out of configured maxmemory
- Redis is running out of configured maxmemory (> 90%)
# 2.5.10. Redis too many connections
- Redis instance has too many connections
# 2.5.11. Redis not enough connections
- Redis instance should have more connections (> 5)
# 2.5.12. Redis rejected connections
- Some connections to Redis have been rejected
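A sketch of the "out of configured maxmemory" rule using redis_exporter's memory gauges; it assumes a `maxmemory` is actually configured (otherwise `redis_memory_max_bytes` is 0 and the ratio is undefined).

```yaml
groups:
  - name: redis-exporter
    rules:
      # Used memory vs. configured maxmemory; assumes redis_memory_max_bytes > 0.
      - alert: RedisOutOfConfiguredMaxmemory
        expr: redis_memory_used_bytes / redis_memory_max_bytes * 100 > 90
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "Redis out of configured maxmemory (instance {{ $labels.instance }})"
```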
# 2.6.1. MongoDB : percona/mongodb_exporter (8 rules)
# 2.6.1.1. MongoDB Down
- MongoDB instance is down
# 2.6.1.2. Mongodb replica member unhealthy
- MongoDB replica member is not healthy
# 2.6.1.3. MongoDB replication lag
- Mongodb replication lag is more than 10s
# 2.6.1.4. MongoDB replication headroom
- MongoDB replication headroom is <= 0
# 2.6.1.5. MongoDB number cursors open
- Too many cursors opened by MongoDB for clients (> 10k)
# 2.6.1.6. MongoDB cursors timeouts
- Too many cursors are timing out
# 2.6.1.7. MongoDB too many connections
- Too many connections (> 80%)
# 2.6.1.8. MongoDB virtual memory usage
- High memory usage
# 2.6.2. MongoDB : dcu/mongodb_exporter (10 rules)
# 2.6.2.1. MongoDB replication lag
- Mongodb replication lag is more than 10s
# 2.6.2.2. MongoDB replication Status 3
- MongoDB replica set member is either performing startup self-checks, or transitioning from completing a rollback or resync
# 2.6.2.3. MongoDB replication Status 6
- MongoDB replica set member, as seen from another member of the set, is not yet known
# 2.6.2.4. MongoDB replication Status 8
- MongoDB replica set member, as seen from another member of the set, is unreachable
# 2.6.2.5. MongoDB replication Status 9
- MongoDB replica set member is actively performing a rollback. Data is not available for reads
# 2.6.2.6. MongoDB replication Status 10
- MongoDB replica set member was once in a replica set but was subsequently removed
# 2.6.2.7. MongoDB number cursors open
- Too many cursors opened by MongoDB for clients (> 10k)
# 2.6.2.8. MongoDB cursors timeouts
- Too many cursors are timing out
# 2.6.2.9. MongoDB too many connections
- Too many connections (> 80%)
# 2.6.2.10. MongoDB virtual memory usage
- High memory usage
# 2.6.3. MongoDB : stefanprodan/mgob (1 rule)
# 2.6.3.1. Mgob backup failed
- MongoDB backup has failed
# 2.7.1. RabbitMQ : rabbitmq/rabbitmq-prometheus (9 rules)
# 2.7.1.1. Rabbitmq node down
- Less than 3 nodes running in RabbitMQ cluster
# 2.7.1.2. Rabbitmq node not distributed
- Distribution link state is not 'up'
# 2.7.1.3. Rabbitmq instances different versions
- Running different versions of RabbitMQ in the same cluster can lead to failures.
# 2.7.1.4. Rabbitmq memory high
- A node uses more than 90% of allocated RAM
# 2.7.1.5. Rabbitmq file descriptors usage
- A node uses more than 90% of file descriptors
# 2.7.1.6. Rabbitmq too many unack messages
- Too many unacknowledged messages
# 2.7.1.7. Rabbitmq too many connections
- The total number of connections on a node is too high
# 2.7.1.8. Rabbitmq no queue consumer
- A queue has less than 1 consumer
# 2.7.1.9. Rabbitmq unroutable messages
- A queue has unroutable messages
# 2.7.2. RabbitMQ : kbudde/rabbitmq-exporter (11 rules)
# 2.7.2.1. Rabbitmq down
- RabbitMQ node down
# 2.7.2.2. Rabbitmq cluster down
- Less than 3 nodes running in RabbitMQ cluster
# 2.7.2.3. Rabbitmq cluster partition
- Cluster partition
# 2.7.2.4. Rabbitmq out of memory
- Memory available for RabbitMQ is low (< 10%)
# 2.7.2.5. Rabbitmq too many connections
- RabbitMQ instance has too many connections (> 1000)
# 2.7.2.6. Rabbitmq dead letter queue filling up
- Dead letter queue is filling up (> 10 msgs)
# 2.7.2.7. Rabbitmq too many messages in queue
- Queue is filling up (> 1000 msgs)
# 2.7.2.8. Rabbitmq slow queue consuming
- Queue messages are consumed slowly (> 60s)
# 2.7.2.9. Rabbitmq no consumer
- Queue has no consumer
# 2.7.2.10. Rabbitmq too many consumers
- Queue should have only 1 consumer
# 2.7.2.11. Rabbitmq inactive exchange
- Exchange receives fewer than 5 messages per second
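A sketch of the "memory high" rule for the built-in rabbitmq-prometheus plugin; the metric pair (process resident memory vs. the configured memory watermark limit) and the 2-minute hold are assumptions to verify against your plugin version.

```yaml
groups:
  - name: rabbitmq-prometheus
    rules:
      # Node resident memory vs. the configured memory high-watermark limit.
      - alert: RabbitmqMemoryHigh
        expr: |
          rabbitmq_process_resident_memory_bytes
            / rabbitmq_resident_memory_limit_bytes * 100 > 90
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "RabbitMQ memory high (instance {{ $labels.instance }})"
```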
# 2.8. Elasticsearch : justwatchcom/elasticsearch_exporter (15 rules)
# 2.8.1. Elasticsearch Heap Usage Too High
- The heap usage is over 90%
# 2.8.2. Elasticsearch Heap Usage warning
- The heap usage is over 80%
# 2.8.3. Elasticsearch disk out of space
- The disk usage is over 90%
# 2.8.4. Elasticsearch disk space low
- The disk usage is over 80%
# 2.8.5. Elasticsearch Cluster Red
- Elastic Cluster Red status
# 2.8.6. Elasticsearch Cluster Yellow
- Elastic Cluster Yellow status
# 2.8.7. Elasticsearch Healthy Nodes
- Missing node in Elasticsearch cluster
# 2.8.8. Elasticsearch Healthy Data Nodes
- Missing data node in Elasticsearch cluster
# 2.8.9. Elasticsearch relocating shards
- Elasticsearch is relocating shards
# 2.8.10. Elasticsearch relocating shards too long
- Elasticsearch has been relocating shards for 15min
# 2.8.11. Elasticsearch initializing shards
- Elasticsearch is initializing shards
# 2.8.12. Elasticsearch initializing shards too long
- Elasticsearch has been initializing shards for 15 min
# 2.8.13. Elasticsearch unassigned shards
- Elasticsearch has unassigned shards
# 2.8.14. Elasticsearch pending tasks
- Elasticsearch has pending tasks. Cluster works slowly.
# 2.8.15. Elasticsearch no new documents
- No new documents for 10 min!
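A sketch of the "Cluster Red" rule; it relies on the exporter's per-color health series, and the zero-minute hold is an assumption.

```yaml
groups:
  - name: elasticsearch-exporter
    rules:
      # The exporter publishes one series per color; the "red" series is 1
      # while the cluster health is red.
      - alert: ElasticsearchClusterRed
        expr: elasticsearch_cluster_health_status{color="red"} == 1
        for: 0m
        labels:
          severity: critical
        annotations:
          summary: "Elasticsearch cluster red (instance {{ $labels.instance }})"
```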
# 2.9.1. Cassandra : instaclustr/cassandra-exporter (12 rules)
# 2.9.1.1. Cassandra Node is unavailable
- Cassandra Node is unavailable - {{ $labels.cassandra_cluster }} {{ $labels.exported_endpoint }}
# 2.9.1.2. Cassandra many compaction tasks are pending
- Many Cassandra compaction tasks are pending - {{ $labels.cassandra_cluster }}
# 2.9.1.3. Cassandra commitlog pending tasks
- Cassandra commitlog pending tasks - {{ $labels.cassandra_cluster }}
# 2.9.1.4. Cassandra compaction executor blocked tasks
- Some Cassandra compaction executor tasks are blocked - {{ $labels.cassandra_cluster }}
# 2.9.1.5. Cassandra flush writer blocked tasks
- Some Cassandra flush writer tasks are blocked - {{ $labels.cassandra_cluster }}
# 2.9.1.6. Cassandra connection timeouts total
- Some connections between nodes are ending in timeout - {{ $labels.cassandra_cluster }}
# 2.9.1.7. Cassandra storage exceptions
- Something is going wrong with cassandra storage - {{ $labels.cassandra_cluster }}
# 2.9.1.8. Cassandra tombstone dump
- Cassandra tombstone dump - {{ $labels.cassandra_cluster }}
# 2.9.1.9. Cassandra client request unavailable write
- Some Cassandra client write requests are unavailable - {{ $labels.cassandra_cluster }}
# 2.9.1.10. Cassandra client request unavailable read
- Some Cassandra client read requests are unavailable - {{ $labels.cassandra_cluster }}
# 2.9.1.11. Cassandra client request write failure
- Write failures have occurred, ensure there are not too many unavailable nodes - {{ $labels.cassandra_cluster }}
# 2.9.1.12. Cassandra client request read failure
- Read failures have occurred, ensure there are not too many unavailable nodes - {{ $labels.cassandra_cluster }}
# 2.9.2. Cassandra : criteo/cassandra_exporter (18 rules)
# 2.9.2.1. Cassandra hints count
- Cassandra hints count has changed on {{ $labels.instance }}; some nodes may be down
# 2.9.2.2. Cassandra compaction task pending
- Many Cassandra compaction tasks are pending. You might need to increase I/O capacity by adding nodes to the cluster.
# 2.9.2.3. Cassandra viewwrite latency
- High viewwrite latency on {{ $labels.instance }} cassandra node
# 2.9.2.4. Cassandra bad hacker
- Increase of Cassandra authentication failures
# 2.9.2.5. Cassandra node down
- Cassandra node down
# 2.9.2.6. Cassandra commitlog pending tasks
- Unexpected number of Cassandra commitlog pending tasks
# 2.9.2.7. Cassandra compaction executor blocked tasks
- Some Cassandra compaction executor tasks are blocked
# 2.9.2.8. Cassandra flush writer blocked tasks
- Some Cassandra flush writer tasks are blocked
# 2.9.2.9. Cassandra repair pending tasks
- Some Cassandra repair tasks are pending
# 2.9.2.10. Cassandra repair blocked tasks
- Some Cassandra repair tasks are blocked
# 2.9.2.11. Cassandra connection timeouts total
- Some connections between nodes are ending in timeout
# 2.9.2.12. Cassandra storage exceptions
- Something is going wrong with cassandra storage
# 2.9.2.13. Cassandra tombstone dump
- Too many tombstones scanned in queries
# 2.9.2.14. Cassandra client request unavailable write
- Write failures have occurred because too many nodes are unavailable
# 2.9.2.15. Cassandra client request unavailable read
- Read failures have occurred because too many nodes are unavailable
# 2.9.2.16. Cassandra client request write failure
- A lot of write failures encountered. A write failure is a non-timeout exception encountered during a write request. Examine the reason map to find the root cause. The most common cause for this type of error is when batch sizes are too large.
# 2.9.2.17. Cassandra client request read failure
- A lot of read failures encountered. A read failure is a non-timeout exception encountered during a read request. Examine the reason map to find the root cause. The most common cause for this type of error is when batch sizes are too large.
# 2.9.2.18. Cassandra cache hit rate key cache
- Key cache hit rate is below 85%
# 2.10.1. Zookeeper : cloudflare/kafka_zookeeper_exporter
# 2.10.2. Zookeeper : dabealu/zookeeper-exporter (4 rules)
# 2.10.2.1. Zookeeper Down
- Zookeeper down on instance {{ $labels.instance }}
# 2.10.2.2. Zookeeper missing leader
- Zookeeper cluster has no node marked as leader
# 2.10.2.3. Zookeeper Too Many Leaders
- Zookeeper cluster has too many nodes marked as leader
# 2.10.2.4. Zookeeper Not Ok
- Zookeeper instance is not ok
# 2.11.1. Kafka : danielqsj/kafka_exporter (2 rules)
# 2.11.1.1. Kafka topics replicas
- Kafka topic in-sync partition
# 2.11.1.2. Kafka consumers group
- Kafka consumers group
# 2.11.2. Kafka : linkedin/Burrow (2 rules)
# 2.11.2.1. Kafka topic offset decreased
- Kafka topic offset has decreased
# 2.11.2.2. Kafka consumer lag
- Kafka consumer lag has been increasing for 30 minutes
# 2.12. Pulsar : embedded exporter (10 rules)
# 2.12.1. Pulsar subscription high number of backlog entries
- The number of subscription backlog entries is over 5k
# 2.12.2. Pulsar subscription very high number of backlog entries
- The number of subscription backlog entries is over 100k
# 2.12.3. Pulsar topic large backlog storage size
- The topic backlog storage size is over 5 GB
# 2.12.4. Pulsar topic very large backlog storage size
- The topic backlog storage size is over 20 GB
# 2.12.5. Pulsar high write latency
- Messages cannot be written in a timely fashion
# 2.12.6. Pulsar large message payload
- Observing large message payload (> 1MB)
# 2.12.7. Pulsar high ledger disk usage
- Observing Ledger Disk Usage (> 75%)
# 2.12.8. Pulsar read only bookies
- Observing Readonly Bookies
# 2.12.9. Pulsar high number of function errors
- Observing more than 10 Function errors per minute
# 2.12.10. Pulsar high number of sink errors
- Observing more than 10 Sink errors per minute
# 2.13. Solr : embedded exporter (4 rules)
# 2.13.1. Solr update errors
- Solr collection {{ $labels.collection }} has failed updates for replica {{ $labels.replica }} on {{ $labels.base_url }}.
# 2.13.2. Solr query errors
- Solr has increased query errors in collection {{ $labels.collection }} for replica {{ $labels.replica }} on {{ $labels.base_url }}.
# 2.13.3. Solr replication errors
- Solr collection {{ $labels.collection }} has failed updates for replica {{ $labels.replica }} on {{ $labels.base_url }}.
# 2.13.4. Solr low live node count
- Solr collection {{ $labels.collection }} has less than two live nodes for replica {{ $labels.replica }} on {{ $labels.base_url }}.
# 3.1. Nginx : knyar/nginx-lua-prometheus (3 rules)
# 3.1.1. Nginx high HTTP 4xx error rate
- Too many HTTP requests with status 4xx (> 5%)
# 3.1.2. Nginx high HTTP 5xx error rate
- Too many HTTP requests with status 5xx (> 5%)
# 3.1.3. Nginx latency high
- Nginx p99 latency is higher than 3 seconds
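A sketch of the 5xx error-rate rule; the metric name `nginx_http_requests_total` with a `status` label follows the exporter's example Lua configuration and may differ in your setup.

```yaml
groups:
  - name: nginx-lua-prometheus
    rules:
      # Share of 5xx responses among all requests over the last minute.
      - alert: NginxHighHttp5xxErrorRate
        expr: |
          sum(rate(nginx_http_requests_total{status=~"^5.."}[1m]))
            / sum(rate(nginx_http_requests_total[1m])) * 100 > 5
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Nginx high HTTP 5xx error rate (> 5%)"
```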
# 3.2. Apache : Lusitaniae/apache_exporter (3 rules)
# 3.2.1. Apache down
- Apache down
# 3.2.2. Apache workers load
- Apache workers in busy state are approaching the max workers count (80% of workers busy) on {{ $labels.instance }}
# 3.2.3. Apache restart
- Apache has just been restarted.
# 3.3.1. HaProxy : Embedded exporter (HAProxy >= v2) (14 rules)
# 3.3.1.1. HAProxy high HTTP 4xx error rate backend
- Too many HTTP requests with status 4xx (> 5%) on backend {{ $labels.fqdn }}/{{ $labels.backend }}
# 3.3.1.2. HAProxy high HTTP 5xx error rate backend
- Too many HTTP requests with status 5xx (> 5%) on backend {{ $labels.fqdn }}/{{ $labels.backend }}
# 3.3.1.3. HAProxy high HTTP 4xx error rate server
- Too many HTTP requests with status 4xx (> 5%) on server {{ $labels.server }}
# 3.3.1.4. HAProxy high HTTP 5xx error rate server
- Too many HTTP requests with status 5xx (> 5%) on server {{ $labels.server }}
# 3.3.1.5. HAProxy server response errors
- Too many response errors to {{ $labels.server }} server (> 5%).
# 3.3.1.6. HAProxy backend connection errors
- Too many connection errors to {{ $labels.fqdn }}/{{ $labels.backend }} backend (> 100 req/s). Request throughput may be too high.
# 3.3.1.7. HAProxy server connection errors
- Too many connection errors to {{ $labels.server }} server (> 100 req/s). Request throughput may be too high.
# 3.3.1.8. HAProxy backend max active session > 80%
- Session limit from backend {{ $labels.proxy }} to server {{ $labels.server }} reached 80% of limit - {{ $value | printf "%.2f"}}%
# 3.3.1.9. HAProxy pending requests
- Some HAProxy requests are pending on {{ $labels.proxy }} - {{ $value | printf "%.2f"}}
# 3.3.1.10. HAProxy HTTP slowing down
- Average request time is increasing - {{ $value | printf "%.2f"}}
# 3.3.1.11. HAProxy retry high
- High rate of retries on {{ $labels.proxy }} - {{ $value | printf "%.2f"}}
# 3.3.1.12. HAproxy has no alive backends
- HAProxy has no alive active or backup backends for {{ $labels.proxy }}
# 3.3.1.13. HAProxy frontend security blocked requests
- HAProxy is blocking requests for security reasons
# 3.3.1.14. HAProxy server healthcheck failure
- Some server healthchecks are failing on {{ $labels.server }}
# 3.3.2. HaProxy : prometheus/haproxy_exporter (HAProxy < v2) (16 rules)
# 3.3.2.1. HAProxy down
- HAProxy down
# 3.3.2.2. HAProxy high HTTP 4xx error rate backend
- Too many HTTP requests with status 4xx (> 5%) on backend {{ $labels.fqdn }}/{{ $labels.backend }}
# 3.3.2.3. HAProxy high HTTP 5xx error rate backend
- Too many HTTP requests with status 5xx (> 5%) on backend {{ $labels.fqdn }}/{{ $labels.backend }}
# 3.3.2.4. HAProxy high HTTP 4xx error rate server
- Too many HTTP requests with status 4xx (> 5%) on server {{ $labels.server }}
# 3.3.2.5. HAProxy high HTTP 5xx error rate server
- Too many HTTP requests with status 5xx (> 5%) on server {{ $labels.server }}
# 3.3.2.6. HAProxy server response errors
- Too many response errors to {{ $labels.server }} server (> 5%).
# 3.3.2.7. HAProxy backend connection errors
- Too many connection errors to {{ $labels.fqdn }}/{{ $labels.backend }} backend (> 100 req/s). Request throughput may be too high.
# 3.3.2.8. HAProxy server connection errors
- Too many connection errors to {{ $labels.server }} server (> 100 req/s). Request throughput may be too high.
# 3.3.2.9. HAProxy backend max active session
- HAproxy backend {{ $labels.fqdn }}/{{ $labels.backend }} is reaching session limit (> 80%).
# 3.3.2.10. HAProxy pending requests
- Some HAProxy requests are pending on {{ $labels.fqdn }}/{{ $labels.backend }} backend
# 3.3.2.11. HAProxy HTTP slowing down
- Average request time is increasing
# 3.3.2.12. HAProxy retry high
- High rate of retries on {{ $labels.fqdn }}/{{ $labels.backend }} backend
# 3.3.2.13. HAProxy backend down
- HAProxy backend is down
# 3.3.2.14. HAProxy server down
- HAProxy server is down
# 3.3.2.15. HAProxy frontend security blocked requests
- HAProxy is blocking requests for security reasons
# 3.3.2.16. HAProxy server healthcheck failure
- Some server healthchecks are failing on {{ $labels.server }}
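A sketch of the "backend down" rule against the classic haproxy_exporter; the summary label and zero-minute hold are assumptions.

```yaml
groups:
  - name: haproxy-exporter
    rules:
      # haproxy_backend_up is exported per backend by the classic haproxy_exporter.
      - alert: HaproxyBackendDown
        expr: haproxy_backend_up == 0
        for: 0m
        labels:
          severity: critical
        annotations:
          summary: "HAProxy backend down (backend {{ $labels.backend }})"
```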
# 3.4.1. Traefik : Embedded exporter v2 (3 rules)
# 3.4.1.1. Traefik service down
- All Traefik services are down
# 3.4.1.2. Traefik high HTTP 4xx error rate service
- Traefik service 4xx error rate is above 5%
# 3.4.1.3. Traefik high HTTP 5xx error rate service
- Traefik service 5xx error rate is above 5%
# 3.4.2. Traefik : Embedded exporter v1 (3 rules)
# 3.4.2.1. Traefik backend down
- All Traefik backends are down
# 3.4.2.2. Traefik high HTTP 4xx error rate backend
- Traefik backend 4xx error rate is above 5%
# 3.4.2.3. Traefik high HTTP 5xx error rate backend
- Traefik backend 5xx error rate is above 5%
# 4.1. PHP-FPM : bakins/php-fpm-exporter (1 rule)
# 4.1.1. PHP-FPM max-children reached
- PHP-FPM reached max children - {{ $labels.instance }}
# 4.2. JVM : java-client (1 rule)
# 4.2.1. JVM memory filling up
- JVM memory is filling up (> 80%)
# 4.3. Sidekiq : Strech/sidekiq-prometheus-exporter (2 rules)
# 4.3.1. Sidekiq queue size
- Sidekiq queue {{ $labels.name }} is growing
# 4.3.2. Sidekiq scheduling latency too high
- Sidekiq jobs are taking more than 1min to be picked up. Users may be seeing delays in background processing.
# 5.1. Kubernetes : kube-state-metrics (33 rules)
# 5.1.1. Kubernetes Node ready
- Node {{ $labels.node }} has been unready for a long time
# 5.1.2. Kubernetes memory pressure
- {{ $labels.node }} has MemoryPressure condition
# 5.1.3. Kubernetes disk pressure
- {{ $labels.node }} has DiskPressure condition
# 5.1.4. Kubernetes out of disk
- {{ $labels.node }} has OutOfDisk condition
# 5.1.5. Kubernetes out of capacity
- {{ $labels.node }} is out of capacity
# 5.1.6. Kubernetes container oom killer
- Container {{ $labels.container }} in pod {{ $labels.namespace }}/{{ $labels.pod }} has been OOMKilled {{ $value }} times in the last 10 minutes.
# 5.1.7. Kubernetes Job failed
- Job {{ $labels.namespace }}/{{ $labels.exported_job }} failed to complete
# 5.1.8. Kubernetes CronJob suspended
- CronJob {{ $labels.namespace }}/{{ $labels.cronjob }} is suspended
# 5.1.9. Kubernetes PersistentVolumeClaim pending
- PersistentVolumeClaim {{ $labels.namespace }}/{{ $labels.persistentvolumeclaim }} is pending
# 5.1.10. Kubernetes Volume out of disk space
- Volume is almost full (< 10% left)
# 5.1.11. Kubernetes Volume full in four days
- {{ $labels.namespace }}/{{ $labels.persistentvolumeclaim }} is expected to fill up within four days. Currently {{ $value | humanize }}% is available.
# 5.1.12. Kubernetes PersistentVolume error
- Persistent volume is in bad state
# 5.1.13. Kubernetes StatefulSet down
- A StatefulSet went down
# 5.1.14. Kubernetes HPA scaling ability
- Pod is unable to scale
# 5.1.15. Kubernetes HPA metric availability
- HPA is not able to collect metrics
# 5.1.16. Kubernetes HPA scale capability
- The maximum number of desired Pods has been hit
# 5.1.17. Kubernetes Pod not healthy
- Pod has been in a non-ready state for longer than 15 minutes.
# 5.1.18. Kubernetes pod crash looping
- Pod {{ $labels.pod }} is crash looping
# 5.1.19. Kubernetes ReplicaSet mismatch
- Deployment Replicas mismatch
# 5.1.20. Kubernetes Deployment replicas mismatch
- Deployment Replicas mismatch
# 5.1.21. Kubernetes StatefulSet replicas mismatch
- A StatefulSet does not match the expected number of replicas.
# 5.1.22. Kubernetes Deployment generation mismatch
- A Deployment has failed but has not been rolled back.
# 5.1.23. Kubernetes StatefulSet generation mismatch
- A StatefulSet has failed but has not been rolled back.
# 5.1.24. Kubernetes StatefulSet update not rolled out
- StatefulSet update has not been rolled out.
# 5.1.25. Kubernetes DaemonSet rollout stuck
- Some Pods of DaemonSet are not scheduled or not ready
# 5.1.26. Kubernetes DaemonSet misscheduled
- Some DaemonSet Pods are running where they are not supposed to run
# 5.1.27. Kubernetes CronJob too long
- CronJob {{ $labels.namespace }}/{{ $labels.cronjob }} is taking more than 1h to complete.
# 5.1.28. Kubernetes job slow completion
- Kubernetes Job {{ $labels.namespace }}/{{ $labels.job_name }} did not complete in time.
# 5.1.29. Kubernetes API server errors
- Kubernetes API server is experiencing a high error rate
# 5.1.30. Kubernetes API client errors
- Kubernetes API client is experiencing a high error rate
# 5.1.31. Kubernetes client certificate expires next week
- A client certificate used to authenticate to the apiserver is expiring next week.
# 5.1.32. Kubernetes client certificate expires soon
- A client certificate used to authenticate to the apiserver is expiring in less than 24.0 hours.
# 5.1.33. Kubernetes API server latency
- Kubernetes API server has a 99th percentile latency of {{ $value }} seconds for {{ $labels.verb }} {{ $labels.resource }}.
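A sketch of the crash-looping rule based on kube-state-metrics restart counters; the "more than 3 restarts in 10 minutes" threshold is an illustrative assumption.

```yaml
groups:
  - name: kube-state-metrics
    rules:
      # More than three container restarts within the last 10 minutes.
      - alert: KubernetesPodCrashLooping
        expr: increase(kube_pod_container_status_restarts_total[10m]) > 3
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} is crash looping"
```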
# 5.2. Nomad : Embedded exporter (4 rules)
# 5.2.1. Nomad job failed
- Nomad job failed
# 5.2.2. Nomad job lost
- Nomad job lost
# 5.2.3. Nomad job queued
- Nomad job queued
# 5.2.4. Nomad blocked evaluation
- Nomad blocked evaluation
# 5.3. Consul : prometheus/consul_exporter (3 rules)
# 5.3.1. Consul service healthcheck failed
- Service: `{{ $labels.service_name }}` Healthcheck: `{{ $labels.service_id }}`
# 5.3.2. Consul missing master node
- The number of consul raft peers should be 3 in order to preserve quorum.
# 5.3.3. Consul agent unhealthy
- A Consul agent is down
# 5.4. Etcd : Embedded exporter (13 rules)
# 5.4.1. Etcd insufficient Members
- Etcd cluster should have an odd number of members
# 5.4.2. Etcd no Leader
- Etcd cluster has no leader
# 5.4.3. Etcd high number of leader changes
- Etcd leader changed more than twice in 10 minutes
# 5.4.4. Etcd high number of failed GRPC requests
- More than 1% GRPC request failure detected in Etcd
# 5.4.5. Etcd high number of failed GRPC requests
- More than 5% GRPC request failure detected in Etcd
# 5.4.6. Etcd GRPC requests slow
- GRPC requests slowing down, 99th percentile is over 0.15s
# 5.4.7. Etcd high number of failed HTTP requests
- More than 1% HTTP failure detected in Etcd
# 5.4.8. Etcd high number of failed HTTP requests
- More than 5% HTTP failure detected in Etcd
# 5.4.9. Etcd HTTP requests slow
- HTTP requests slowing down, 99th percentile is over 0.15s
# 5.4.10. Etcd member communication slow
- Etcd member communication slowing down, 99th percentile is over 0.15s
# 5.4.11. Etcd high number of failed proposals
- Etcd server got more than 5 failed proposals in the past hour
# 5.4.12. Etcd high fsync durations
- Etcd WAL fsync duration increasing, 99th percentile is over 0.5s
# 5.4.13. Etcd high commit durations
- Etcd commit duration increasing, 99th percentile is over 0.25s
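A sketch of the "no leader" rule using etcd's own leader gauge; the severity and zero-minute hold are assumptions.

```yaml
groups:
  - name: etcd
    rules:
      # etcd_server_has_leader is 0 while a member sees no cluster leader.
      - alert: EtcdNoLeader
        expr: etcd_server_has_leader == 0
        for: 0m
        labels:
          severity: critical
        annotations:
          summary: "Etcd no leader (instance {{ $labels.instance }})"
```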
# 5.5. Linkerd : Embedded exporter (1 rule)
# 5.5.1. Linkerd high error rate
- Linkerd error rate for {{ $labels.deployment | $labels.statefulset | $labels.daemonset }} is over 10%
# 5.6. Istio : Embedded exporter (10 rules)
# 5.6.1. Istio Kubernetes gateway availability drop
- Gateway pods have dropped. Inbound traffic will likely be affected.
# 5.6.2. Istio Pilot high total request rate
- Number of Istio Pilot push errors is too high (> 5%). Envoy sidecars might have outdated configuration.
# 5.6.3. Istio Mixer Prometheus dispatches low
- Number of Mixer dispatches to Prometheus is too low. Istio metrics might not be exported properly.
# 5.6.4. Istio high total request rate
- Global request rate in the service mesh is unusually high.
# 5.6.5. Istio low total request rate
- Global request rate in the service mesh is unusually low.
# 5.6.6. Istio high 4xx error rate
- High percentage of HTTP 4xx responses in Istio (> 5%).
# 5.6.7. Istio high 5xx error rate
- High percentage of HTTP 5xx responses in Istio (> 5%).
# 5.6.8. Istio high request latency
- Istio average request execution time is longer than 100ms.
# 5.6.9. Istio latency 99 percentile
- The slowest 1% of Istio requests are longer than 1s.
# 5.6.10. Istio Pilot Duplicate Entry
- Istio pilot duplicate entry error.
# 6.1. Ceph : Embedded exporter (13 rules)
# 6.1.1. Ceph State
- Ceph instance unhealthy
# 6.1.2. Ceph monitor clock skew
- Ceph monitor clock skew detected. Please check ntp and hardware clock settings
# 6.1.3. Ceph monitor low space
- Ceph monitor storage is low.
# 6.1.4. Ceph OSD Down
- Ceph Object Storage Daemon Down
# 6.1.5. Ceph high OSD latency
- Ceph Object Storage Daemon latency is high. Please check whether it is stuck in a weird state.
# 6.1.6. Ceph OSD low space
- Ceph Object Storage Daemon is running out of space. Please add more disks.
# 6.1.7. Ceph OSD reweighted
- Ceph Object Storage Daemon is taking too much time to resize.
# 6.1.8. Ceph PG down
- Some Ceph placement groups are down. Please ensure that all the data are available.
# 6.1.9. Ceph PG incomplete
- Some Ceph placement groups are incomplete. Please ensure that all the data are available.
# 6.1.10. Ceph PG inconsistent
- Some Ceph placement groups are inconsistent. Data is available but inconsistent across nodes.
# 6.1.11. Ceph PG activation long
- Some Ceph placement groups are taking too long to activate.
# 6.1.12. Ceph PG backfill full
- Some Ceph placement groups are located on a full Object Storage Daemon. Those PGs may become unavailable shortly. Please check OSDs, change weight or reconfigure CRUSH rules.
# 6.1.13. Ceph PG unavailable
- Some Ceph placement groups are unavailable.
# 6.2. SpeedTest : Speedtest exporter (2 rules)
# 6.2.1. SpeedTest Slow Internet Download
- Internet download speed is currently {{ humanize $value }} Mbps.
# 6.2.2. SpeedTest Slow Internet Upload
- Internet upload speed is currently {{ humanize $value }} Mbps.
# 6.3. ZFS : node-exporter (1 rule)
# 6.3.1. ZFS offline pool
- A ZFS zpool is in an unexpected state: {{ $labels.state }}.
# 6.4. OpenEBS : Embedded exporter (1 rule)
# 6.4.1. OpenEBS used pool capacity
- OpenEBS pool uses more than 80% of its capacity
# 6.5. Minio : Embedded exporter (2 rules)
# 6.5.1. Minio disk offline
- Minio disk is offline
# 6.5.2. Minio disk space usage
- Minio available free space is low (< 10%)
# 6.6. SSL/TLS : ssl_exporter (4 rules)
# 6.6.1. SSL certificate probe failed
- Failed to fetch SSL information {{ $labels.instance }}
# 6.6.2. SSL certificate OCSP status unknown
- Failed to get the OCSP status {{ $labels.instance }}
# 6.6.3. SSL certificate revoked
- SSL certificate revoked {{ $labels.instance }}
# 6.6.4. SSL certificate expiry (< 7 days)
- {{ $labels.instance }} certificate is expiring in 7 days
# 6.7. Juniper : czerwonk/junos_exporter (3 rules)
# 6.7.1. Juniper switch down
- The switch appears to be down
# 6.7.2. Juniper high Bandwidth Usage 1GiB
- Interface is highly saturated. (> 0.90GiB/s)
# 6.7.3. Juniper high Bandwidth Usage 1GiB
- Interface is getting saturated. (> 0.80GiB/s)
# 6.8. CoreDNS : Embedded exporter (1 rule)
# 6.8.1. CoreDNS Panic Count
- Number of CoreDNS panics encountered
# 6.9. Freeswitch : znerol/prometheus-freeswitch-exporter (3 rules)
# 6.9.1. Freeswitch down
- Freeswitch is unresponsive
# 6.9.2. Freeswitch Sessions Warning
- High sessions usage on {{ $labels.instance }}: {{ $value | printf "%.2f"}}%
# 6.9.3. Freeswitch Sessions Critical
- High sessions usage on {{ $labels.instance }}: {{ $value | printf "%.2f"}}%
# 6.10. Hashicorp Vault : Embedded exporter (3 rules)
# 6.10.1. Vault sealed
- Vault instance is sealed on {{ $labels.instance }}
# 6.10.2. Vault too many pending tokens
- Too many pending tokens {{ $labels.instance }}: {{ $value | printf "%.2f"}}%
# 6.10.3. Vault too many infinity tokens
- Too many infinity tokens {{ $labels.instance }}: {{ $value | printf "%.2f"}}%
# 7.1. Thanos : Embedded exporter (3 rules)
# 7.1.1. Thanos compaction halted
- Thanos compaction has failed to run and is now halted.
# 7.1.2. Thanos compact bucket operation failure
- Thanos compaction has failing storage operations
# 7.1.3. Thanos compact not run
- Thanos compaction has not run in 24 hours.
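A sketch of the "compaction halted" rule; the metric name `thanos_compact_halted` and the alert shape are assumptions to check against your Thanos version.

```yaml
groups:
  - name: thanos
    rules:
      # thanos_compact_halted goes to 1 when the compactor stops after a
      # non-recoverable error.
      - alert: ThanosCompactionHalted
        expr: thanos_compact_halted == 1
        for: 0m
        labels:
          severity: critical
        annotations:
          summary: "Thanos compaction halted (instance {{ $labels.instance }})"
```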
# 7.2. Loki : Embedded exporter (4 rules)
# 7.2.1. Loki process too many restarts
- A Loki process had too many restarts (target {{ $labels.instance }})
# 7.2.2. Loki request errors
- The {{ $labels.job }} and {{ $labels.route }} are experiencing errors
# 7.2.3. Loki request panic
- The {{ $labels.job }} is experiencing a {{ printf "%.2f" $value }}% increase of panics
# 7.2.4. Loki request latency
- The {{ $labels.job }} {{ $labels.route }} is experiencing {{ printf "%.2f" $value }}s 99th percentile latency
# 7.3. Promtail : Embedded exporter (2 rules)
# 7.3.1. Promtail request errors
- The {{ $labels.job }} {{ $labels.route }} is experiencing {{ printf "%.2f" $value }}% errors.
# 7.3.2. Promtail request latency
- The {{ $labels.job }} {{ $labels.route }} is experiencing {{ printf "%.2f" $value }}s 99th percentile latency.
# 7.4. Cortex : Embedded exporter (6 rules)
# 7.4.1. Cortex ruler configuration reload failure
- Cortex ruler configuration reload failure (instance {{ $labels.instance }})
# 7.4.2. Cortex not connected to Alertmanager
- Cortex not connected to Alertmanager (instance {{ $labels.instance }})
# 7.4.3. Cortex notifications are being dropped
- Cortex notifications are being dropped due to errors (instance {{ $labels.instance }})