Some sections of the returned document contain too many fields, so I split them up by their second-level keys. Some fields are annotated; for the rest, because of a problem with the docs site on my end I could only compare against the 7.0 parameters, so quite a few fields are left without a description.
import json
from MongoDB_Conn import MongoDB_Conn
class MongoDB_status:
@staticmethod
def get_mongodb_status(client):
"""
获取MongoDB的serverStatus.
:return:mongodb_status_dict
"""
admin = client.admin
mongodb_status_dict = str(admin.command('serverStatus'))
return json.dumps(mongodb_status_dict)
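A note on serialization: serverStatus results contain datetime values (e.g. localTime) that json.dumps cannot encode directly. A minimal sketch with a made-up stand-in document, showing the default=str fallback:

```python
import datetime
import json

# A tiny stand-in for a serverStatus result; real output has many more fields.
sample_status = {
    "host": "db1",
    "uptime": 3600,
    "localTime": datetime.datetime(2024, 1, 1, 12, 0, 0),
}

# json.dumps raises TypeError on datetime values unless a fallback encoder is given.
encoded = json.dumps(sample_status, default=str)
decoded = json.loads(encoded)
print(decoded["localTime"])  # the datetime survives as a readable string
```

This is why the helpers below either str() individual datetime fields or pass default=str.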
@staticmethod
def mongodb_status_base(mongodb_status_dict):
"""
host: 系统的主机名。在 Unix/Linux 系统中,这应该与hostname 命令的输出相同。
advisoryHostFQDNs: 系统的完全限定域名 (FQDN) 的数组。
version: 当前 MongoDB 进程的 MongoDB 版本。
process: 当前的 MongoDB 进程。可能的值为:mongos 或 mongod
pid: 进程 ID 号。
uptime: 当前 MongoDB 进程处于活动状态的秒数。
uptimeMillis: 当前 MongoDB 进程处于活动状态的毫秒数。
uptimeEstimate: 根据 MongoDB 的内部粗粒度计时系统计算的运行时间(以秒为单位)。
localTime: ISO Date 根据服务器以 UTC 方式当前时间。
:param mongodb_status_dict:
:return:
"""
mongodb_status_base = {
'host': mongodb_status_dict['host'],
'version': mongodb_status_dict['version'],
'process': mongodb_status_dict['process'],
'pid': mongodb_status_dict['pid'],
'uptime': mongodb_status_dict['uptime'],
'uptimeMillis': mongodb_status_dict['uptimeMillis'],
'uptimeEstimate': mongodb_status_dict['uptimeEstimate'],
'localTime': str(mongodb_status_dict['localTime'])
}
return json.dumps(mongodb_status_base)
@staticmethod
def mongodb_status_asserts(mongodb_status_dict):
"""
asserts: 报告自 MongoDB 进程启动以来引发的断言数量。断言是对数据库运行时发生的错误进行的内部检查,可以帮助诊断 MongoDB 服务器的问题。
非零断言值表示断言错误,这种错误并不常见,也不必立即引起关注。生成断言的错误可以记录在日志文件中,也可以直接返回给客户端应用程序以获取更多
信息。
regular:自 MongoDB 进程启动以来提出的常规断言数量。检查 MongoDB 日志以获取更多信息。
warning: 该字段始终返回零值 0。
msg: 自 MongoDB 进程启动以来提出的消息断言数量。检查日志文件以获取有关这些消息的详细信息。
user: 自上次启动 MongoDB 进程以来发生的“用户断言”的数量。这些是用户可能生成的错误,例如磁盘空间不足或重复密钥。您可以通过修复应用程序或
部署的问题来防止这些断言。检查日志文件以获取有关这些消息的详细信息。
rollovers: 自上次启动 MongoDB 进程以来断言计数器滚动的次数。在2 30断言后,计数器将翻转为零。使用此值可为asserts数据结构中的其他值提
供上下文。
:return:mongodb_status_asserts
"""
mongodb_status_asserts = mongodb_status_dict['asserts']
return json.dumps(mongodb_status_asserts)
@staticmethod
def mongodb_status_connections(mongodb_status_dict):
"""
connections:报告连接状态的一份文档。使用这些值来评估服务器当前的负载和容量要求。
current: 从客户端到数据库服务器的传入连接数。此数字包括当前 Shell 会话。考虑connections.available的值,以便为该数据添加更多上下文。
available: 该值将包括所有传入连接,包括任何 shell 连接或来自其他服务器的连接,例如副本集节点或 mongos 实例。
totalCreated: 可用的未使用传入连接数。请将此值与connections.current的值结合起来,以了解数据库上的连接负载,并考虑UNIX ulimit设置
文档以了解有关可用连接的系统阈值的更多信息。创建到服务器的所有传入连接的计数。此数字包括已关闭的连接。
rejected: 6.3 版本中的新增功能.服务器因服务器没有能力接受更多连接或达到 net.maxIncomingConnections 设置而拒绝的传入连接的数量。与
服务器的活跃客户端连接数。活跃客户端连接系指当前正在进行操作的客户端连接。
threaded: 5.0 版本中的新增功能.为客户端请求提供服务的线程所分配的客户端传入连接数。
exhaustIsMaster: 5.0 版本中的新增功能.最后一个请求是指定 isMaster 的 exhausAllowed。如果运行的是 MongoDB 5.0 或更高版本,请勿
使用 isMaster 命令。请改用 hello。最后请求是指定的.
awaitingTopologyChanges: 当前在 hello 或 isMaster 请求中等待拓扑结构更改的客户端的数量。
:param mongodb_status_dict:
:return:
"""
mongodb_status_connections = mongodb_status_dict['connections']
return json.dumps(mongodb_status_connections)
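As the description above suggests, current and available together approximate the configured connection limit, which gives a quick saturation check. A minimal sketch with made-up numbers:

```python
# Illustrative connections sub-document; the values are invented.
connections = {"current": 120, "available": 51080, "totalCreated": 920, "active": 14}

# current + available approximates the configured connection limit, so
# current / (current + available) is a rough saturation ratio.
limit = connections["current"] + connections["available"]
saturation = connections["current"] / limit
print(f"{saturation:.2%} of {limit} allowed connections in use")
```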
@staticmethod
def mongodb_status_defaultrwConcern(mongodb_status_dict):
"""
defaultRWConcern: 提供有关全局默认读关注或写关注设置的本地副本的信息。数据可能已过时或已过期。有关更多信息,
请参阅 getDefaultRWConcern。
localUpdateWallClockTime: 实例上次更新其任何全局读关注或写关注设置副本的本地系统挂钟时间。如果此字段是 下的 唯一
defaultRWConcern字段,则该实例从未了解全局默认的读关注或写关注设置。
:param mongodb_status_dict:
:return:
"""
mongodb_defaultrwConcern = mongodb_status_dict['defaultRWConcern']
return json.dumps(mongodb_defaultrwConcern)
@staticmethod
def mongodb_status_electionMetrics(mongodb_status_dict):
"""
electionMetrics: electionMetrics 部分提供有关此 mongod 实例试图成为主节点而调用的选举的信息.
electionMetrics.stepUpCmd: 当主节点降级后,作为选举移交的一部分,由 mongod 实例调用的选举指标。stepUpCmd包括召集的选举次数和成功
的选举次数。
electionMetrics.priorityTakeover: mongod 实例调用的选举指标,因为其 priority 高于主节点。electionMetrics.priorityTakeover
包括召集的选举次数和成功的选举次数。
electionMetrics.catchUpTakeover: mongod 实例调用的选举指标,因为它比主节点的日期更近。catchUpTakeover包括召集的选举次数和成功
的选举次数。
electionMetrics.electionTimeout: mongod 实例调用的选举指标,因为它无法在 settings.electionTimeoutMillis 内到达主节点。
electionTimeout包括召集的选举次数和成功的选举次数。
electionMetrics.freezeTimeout: mongod 实例在其 freeze period(在此期间,节点无法寻求选举)到期后调用的选举指标。
electionMetrics.freezeTimeout包括召集的选举次数和成功的选举次数。
electionMetrics.numStepDownsCausedByHigherTerm: mongod 实例因看到更高任期(具体来说,其他节点参与了额外选举)而降级的次数。
electionMetrics.numCatchUps: 作为新当选主节点的 mongod 实例必须赶上已知最高 oplog 条目的选举次数。
electionMetrics.numCatchUpsSucceeded: 作为新当选主节点的 mongod 实例成功赶上已知最高 oplog 条目的次数。
electionMetrics.numCatchUpsAlreadyCaughtUp: 作为新当选主节点的 mongod 实例由于当选时已被赶上而结束其追赶进程的次数。
electionMetrics.numCatchUpsSkipped: 作为新当选主节点的 mongod 实例跳过追赶进程的次数。
electionMetrics.numCatchUpsTimedOut: 作为新当选主节点的 mongod 实例由于 settings.catchUpTimeoutMillis 限制而结束其追赶进
程的次数。
electionMetrics.numCatchUpsFailedWithError: 新当选主节点的追赶进程因出错导致失败的次数。
electionMetrics.numCatchUpsFailedWithNewTerm: 其他一名(或多名成员)的任期较长(具体而言,其他成员参加了额外的选举),导致新当
选主节点的追赶进程终止的次数。
electionMetrics.numCatchUpsFailedWithReplSetAbortPrimaryCatchUpCmd: 由于 mongod 收到 replSetAbortPrimaryCatchUp
命令而导致新当选主节点的追赶进程结束的次数。
electionMetrics.averageCatchUpOps: 新当选的主节点在追赶进程时所应用操作的平均次数。
:param mongodb_status_dict:
:return: mongodb_electionMetrics
"""
mongodb_electionMetrics = mongodb_status_dict['electionMetrics']
return json.dumps(mongodb_electionMetrics)
@staticmethod
def mongodb_status_extra_info(mongodb_status_dict):
"""
extra_info: 提供有关底层系统的其他信息的文档。
extra_info.note: 带以下文本的字符串: 'fields vary by platform'
:param mongodb_status_dict:
:return:
"""
mongodb_extra_info = mongodb_status_dict['extra_info']
return json.dumps(mongodb_extra_info)
@staticmethod
    def mongodb_status_flowControl(mongodb_status_dict):
        """
        flowControl: a document returning flow-control statistics. With flow control enabled, as the majority
        commit point lag approaches flowControlTargetLagSeconds, write operations on the primary must obtain
        tickets before taking locks. The metrics returned are therefore meaningful when run on the primary.
        enabled: a boolean indicating whether flow control is enabled (true) or disabled (false). See also
        enableFlowControl.
        targetRateLimit: when run on the primary, the maximum number of tickets that can be acquired per second.
        When run on a secondary, the number returned is a placeholder.
        timeAcquiringMicros: when run on the primary, the total time write operations have waited to acquire a
        ticket. When run on a secondary, the number returned is a placeholder.
        locksPerKiloOp: when run on the primary, an approximation of the number of locks taken per 1000
        operations. When run on a secondary, the number returned is a placeholder.
        sustainerRate: when run on the primary, an approximation of the operations applied per second by the
        secondary sustaining the commit point. When run on a secondary, the number returned is a placeholder.
        isLagged: when run on the primary, a boolean indicating whether flow control has engaged. Flow control
        engages when the majority-committed lag exceeds a certain percentage of the configured
        flowControlTargetLagSeconds. Replication lag can occur without engaging flow control: an unresponsive
        secondary may lag without the replica set receiving enough load to engage flow control, leaving
        flowControl.isLagged false. See Flow Control for additional information.
        isLaggedCount: when run on the primary, the number of times flow control has engaged since the last
        restart. Flow control engages when the majority-committed lag exceeds a certain percentage of
        flowControlTargetLagSeconds. When run on a secondary, the number returned is a placeholder.
        isLaggedTimeMicros: when run on the primary, the amount of time flow control has spent engaged since the
        last restart. Flow control engages when the majority-committed lag exceeds a certain percentage of
        flowControlTargetLagSeconds. When run on a secondary, the number returned is a placeholder.
        :param mongodb_status_dict:
        :return:
        """
        mongodb_flowControl = mongodb_status_dict['flowControl']
        return json.dumps(mongodb_flowControl)
@staticmethod
def mongodb_status_globalLock(mongodb_status_dict):
"""
globalLock:一个文档,其中报告数据库锁定状态。一般来说,锁文档会提供有关锁使用情况的更详细数据。
globalLock.totalTime:自数据库上次启动和创建globalLock以来的时间(以微秒为单位)。这大约相当于服务器的总正常运行时间。
globalLock.currentQueue:提供有关因为锁定而排队的操作数量信息的文档。
globalLock.currentQueue.total:排队等待锁的操作总数(即globalLock.currentQueue.readers和globalLock.currentQueue.writers
的总和)。无需担心持续较小的队列,尤其是较短的操作。 globalLock.activeClients读取者和写入者信息为该数据提供了上下文。
globalLock.currentQueue.readers:当前排队并等待读锁的操作数。不必担心持续较小的读取队列,尤其是短操作的队列。
globalLock.currentQueue.writers:当前排队并等待写锁的操作数。不必担心持续较小的写队列,尤其是短操作的队列。
globalLock.activeClients:提供已连接客户端数量以及这些客户端执行的读写操作相关信息的文档。使用此数据为globalLock.currentQueue
数据提供上下文。
globalLock.activeClients.total:内部客户端连接数据库的总数,包括系统线程以及排队的读取者和写入者。由于包含系统线程,该指标将高于
activeClients.readers 和 activeClients.writers 的总和。
globalLock.activeClients.readers:执行读取操作的活动客户端连接的数量。
globalLock.activeClients.writers:执行写入操作的活动客户端连接的数量。
:param mongodb_status_dict:
:return:
"""
mongodb_globalLock = mongodb_status_dict['globalLock']
return json.dumps(mongodb_globalLock)
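The currentQueue/activeClients relationship described above can be sanity-checked against a sample document. A minimal sketch; the numbers are invented:

```python
# Invented globalLock sub-document for illustration.
globalLock = {
    "currentQueue": {"total": 3, "readers": 1, "writers": 2},
    "activeClients": {"total": 50, "readers": 4, "writers": 2},
}

queue = globalLock["currentQueue"]
clients = globalLock["activeClients"]

# currentQueue.total is defined as readers + writers.
assert queue["total"] == queue["readers"] + queue["writers"]

# activeClients.total includes system threads, so it is at least readers + writers.
assert clients["total"] >= clients["readers"] + clients["writers"]

# A short queue relative to active readers/writers is normally no cause for concern.
print("queued:", queue["total"], "active readers/writers:", clients["readers"] + clients["writers"])
```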
@staticmethod
def mongodb_status_locks(mongodb_status_dict):
"""
locks:报告每个锁 <type> 以及锁 <modes> 上的数据的文档。
locks.ParallelBatchWriterMode:代表并行批量写入模式的锁。在早期版本中,PBWM 信息作为 Global 锁信息的一部分进行报告。
locks.ParallelBatchWriterMode.acquireCount
locks.ParallelBatchWriterMode.acquireCount.r: 代表意向共享(IS)锁。
locks.ParallelBatchWriterMode.acquireCount.W: 代表独占 (X) 锁。
locks.FeatureCompatibilityVersion
locks.FeatureCompatibilityVersion.acquireCount: 在指定模式下获取锁的次数。
locks.FeatureCompatibilityVersion.acquireCount.r: 代表意向共享(IS)锁。
locks.FeatureCompatibilityVersion.acquireCount.w: 代表意图独占 (IX) 锁。
locks.ReplicationStateTransition: 表示副本集节点状态转换采用的锁。
locks.ReplicationStateTransition.acquireCount: 在指定模式下获取锁的次数。
locks.ReplicationStateTransition.acquireCount.w: 代表意图独占 (IX) 锁。
locks.ReplicationStateTransition.acquireCount.W: 代表独占 (X) 锁。
locks.ReplicationStateTransition.acquireWaitCount: 由于锁处于冲突模式而导致locks.<type>.acquireCount锁获取遇到等待的次数。
locks.ReplicationStateTransition.acquireWaitCount.w: 代表意图独占 (IX) 锁。
locks.ReplicationStateTransition.acquireWaitCount.W: 代表独占 (X) 锁。
locks.ReplicationStateTransition.timeAcquiringMicros:锁获取的累积等待时间(以微秒为单位)。locks.<type>.timeAcquiringMicros
除以locks.<type>.acquireWaitCount得出特定锁模式的大致平均等待时间。
locks.ReplicationStateTransition.timeAcquiringMicros.w: 代表意图独占 (IX) 锁。
locks.ReplicationStateTransition.timeAcquiringMicros.W: 代表独占 (X) 锁。
locks.Global: 代表全局锁定。
locks.Global.acquireCount: 在指定模式下获取锁的次数。
locks.Global.acquireCount.r: 代表意向共享(IS)锁。
locks.Global.acquireCount.w: 代表意图独占 (IX) 锁。
locks.Global.acquireCount.W: 代表独占 (X) 锁。
locks.Global.acquireWaitCount
locks.Global.acquireWaitCount.r: 代表意向共享(IS)锁。
locks.Global.timeAcquiringMicros: 锁获取的累积等待时间(以微秒为单位)。locks.<type>.timeAcquiringMicros
除以locks.<type>.acquireWaitCount得出特定锁模式的大致平均等待时间。
locks.Global.timeAcquiringMicros.r: 代表意向共享(IS)锁。
locks.Database: 代表数据库锁。
locks.Database.acquireCount: 在指定模式下获取锁的次数。
locks.Database.acquireCount.r: 代表意向共享(IS)锁。
locks.Database.acquireCount.w: 代表意图独占 (IX) 锁。
locks.Database.acquireCount.W: 代表独占 (X) 锁。
locks.Collection: 代表集合锁。
locks.Collection.acquireCount:在指定模式下获取锁的次数。
locks.Collection.acquireCount.r: 代表意向共享(IS)锁。
locks.Collection.acquireCount.w: 代表意图独占 (IX) 锁。
locks.Collection.acquireCount.R: 代表共享(S)锁。
locks.Collection.acquireCount.W: 代表独占 (X) 锁。
locks.Mutex: 代表互斥锁。
locks.Mutex.acquireCount:在指定模式下获取锁的次数。
locks.Mutex.acquireCount.r: 代表意向共享(IS)锁。
locks.oplog: 表示 oplog
locks.oplog.acquireCount:在指定模式下获取锁的次数。
locks.oplog.acquireCount.r: 代表意向共享(IS)锁。
locks.oplog.acquireCount.w: 代表意图独占 (IX) 锁。
:param mongodb_status_dict:
:return:
"""
mongodb_locks = mongodb_status_dict['locks']
return json.dumps(mongodb_locks)
@staticmethod
def mongodb_status_logicalSessionRecordCache(mongodb_status_dict):
"""
logicalSessionRecordCache: 提供有关服务器会话
logicalSessionRecordCache.activeSessionsCount: 自上次刷新周期以来,mongod 或 mongos 实例在内存中缓存的所有活动本地会话的数量。
logicalSessionRecordCache.sessionsCollectionJobCount: 跟踪刷新进程在 config.system.sessions 集合上运行的次数的数字。
logicalSessionRecordCache.lastSessionsCollectionJobDurationMillis: 上次刷新的时间长度(以毫秒为单位)。
logicalSessionRecordCache.lastSessionsCollectionJobTimestamp: 上次刷新的时间。
logicalSessionRecordCache.datetime.datetime:
logicalSessionRecordCache.lastSessionsCollectionJobEntriesRefreshed: 上次刷新期间刷新的会话数。
logicalSessionRecordCache.lastSessionsCollectionJobEntriesEnded: 上次刷新期间结束的会话数。
logicalSessionRecordCache.lastSessionsCollectionJobCursorsClosed: 上次刷新 config.system.sessions 集合时关闭的游标数。
logicalSessionRecordCache.transactionReaperJobCount: 跟踪事务记录清理过程在 config.transactions 集合上运行次数的数字。
logicalSessionRecordCache.lastTransactionReaperJobDurationMillis: 上次清理事务记录的时间长度(以毫秒为单位)。
logicalSessionRecordCache.lastTransactionReaperJobTimestamp: 上次清理事务记录的时间。
logicalSessionRecordCache.lastTransactionReaperJobEntriesCleanedUp: 在上次清理事务记录期间删除的 config.transactions
集合中的条目数。
logicalSessionRecordCache.sessionCatalogSize:对于 mongod 实例:config.transactions 条目的内存缓存的大小。这对应于会话在
localLogicalSessionTimeoutMinutes 内未过期的可重试写入或事务。
对于 mongos 实例:包含最近 localLogicalSessionTimeoutMinutes 间隔内的事务的会话的内存缓存数量。
:param mongodb_status_dict:
:return:
"""
mongodb_logicalSessionRecordCache = str(mongodb_status_dict['logicalSessionRecordCache'])
return json.dumps(mongodb_logicalSessionRecordCache)
@staticmethod
def mongodb_status_network(mongodb_status_dict):
"""
network: 报告 MongoDB 网络使用相关数据的文档。这些统计信息仅衡量入口连接,特别是 mongod 或 mongos 通过客户端或其他 mongod 或
mongos 实例发起的网络连接看到的流量。由该 mongod 或 mongos 实例启动的网络连接(特别是出口连接)产生的流量值不 包含在这些统计信息中。
network.bytesIn: 服务器通过客户端或其他收到或mongod实例发起的网络连接mongos的逻辑字节总数。逻辑字节是给定文件包含的确切字节数。
network.bytesOut: 服务器通过客户端或其他发送或mongod实例启动的网络连接mongos的逻辑字节总数。逻辑字节对应于给定文件包含的字节数。
network.physicalBytesIn: 服务器通过客户端或其他收到或mongod实例发起的网络连接mongos的物理字节总数。物理字节是实际驻留在磁盘上的字节数。
network.physicalBytesOut: 服务器通过客户端或其他发送或mongod实例发起的网络连接mongos的物理字节总数。物理字节是实际驻留在磁盘上的字节数。
network.numSlowDNSOperations: 耗时超过 1 秒的 DNS 解析操作的总数。
network.numSlowSSLOperations: 用时超过 1 秒的 SSL 握手操作总数。
network.numRequests: 服务器收到的不同请求总数。使用此值为network.bytesIn和network.bytesOut值提供上下文,以确保 MongoDB 的
网络利用率与预期和应用程序使用情况一致。
network.tcpFastOpen: 一份报告有关 MongoDB 支持和使用 TCP 快速打开 (TFO) 连接的数据的文档。
network.tcpFastOpen.kernelSetting: 仅 Linux
返回 /proc/sys/net/ipv4/tcp_fastopen 的值:
0 - 系统已禁用 TCP 快速打开。
1 - 为传出连接启用 TCP 快速打开。
2 - 为传入连接启用 TCP 快速打开。
3 - 为传入和传出连接启用“TCP 快速打开”
network.tcpFastOpen.serverSupported: 如果主机操作系统支持入站 TCP 快速打开 (TFO) 连接,则返回 true。
如果主机操作系统不支持入站 TCP 快速打开 (TFO) 连接,则返回 false。
network.tcpFastOpen.clientSupported: 如果主机操作系统支持出站 TCP 快速打开 (TFO) 连接,则返回 true。
如果主机操作系统不支持出站 TCP 快速打开 (TFO) 连接,则返回 false。
network.tcpFastOpen.accepted: 自 mongod 或 mongos 上次启动以来已接受的到 mongod 或 mongos 的传入 TCP 快速打开 (TFO) 连接总数。
network.compression: 一个文档,其中报告每个网络压缩程序库压缩和解压缩的数据量。
network.compression.snappy: 一个文档,返回有关使用 snappy 库压缩和解压缩的字节数的统计信息。
network.compression.snappy.compressor
network.compression.snappy.compressor.bytesIn
network.compression.snappy.compressor.bytesOut
network.compression.snappy.decompressor
network.compression.snappy.decompressor.bytesIn
network.compression.snappy.decompressor.bytesOut
network.compression.zstd: 一个文档,返回有关使用 zstd 库压缩和解压缩的字节数的统计信息。
network.compression.zstd.compressor
network.compression.zstd.compressor.bytesIn
network.compression.zstd.compressor.bytesOut
network.compression.zstd.decompressor
network.compression.zstd.decompressor.bytesIn
network.compression.zstd.decompressor.bytesOut
network.compression.zlib: 一个文档,返回有关使用 zlib 库压缩和解压缩的字节数的统计信息。
network.compression.zlib.compressor
network.compression.zlib.compressor.bytesIn
network.compression.zlib.compressor.bytesOut
network.compression.zlib.decompressor
network.compression.zlib.decompressor.bytesIn
network.compression.zlib.decompressor.bytesIn
network.serviceExecutorTaskStats
network.serviceExecutorTaskStats.executor
network.serviceExecutorTaskStats.threadsRunning
:param mongodb_status_dict:
:return:
"""
mongodb_network = mongodb_status_dict['network']
return json.dumps(mongodb_network)
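The compressor counters above pair uncompressed input (bytesIn) with compressed output (bytesOut), so the wire savings of a compressor can be derived directly. A minimal sketch with made-up numbers:

```python
# Illustrative network.compression sub-document; the values are invented.
compression = {
    "snappy": {
        "compressor": {"bytesIn": 1_000_000, "bytesOut": 420_000},
        "decompressor": {"bytesIn": 300_000, "bytesOut": 710_000},
    }
}

comp = compression["snappy"]["compressor"]
# bytesIn is the uncompressed input, bytesOut the compressed output,
# so bytesOut / bytesIn is the achieved compression ratio.
ratio = comp["bytesOut"] / comp["bytesIn"]
print(f"snappy wire savings: {1 - ratio:.0%}")
```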
@staticmethod
def mongodb_status_opLatencies(mongodb_status_dict):
"""
opLatencies:包含整个实例的操作延迟。有关此文档的说明,请参阅latencyStats文档。从 MongoDB6 2opLatenciesmongodmongos开始。
mongos 和 实例的 指标报告。 报告的延迟包括操作延迟时间以及mongod 和mongos 实例之间的通信时间。
opLatencies.reads: 读取请求的延迟统计信息。
opLatencies.reads.latency:
opLatencies.reads.ops:
opLatencies.writes: 写入操作的延迟统计信息。
opLatencies.writes.latency:
opLatencies.writes.ops:
opLatencies.commands: 数据库命令的延迟统计信息。
opLatencies.commands.latency:
opLatencies.commands.ops:
opLatencies.transactions: 数据库事务的延迟统计信息。
opLatencies.transactions.latency:
opLatencies.transactions.ops:
:param mongodb_status_dict:
:return:
"""
mongodb_opLatencies = mongodb_status_dict['opLatencies']
return json.dumps(mongodb_opLatencies)
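Per the latencyStats layout referenced above, latency is a cumulative total (microseconds) and ops is a counter, so the mean latency per operation is the quotient. A minimal sketch with made-up numbers:

```python
# Illustrative opLatencies sub-document; latency is cumulative microseconds.
opLatencies = {
    "reads": {"latency": 5_000_000, "ops": 2_500},
    "writes": {"latency": 0, "ops": 0},
}

def avg_latency_micros(section):
    # Guard against division by zero when no operations of this type have run.
    return section["latency"] / section["ops"] if section["ops"] else 0.0

print(avg_latency_micros(opLatencies["reads"]))  # 2000.0 microseconds per read
```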
@staticmethod
def mongodb_status_opReadConcernCounters(mongodb_status_dict):
"""
opReadConcernCounters:一个文档,其中报告自 实例上次启动以来,查询操作为该实例指定的。
opReadConcernCounters.available: 指定读关注级别 "available"
opReadConcernCounters.linearizable: 指定读关注级别 "linearizable"
opReadConcernCounters.local :指定读关注级别 "local"
opReadConcernCounters.majority: 指定读关注级别 "local"
opReadConcernCounters.snapshot: 指定读关注级别 "snapshot"
opReadConcernCounters.none: 未指定读关注级别而是使用默认读关注级别
:param mongodb_status_dict:
:return:
"""
mongodb_opReadConcernCounters = mongodb_status_dict['opReadConcernCounters']
return json.dumps(mongodb_opReadConcernCounters)
@staticmethod
def mongodb_status_opcounters(mongodb_status_dict):
"""
opcounters: 自 mongod 实例上次启动以来的数据库操作。这些数字将随着时间的推移而增长,直到下一次重新启动。随时间推移分析
这些值,跟踪数据库的使用情况。
opcounters中的数据将影响多个文档的操作(例如批量插入或多重更新操作)视为单个操作。有关更细粒度的文档级操作跟踪,请参阅
metrics.document 。此外,这些值还反映接收到的操作,即使操作不成功,也会递增。
opcounters.insert: 自 mongod 实例上次启动以来收到的插入操作总数。
opcounters.query: 自 mongod 实例上次启动以来收到的查询总数。从 MongoDB 7.1 开始,聚合算作查询操作,并递增该值。
opcounters.update: 自 mongod 实例上次启动以来收到的更新操作总数。
opcounters.delete: 自 mongod 实例上次启动以来的删除操作总数。
opcounters.getmore: 自 mongod 实例上次启动以来 getMore 操作的总数。即使查询计数较低,此计数器读数也可能很高。
从节点发送 getMore 操作,作为复制进程的一部分。
opcounters.command: 自 mongod 实例上次启动以来向数据库发出的命令总数。
:param mongodb_status_dict:
:return:
"""
mongodb_opcounters = mongodb_status_dict['opcounters']
return json.dumps(mongodb_opcounters)
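Because opcounters values only grow until restart, throughput is usually derived by differencing two samples taken a known interval apart. A minimal sketch with made-up numbers:

```python
# Two opcounters samples taken interval_s seconds apart; the values are invented.
prev = {"insert": 1000, "query": 5000, "update": 300, "delete": 20, "getmore": 800, "command": 9000}
curr = {"insert": 1600, "query": 5900, "update": 420, "delete": 26, "getmore": 950, "command": 9900}
interval_s = 60

# Per-second rate for each operation type over the sampling interval.
rates = {op: (curr[op] - prev[op]) / interval_s for op in prev}
print(rates["insert"])  # 10.0 inserts per second
```

Note that counters reset on restart, so a negative delta indicates the process restarted between samples and the pair should be discarded.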
@staticmethod
def mongodb_status_opcountersRepl(mongodb_status_dict):
"""
opcountersRepl:自 mongod 实例上次启动以来的数据库复制操作。这些值只有在当前主机是副本集成员时才会出现。由于 MongoDB 在复制期间序列
化操作的方式,这些值将与opcounters值不同。有关复制的更多信息,请参阅复制。这些数字将随时间推移而增长,以响应数据库的使用,直到下次重新启动。
随时间推移分析这些值,跟踪数据库的使用情况。返回的 opcountersRepl.* 值的类型为 NumberLong。
opcountersRepl。insert: 自 mongod 实例上次启动以来复制的插入操作总数。
opcountersRepl。query: 自 mongod 实例上次启动以来复制的查询总数.
opcountersRepl。update: 自 mongod 实例上次启动以来复制的更新操作总数。
opcountersRepl。delete: 自 mongod 实例上次启动以来复制的删除操作总数。
opcountersRepl。getmore: 自 mongod 实例上次启动以来 getMore 操作的总数。即使查询计数较低,此计数器读数也可能很高。从节点发送
getMore 操作,作为复制进程的一部分。
opcountersRepl。command: 自 mongod 实例上次启动以来向数据库发出的已复制命令总数。
:param mongodb_status_dict:
:return:
"""
mongodb_opcountersRepl = mongodb_status_dict['opcountersRepl']
return json.dumps(mongodb_opcountersRepl)
@staticmethod
def mongodb_status_oplogTruncation(mongodb_status_dict):
"""
oplogTruncation:报告 oplog 截断情况。该字段仅在当前实例是副本集节点并且使用 WiredTiger 存储引擎或内存存储引擎
可用于 WiredTiger存储引擎
oplogTruncation。totalTimeProcessingMicros: 扫描或采样 oplog 以确定 oplog 截断点所用的总时间(以微秒为单位)。
totalTimeProcessingMicros仅当mongod 实例在现有数据文件上启动时, 才有意义(即对 内存存储引擎 没有意义)。
请参阅oplogTruncation.processingMethod.
oplogTruncation。processingMethod: 启动时用于确定 oplog 截断点的方法。该值可以是 "sampling" 或 "scanning"。
processingMethod仅当mongod 实例在现有数据文件上启动时, 才有意义(即对 内存存储引擎 没有意义)。
oplogTruncation。totalTimeTruncatingMicros: 执行 oplog 截断所花费的累积时间,以微秒为单位。
oplogTruncation。truncateCount: oplog 截断的累积次数。
:param mongodb_status_dict:
:return:
"""
mongodb_oplogTruncation = mongodb_status_dict['oplogTruncation']
return json.dumps(mongodb_oplogTruncation)
@staticmethod
def mongodb_status_repl(mongodb_status_dict):
"""
repl:报告副本集配置的文档。 repl仅在当前主机是副本集时出现。有关复制的更多信息,请参阅复制。
repl.topologyVersion
repl.topologyVersion.processId
repl.topologyVersion.counter
repl.hosts: 当前副本集节点的主机名和端口信息 ("host:port") 的数组.
repl.arbiters
repl.setName: 一个体现当前副本集名称的字符串。该值反映了 --replSet 命令行参数或配置文件中的 replSetName 值。
repl.setVersion:
repl.ismaster:
repl.secondary: 一个布尔值,指示当前节点是否为副本集的从节点。
repl.primary: 副本集当前"host:port"节点的主机名和端口信息 (主)。
repl.me: 副本集当前节点的主机名和端口信息 ("host:port" )。
repl.electionId:
repl.lastWrite:
repl.lastWrite.opTime:
repl.lastWrite.opTime.ts:
repl.lastWrite.opTime.t:
repl.lastWriteDate:
repl.majorityOpTime:
repl.majorityOptime.ts:
repl.majorityOptime.t:
repl.majorityWriteDate:
repl.rbid: 回滚标识符。用于确定此 mongod 实例是否发生了回滚。
:param mongodb_status_dict:
:return:
"""
mongodb_repl = mongodb_status_dict['repl']
return json.dumps(mongodb_repl)
@staticmethod
def mongodb_status_scramCache(mongodb_status_dict):
"""
scramCache:
scramCache.SCRAM-SHA-1
scramCache.SCRAM-SHA-1.count
scramCache.SCRAM-SHA-1.hits
scramCache.SCRAM-SHA-1.misses
scramCache.SCRAM-SHA-256
scramCache.SCRAM-SHA-256.count
scramCache.SCRAM-SHA-256.hits
scramCache.SCRAM-SHA-256.misses
:param mongodb_status_dict:
:return:
"""
mongodb_scramCache = mongodb_status_dict['scramCache']
return json.dumps(mongodb_scramCache)
@staticmethod
def mongodb_status_security(mongodb_status_dict):
"""
        security: a document reporting on: the number of times a given authentication mechanism has been used to
        authenticate against the mongod or mongos instance, and the mongod/mongos instance's TLS/SSL certificate
        (only appears for mongod or mongos instances with support for TLS).
        security.authentication
        security.authentication.mechanisms: a document reporting the number of times a given authentication
        mechanism has been used to authenticate against the mongod or mongos instance. The values in the
        document distinguish standard authentication from speculative authentication.
        The fields in the mechanisms document depend on the configuration of the authenticationMechanisms
        parameter. The mechanisms document includes a field for every authentication mechanism supported by the
        mongod or mongos instance.
        security.authentication.mechanisms.MONGODB-X509: a document reporting the number of times x.509 has been
        used to authenticate against the mongod or mongos instance. Includes the total number of x.509
        authentication attempts and the subset of speculative attempts. [1]
        security.authentication.mechanisms.MONGODB-X509.speculativeAuthenticate
        security.authentication.mechanisms.MONGODB-X509.speculativeAuthenticate.received: number of speculative
        authentication attempts received using x.509. Includes both successful and failed speculative
        authentication attempts.
        security.authentication.mechanisms.MONGODB-X509.speculativeAuthenticate.successful: number of successful
        speculative authentication attempts received using x.509.
        security.authentication.mechanisms.MONGODB-X509.authenticate
        security.authentication.mechanisms.MONGODB-X509.authenticate.received: number of successful and failed
        authentication attempts received using x.509. This value includes speculative authentication attempts
        received using x.509.
        security.authentication.mechanisms.MONGODB-X509.authenticate.successful: number of successful
        authentication attempts received using x.509. This value includes successful speculative authentication
        attempts using x.509.
        P.S. Speculative authentication minimizes the number of network round trips during the authentication
        process, optimizing performance.
security.authentication.mechanisms.SCRAM-SHA-1
security.authentication.mechanisms.SCRAM-SHA-1.speculativeAuthenticate
security.authentication.mechanisms.SCRAM-SHA-1.speculativeAuthenticate.received
security.authentication.mechanisms.SCRAM-SHA-1.speculativeAuthenticate.successful
security.authentication.mechanisms.SCRAM-SHA-1.authenticate
security.authentication.mechanisms.SCRAM-SHA-1.authenticate.received
security.authentication.mechanisms.SCRAM-SHA-1.authenticate.successful
security.authentication.mechanisms.SCRAM-SHA-256
security.authentication.mechanisms.SCRAM-SHA-256.speculativeAuthenticate
security.authentication.mechanisms.SCRAM-SHA-256.speculativeAuthenticate.received
security.authentication.mechanisms.SCRAM-SHA-256.speculativeAuthenticate.successful
security.authentication.mechanisms.SCRAM-SHA-256.authenticate
security.authentication.mechanisms.SCRAM-SHA-256.authenticate.received
security.authentication.mechanisms.SCRAM-SHA-256.authenticate.successful
:param mongodb_status_dict:
:return:
"""
mongodb_security = mongodb_status_dict['security']
return json.dumps(mongodb_security)
@staticmethod
def mongodb_status_storageEngine(mongodb_status_dict):
"""
        storageEngine: a document with data about the current storage engine.
        storageEngine.name: name of the current storage engine.
        storageEngine.supportsCommittedReads: a boolean indicating whether the storage engine supports
        "majority" read concern.
        storageEngine.oldestRequiredTimestampForCrashRecovery
        storageEngine.supportsPendingDrops
        storageEngine.dropPendingIdents
        storageEngine.supportsTwoPhaseIndexBuild
        storageEngine.supportsSnapshotReadConcern
        storageEngine.readOnly
        storageEngine.persistent: a boolean indicating whether the storage engine persists data to disk.
storageEngine.backupCursorOpen
:param mongodb_status_dict:
:return:
"""
mongodb_storageEngine = mongodb_status_dict['storageEngine']
return json.dumps(mongodb_storageEngine)
@staticmethod
def mongodb_status_tcmalloc(mongodb_status_dict):
"""
tcmalloc:
tcmalloc.generic
tcmalloc.generic.current_allocated_bytes
tcmalloc.generic.heap_size
tcmalloc.tcmalloc
tcmalloc.tcmalloc.pageheap_free_bytes
tcmalloc.tcmalloc.pageheap_unmapped_bytes
tcmalloc.tcmalloc.max_total_thread_cache_bytes
tcmalloc.tcmalloc.current_total_thread_cache_bytes
tcmalloc.tcmalloc.total_free_bytes
tcmalloc.tcmalloc.central_cache_free_bytes
tcmalloc.tcmalloc.transfer_cache_free_bytes
tcmalloc.tcmalloc.thread_cache_free_bytes
tcmalloc.tcmalloc.aggressive_memory_decommit
tcmalloc.tcmalloc.pageheap_committed_bytes
tcmalloc.tcmalloc.pageheap_scavenge_count
tcmalloc.tcmalloc.pageheap_commit_count
tcmalloc.tcmalloc.pageheap_total_commit_bytes
tcmalloc.tcmalloc.pageheap_decommit_count
tcmalloc.tcmalloc.pageheap_total_decommit_bytes
tcmalloc.tcmalloc.pageheap_reserve_count
tcmalloc.tcmalloc.pageheap_total_reserve_bytes
tcmalloc.tcmalloc.spinlock_total_delay_ns
tcmalloc.tcmalloc.release_rate
        tcmalloc.tcmalloc.formattedString: this value is long; if you save it to a database, mind the width of
        the database column.
:param mongodb_status_dict:
:return:
"""
mongodb_tcmalloc = mongodb_status_dict['tcmalloc']
return json.dumps(mongodb_tcmalloc)
@staticmethod
def mongodb_status_trafficRecording(mongodb_status_dict):
"""
trafficRecording:
trafficRecording.running:
:param mongodb_status_dict:
:return:
"""
mongodb_trafficRecording = mongodb_status_dict['trafficRecording']
return json.dumps(mongodb_trafficRecording)
@staticmethod
def mongodb_status_transactions(mongodb_status_dict):
"""
        transactions: when run on a mongod, a document with data about retryable writes and transactions. When
        run on a mongos, a document with data about the transactions run on the instance.
        transactions.retriedCommandsCount: available on mongod only. Total number of retries received after the
        corresponding retryable write command was already committed. That is, a retryable write is attempted
        even though the previous write already succeeded and has an associated record for the transaction and
        session in the config.transactions collection, for example when the initial write response to the client
        was lost.
        transactions.retriedStatementsCount: available on mongod only. Total number of write statements
        associated with the retried commands in transactions.retriedCommandsCount.
        transactions.transactionsCollectionWriteCount: available on mongod only. Total number of writes to the
        config.transactions collection triggered when a new retryable write statement commits. For update and
        delete commands, since only single-document operations are retryable, there is one write per statement.
        For insert operations, there is one write per batch of inserted documents, unless a failure causes each
        document to be inserted separately. This total includes writes to a server's config.transactions
        collection that occur as part of a migration.
        transactions.currentActive: available on mongod and mongos. Total number of open transactions currently
        executing a command.
        transactions.currentInactive: available on mongod and mongos. Total number of open transactions not
        currently executing a command.
        transactions.currentOpen: available on mongod and mongos. Total number of open transactions. A
        transaction is opened when the first command runs as part of it, and it stays open until it commits or
        aborts.
        transactions.totalAborted: for a mongod, the total number of transactions aborted on the instance since
        it last started; for a mongos, the total number of transactions aborted through the instance since it
        last started.
        transactions.totalCommitted: for a mongod, the total number of transactions committed on the instance
        since it last started; for a mongos, the total number of transactions committed through the instance
        since it last started.
        transactions.totalStarted: for a mongod, the total number of transactions started on the instance since
        it last started; for a mongos, the total number of transactions started on the instance since it last
        started.
        transactions.totalPrepared: available on mongod only. Total number of transactions in prepared state on
        this server since the mongod process last started.
        transactions.totalPreparedThenCommitted: available on mongod only. Total number of transactions prepared
        and committed on this server since the mongod process last started.
        transactions.totalPreparedThenAborted: available on mongod only. Total number of transactions prepared
        and aborted on this server since the mongod process last started.
        transactions.currentPrepared: available on mongod only. Number of transactions currently in prepared
        state on this server.
:param mongodb_status_dict:
:return:
"""
mongodb_transactions = mongodb_status_dict['transactions']
return json.dumps(mongodb_transactions)
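Since totalStarted, totalCommitted, and totalAborted above are all counters since last start, the abort rate falls out of a simple quotient. A minimal sketch with made-up numbers:

```python
# Illustrative transactions sub-document; the values are invented.
transactions = {"totalStarted": 500, "totalCommitted": 470, "totalAborted": 30, "currentOpen": 0}

# Fraction of started transactions that ended in an abort; guard against a
# freshly started instance where totalStarted is still zero.
started = transactions["totalStarted"]
abort_rate = transactions["totalAborted"] / started if started else 0.0
print(f"abort rate: {abort_rate:.1%}")
```

A rising abort rate often points at write conflicts or transactions exceeding their lifetime limit, which serverStatus alone cannot distinguish.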
@staticmethod
def mongodb_status_transportSecurity(mongodb_status_dict):
"""
        transportSecurity: the number of TLS connections established to this mongod or mongos instance, grouped
        by TLS protocol version.
transportSecurity.1.0
transportSecurity.1.1
transportSecurity.1.2
transportSecurity.1.3
transportSecurity.unknown
:param mongodb_status_dict:
:return:
"""
mongodb_transportSecurity = mongodb_status_dict['transportSecurity']
return json.dumps(mongodb_transportSecurity)
@staticmethod
def mongodb_status_twoPhaseCommitCoordinator(mongodb_status_dict):
"""
twoPhaseCommitCoordinator:
twoPhaseCommitCoordinator.totalCreated
twoPhaseCommitCoordinator.totalStartedTwoPhaseCommit
twoPhaseCommitCoordinator.totalAbortedTwoPhaseCommit
twoPhaseCommitCoordinator.totalCommittedTwoPhaseCommit
twoPhaseCommitCoordinator.currentInSteps
twoPhaseCommitCoordinator.currentInSteps.writingParticipantList
twoPhaseCommitCoordinator.currentInSteps.waitingForVotes
twoPhaseCommitCoordinator.currentInSteps.writingDecision
twoPhaseCommitCoordinator.currentInSteps.waitingForDecisionAcks
twoPhaseCommitCoordinator.currentInSteps.deletingCoordinatorDoc
:param mongodb_status_dict:
:return:
"""
mongodb_twoPhaseCommitCoordinator = mongodb_status_dict['twoPhaseCommitCoordinator']
return json.dumps(mongodb_twoPhaseCommitCoordinator)
@staticmethod
def mongodb_status_wiredTiger(mongodb_status_dict):
"""
        Because wiredTiger has so many fields, it is split up here by wiredTiger's second-level keys.
        wiredTiger: appears only when using the WiredTiger storage engine.
        wiredTiger.uri: a string used internally by MongoDB.
:param mongodb_status_dict:
:return:
"""
mongodb_wiredTiger = mongodb_status_dict['wiredTiger']
return json.dumps(mongodb_wiredTiger)
@staticmethod
def mongodb_status_wiredTiger_block_manager(mongodb_status_dict):
"""
        wiredTiger.block-manager: a document returning statistics on block manager operations.
wiredTiger.block-manager.blocks.pre-loaded
wiredTiger.block-manager.blocks.read
wiredTiger.block-manager.blocks.written
wiredTiger.block-manager.bytes.read
wiredTiger.block-manager.bytes.read via memory map API
wiredTiger.block-manager.bytes.read via system call API
wiredTiger.block-manager.bytes.written
wiredTiger.block-manager.bytes written for checkpoint
wiredTiger.block-manager.bytes written via memory map API
wiredTiger.block-manager.bytes written via system call API
wiredTiger.block-manager.mapped blocks read
wiredTiger.block-manager.mapped bytes read
wiredTiger.block-manager.number of times the file was remapped because it changed size via fallocate or truncate
wiredTiger.block-manager.number of times the region was remapped via write
:param mongodb_status_dict:
:return:
"""
mongodb_wiredTiger_block_manager = mongodb_status_dict['wiredTiger']['block-manager']
return json.dumps(mongodb_wiredTiger_block_manager)
@staticmethod
def mongodb_status_wiredTiger_cache(mongodb_status_dict):
"""
        wiredTiger.cache: a document returning statistics on the cache and page evictions from the cache.
wiredTiger.cache.application threads page read from disk to cache count
wiredTiger.cache.application threads page read from disk to cache time (usecs)
wiredTiger.cache.application threads page write from cache to disk count
wiredTiger.cache.application threads page write from cache to disk time (usecs)
wiredTiger.cache.bytes allocated for updates
wiredTiger.cache.bytes belonging to page images in the cache
wiredTiger.cache.bytes belonging to the history store table in the cache
        wiredTiger.cache.bytes currently in the cache: size, in bytes, of the data currently in the cache. This
        value should not be greater than the maximum bytes configured value.
wiredTiger.cache.bytes dirty in the cache cumulative
wiredTiger.cache.bytes not belonging to page images in the cache
wiredTiger.cache.bytes read into cache
wiredTiger.cache.bytes written from cache
wiredTiger.cache.cache overflow score
wiredTiger.cache.checkpoint blocked page eviction
wiredTiger.cache.checkpoint of history store file blocked non-history store page eviction
wiredTiger.cache.eviction calls to get a page
wiredTiger.cache.eviction calls to get a page found queue empty
wiredTiger.cache.eviction calls to get a page found queue empty after locking
wiredTiger.cache.eviction currently operating in aggressive mode
wiredTiger.cache.eviction empty score
        wiredTiger.cache.eviction gave up due to detecting an out of order on disk value behind the last update
        on the chain
        wiredTiger.cache.eviction gave up due to detecting an out of order tombstone ahead of the selected on
        disk update
        wiredTiger.cache.eviction gave up due to detecting an out of order tombstone ahead of the selected on
        disk update after validating the update chain
        wiredTiger.cache.eviction gave up due to detecting out of order timestamps on the update chain after the
        selected on disk update
wiredTiger.cache.eviction passes of a file
wiredTiger.cache.eviction server candidate queue empty when topping up
wiredTiger.cache.eviction server candidate queue not empty when topping up
wiredTiger.cache.eviction server evicting pages
wiredTiger.cache.eviction server slept, because we did not make progress with eviction
wiredTiger.cache.eviction server unable to reach eviction goal
wiredTiger.cache.eviction server waiting for a leaf page
wiredTiger.cache.eviction state
wiredTiger.cache.eviction walk most recent sleeps for checkpoint handle gathering
wiredTiger.cache.eviction walk target pages histogram - 0-9
wiredTiger.cache.eviction walk target pages histogram - 10-31
wiredTiger.cache.eviction walk target pages histogram - 128 and higher
wiredTiger.cache.eviction walk target pages histogram - 32-63
wiredTiger.cache.eviction walk target pages histogram - 64-128
wiredTiger.cache.eviction walk target pages reduced due to history store cache pressure
wiredTiger.cache.eviction walk target strategy both clean and dirty pages
wiredTiger.cache.eviction walk target strategy only clean pages
wiredTiger.cache.eviction walk target strategy only dirty pages
wiredTiger.cache.eviction walks gave up because they restarted their walk twice
wiredTiger.cache.eviction walks gave up because they saw too many pages and found no candidates
wiredTiger.cache.eviction walks gave up because they saw too many pages and found too few candidates
wiredTiger.cache.eviction walks reached end of tree
wiredTiger.cache.eviction walks restarted
wiredTiger.cache.eviction walks started from root of tree
wiredTiger.cache.eviction walks started from saved location in tree
wiredTiger.cache.eviction worker thread active
wiredTiger.cache.eviction worker thread created
wiredTiger.cache.eviction worker thread evicting pages
wiredTiger.cache.eviction worker thread removed
wiredTiger.cache.eviction worker thread stable number
wiredTiger.cache.files with active eviction walks
wiredTiger.cache.files with new eviction walks started
wiredTiger.cache.force re-tuning of eviction workers once in a while
wiredTiger.cache.forced eviction - history store pages failed to evict while session has history store cursor open
wiredTiger.cache.forced eviction - history store pages selected while session has history store cursor open
wiredTiger.cache.forced eviction - history store pages successfully evicted while session has history store cursor open
wiredTiger.cache.forced eviction - pages evicted that were clean count
wiredTiger.cache.forced eviction - pages evicted that were clean time (usecs)
wiredTiger.cache.forced eviction - pages evicted that were dirty count
wiredTiger.cache.forced eviction - pages evicted that were dirty time (usecs)
wiredTiger.cache.forced eviction - pages selected because of a large number of updates to a single item
wiredTiger.cache.forced eviction - pages selected because of too many deleted items count
wiredTiger.cache.forced eviction - pages selected count
wiredTiger.cache.forced eviction - pages selected unable to be evicted count
wiredTiger.cache.forced eviction - pages selected unable to be evicted time
wiredTiger.cache.hazard pointer blocked page eviction
wiredTiger.cache.hazard pointer check calls
wiredTiger.cache.hazard pointer check entries walked
wiredTiger.cache.hazard pointer maximum array length
wiredTiger.cache.history store score
wiredTiger.cache.history store table insert calls
wiredTiger.cache.history store table insert calls that returned restart
wiredTiger.cache.history store table max on-disk size
wiredTiger.cache.history store table on-disk size
wiredTiger.cache.history store table out-of-order resolved updates that lose their durable timestamp
wiredTiger.cache.history store table out-of-order updates that were fixed up by reinserting with the fixed timestamp
wiredTiger.cache.history store table reads
wiredTiger.cache.history store table reads missed
wiredTiger.cache.history store table reads requiring squashed modifies
wiredTiger.cache.history store table truncation by rollback to stable to remove an unstable update
wiredTiger.cache.history store table truncation by rollback to stable to remove an update
wiredTiger.cache.history store table truncation to remove an update
wiredTiger.cache.history store table truncation to remove range of updates due to key being removed from the data page during reconciliation
wiredTiger.cache.history store table truncation to remove range of updates due to out-of-order timestamp update on data page
wiredTiger.cache.history store table writes requiring squashed modifies
wiredTiger.cache.in-memory page passed criteria to be split
wiredTiger.cache.in-memory page splits
wiredTiger.cache.internal pages evicted
wiredTiger.cache.internal pages queued for eviction
wiredTiger.cache.internal pages seen by eviction walk
wiredTiger.cache.internal pages seen by eviction walk that are already queued
wiredTiger.cache.internal pages split during eviction
wiredTiger.cache.leaf pages split during eviction
wiredTiger.cache.maximum bytes configured: the maximum cache size.
wiredTiger.cache.maximum page size at eviction
wiredTiger.cache.modified pages evicted
wiredTiger.cache.modified pages evicted by application threads
wiredTiger.cache.operations timed out waiting for space in cache
wiredTiger.cache.overflow pages read into cache
wiredTiger.cache.page split during eviction deepened the tree
wiredTiger.cache.page written requiring history store records
wiredTiger.cache.pages currently held in the cache
wiredTiger.cache.pages evicted by application threads
wiredTiger.cache.pages evicted in parallel with checkpoint
wiredTiger.cache.pages queued for eviction
wiredTiger.cache.pages queued for eviction post lru sorting
wiredTiger.cache.pages queued for urgent eviction
wiredTiger.cache.pages queued for urgent eviction during walk
wiredTiger.cache.pages queued for urgent eviction from history store due to high dirty content
wiredTiger.cache.pages read into cache: the number of pages read into the cache. Together, wiredTiger.cache.pages read into cache and wiredTiger.cache.pages written from cache provide an overview of I/O activity.
wiredTiger.cache.pages read into cache after truncate
wiredTiger.cache.pages read into cache after truncate in prepare state
wiredTiger.cache.pages requested from the cache
wiredTiger.cache.pages seen by eviction walk
wiredTiger.cache.pages seen by eviction walk that are already queued
wiredTiger.cache.pages selected for eviction unable to be evicted
wiredTiger.cache.pages selected for eviction unable to be evicted as the parent page has overflow items
wiredTiger.cache.pages selected for eviction unable to be evicted because of active children on an internal page
wiredTiger.cache.pages selected for eviction unable to be evicted because of failure in reconciliation
wiredTiger.cache.pages selected for eviction unable to be evicted because of race between checkpoint and out of
order timestamps handling
wiredTiger.cache.pages walked for eviction
wiredTiger.cache.pages written from cache
wiredTiger.cache.pages written requiring in-memory restoration
wiredTiger.cache.percentage overhead
wiredTiger.cache.the number of times full update inserted to history store
wiredTiger.cache.the number of times reverse modify inserted to history store
wiredTiger.cache.tracked bytes belonging to internal pages in the cache
wiredTiger.cache.tracked bytes belonging to leaf pages in the cache
wiredTiger.cache.tracked dirty bytes in the cache: the size, in bytes, of dirty data in the cache. This value should be less than the bytes currently in the cache value.
wiredTiger.cache.tracked dirty pages in the cache
wiredTiger.cache.unmodified pages evicted: the primary statistic for page eviction.
:param mongodb_status_dict:
:return:
"""
mongodb_wiredTiger_cache = mongodb_status_dict['wiredTiger']['cache']
return json.dumps(mongodb_wiredTiger_cache)
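The cache counters above lend themselves to a simple pressure check. The sketch below is a minimal example that assumes the caller already has the deserialized `wiredTiger.cache` sub-document; the 90% fill and 5% dirty thresholds are illustrative defaults, not values mandated by MongoDB.

```python
def cache_pressure(cache_stats, fill_warn=0.90, dirty_warn=0.05):
    """Compute cache fill and dirty ratios from a wiredTiger.cache dict.

    fill ratio  = bytes currently in the cache / maximum bytes configured
    dirty ratio = tracked dirty bytes in the cache / maximum bytes configured
    """
    maximum = cache_stats['maximum bytes configured']
    fill = cache_stats['bytes currently in the cache'] / maximum
    dirty = cache_stats['tracked dirty bytes in the cache'] / maximum
    return {
        'fill_ratio': fill,
        'dirty_ratio': dirty,
        # Sustained fill > 90% or dirty > 5% suggests eviction pressure;
        # both thresholds are assumptions chosen for this sketch.
        'under_pressure': fill > fill_warn or dirty > dirty_warn,
    }

# Illustrative numbers, not real serverStatus output:
sample_cache = {
    'maximum bytes configured': 1024 * 1024 * 1024,
    'bytes currently in the cache': 700 * 1024 * 1024,
    'tracked dirty bytes in the cache': 20 * 1024 * 1024,
}
print(cache_pressure(sample_cache))
```
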
@staticmethod
def mongodb_status_wiredTiger_capacity(mongodb_status_dict):
"""
wiredTiger.capacity:
wiredTiger.capacity.background fsync file handles considered
wiredTiger.capacity.background fsync file handles synced
wiredTiger.capacity.background fsync time (msecs)
wiredTiger.capacity.bytes read
wiredTiger.capacity.bytes written for checkpoint
wiredTiger.capacity.bytes written for eviction
wiredTiger.capacity.bytes written for log
wiredTiger.capacity.bytes written total
wiredTiger.capacity.threshold to call fsync
wiredTiger.capacity.time waiting due to total capacity (usecs)
wiredTiger.capacity.time waiting during checkpoint (usecs)
wiredTiger.capacity.time waiting during eviction (usecs)
wiredTiger.capacity.time waiting during logging (usecs)
wiredTiger.capacity.time waiting during read (usecs)
:param mongodb_status_dict:
:return:
"""
mongodb_wiredTiger_capacity = mongodb_status_dict['wiredTiger']['capacity']
return json.dumps(mongodb_wiredTiger_capacity)
@staticmethod
def mongodb_status_wiredTiger_checkpoint_cleanup(mongodb_status_dict):
"""
wiredTiger.checkpoint-cleanup
wiredTiger.checkpoint-cleanup.pages added for eviction
wiredTiger.checkpoint-cleanup.pages removed
wiredTiger.checkpoint-cleanup.pages skipped during tree walk
wiredTiger.checkpoint-cleanup.pages visited
:param mongodb_status_dict:
:return:
"""
mongodb_wiredTiger_checkpoint_cleanup = mongodb_status_dict['wiredTiger']['checkpoint-cleanup']
return json.dumps(mongodb_wiredTiger_checkpoint_cleanup)
@staticmethod
def mongodb_status_wiredTiger_connection(mongodb_status_dict):
"""
wiredTiger.connection: returns statistics related to the WiredTiger connection.
wiredTiger.connection.auto adjusting condition resets
wiredTiger.connection.auto adjusting condition wait calls
wiredTiger.connection.auto adjusting condition wait raced to update timeout and skipped updating
wiredTiger.connection.detected system time went backwards
wiredTiger.connection.files currently open
wiredTiger.connection.hash bucket array size for data handles
wiredTiger.connection.hash bucket array size general
wiredTiger.connection.memory allocations
wiredTiger.connection.memory frees
wiredTiger.connection.memory re-allocations
wiredTiger.connection.pthread mutex condition wait calls
wiredTiger.connection.pthread mutex shared lock read-lock calls
wiredTiger.connection.pthread mutex shared lock write-lock calls
wiredTiger.connection.total fsync I/Os
wiredTiger.connection.total read I/Os
wiredTiger.connection.total write I/Os
:param mongodb_status_dict:
:return:
"""
mongodb_wiredTiger_connection = mongodb_status_dict['wiredTiger']['connection']
return json.dumps(mongodb_wiredTiger_connection)
@staticmethod
def mongodb_status_wiredTiger_cursor(mongodb_status_dict):
"""
wiredTiger.cursor: returns statistics on WiredTiger cursors.
wiredTiger.cursor.Total number of entries skipped by cursor next calls
wiredTiger.cursor.Total number of entries skipped by cursor prev calls
wiredTiger.cursor.Total number of entries skipped to position the history store cursor
wiredTiger.cursor.Total number of times a search near has exited due to prefix config
wiredTiger.cursor.cached cursor count
wiredTiger.cursor.cursor bulk loaded cursor insert calls
wiredTiger.cursor.cursor close calls that result in cache
wiredTiger.cursor.cursor create calls
wiredTiger.cursor.cursor insert calls
wiredTiger.cursor.cursor insert key and value bytes
wiredTiger.cursor.cursor modify calls
wiredTiger.cursor.cursor modify key and value bytes affected
wiredTiger.cursor.cursor modify value bytes modified
wiredTiger.cursor.cursor next calls
wiredTiger.cursor.cursor next calls that skip due to a globally visible history store tombstone
wiredTiger.cursor.cursor next calls that skip greater than or equal to 100 entries
wiredTiger.cursor.cursor next calls that skip less than 100 entries
wiredTiger.cursor.cursor operation restarted
wiredTiger.cursor.cursor prev calls
wiredTiger.cursor.cursor prev calls that skip due to a globally visible history store tombstone
wiredTiger.cursor.cursor prev calls that skip greater than or equal to 100 entries
wiredTiger.cursor.cursor prev calls that skip less than 100 entries
wiredTiger.cursor.cursor remove calls
wiredTiger.cursor.cursor remove key bytes removed
wiredTiger.cursor.cursor reserve calls
wiredTiger.cursor.cursor reset calls
wiredTiger.cursor.cursor search calls
wiredTiger.cursor.cursor search history store calls
wiredTiger.cursor.cursor search near calls
wiredTiger.cursor.cursor sweep buckets
wiredTiger.cursor.cursor sweep cursors closed
wiredTiger.cursor.cursor sweep cursors examined
wiredTiger.cursor.cursor sweeps
wiredTiger.cursor.cursor truncate calls
wiredTiger.cursor.cursor update calls
wiredTiger.cursor.cursor update key and value bytes
wiredTiger.cursor.cursor update value size change
wiredTiger.cursor.cursors reused from cache
wiredTiger.cursor.open cursor count
:param mongodb_status_dict:
:return:
"""
mongodb_wiredTiger_cursor = mongodb_status_dict['wiredTiger']['cursor']
return json.dumps(mongodb_wiredTiger_cursor)
@staticmethod
def mongodb_status_wiredTiger_data_handle(mongodb_status_dict):
"""
wiredTiger.data-handle: returns statistics on data handles and sweeps.
wiredTiger.data-handle.connection data handle size
wiredTiger.data-handle.connection data handles currently active
wiredTiger.data-handle.connection sweep candidate became referenced
wiredTiger.data-handle.connection sweep dhandles closed
wiredTiger.data-handle.connection sweep dhandles removed from hash list
wiredTiger.data-handle.connection sweep time-of-death sets
wiredTiger.data-handle.connection sweeps
wiredTiger.data-handle.connection sweeps skipped due to checkpoint gathering handles
wiredTiger.data-handle.session dhandles swept
wiredTiger.data-handle.session sweep attempts
:param mongodb_status_dict:
:return:
"""
mongodb_wiredTiger_data_handle = mongodb_status_dict['wiredTiger']['data-handle']
return json.dumps(mongodb_wiredTiger_data_handle)
@staticmethod
def mongodb_status_wiredTiger_lock(mongodb_status_dict):
"""
wiredTiger.lock
wiredTiger.lock.checkpoint lock acquisitions
wiredTiger.lock.checkpoint lock application thread wait time (usecs)
wiredTiger.lock.checkpoint lock internal thread wait time (usecs)
wiredTiger.lock.dhandle lock application thread time waiting (usecs)
wiredTiger.lock.dhandle lock internal thread time waiting (usecs)
wiredTiger.lock.dhandle read lock acquisitions
wiredTiger.lock.dhandle write lock acquisitions
wiredTiger.lock.durable timestamp queue lock application thread time waiting (usecs)
wiredTiger.lock.durable timestamp queue lock internal thread time waiting (usecs)
wiredTiger.lock.durable timestamp queue read lock acquisitions
wiredTiger.lock.durable timestamp queue write lock acquisitions
wiredTiger.lock.metadata lock acquisitions
wiredTiger.lock.metadata lock application thread wait time (usecs)
wiredTiger.lock.metadata lock internal thread wait time (usecs)
wiredTiger.lock.read timestamp queue lock application thread time waiting (usecs)
wiredTiger.lock.read timestamp queue lock internal thread time waiting (usecs)
wiredTiger.lock.read timestamp queue read lock acquisitions
wiredTiger.lock.read timestamp queue write lock acquisitions
wiredTiger.lock.schema lock acquisitions
wiredTiger.lock.schema lock application thread wait time (usecs)
wiredTiger.lock.schema lock internal thread wait time (usecs)
wiredTiger.lock.table lock application thread time waiting for the table lock (usecs)
wiredTiger.lock.table lock internal thread time waiting for the table lock (usecs)
wiredTiger.lock.table read lock acquisitions
wiredTiger.lock.table write lock acquisitions
wiredTiger.lock.txn global lock application thread time waiting (usecs)
wiredTiger.lock.txn global lock internal thread time waiting (usecs)
wiredTiger.lock.txn global read lock acquisitions
wiredTiger.lock.txn global write lock acquisitions
:param mongodb_status_dict:
:return:
"""
mongodb_wiredTiger_lock = mongodb_status_dict['wiredTiger']['lock']
return json.dumps(mongodb_wiredTiger_lock)
@staticmethod
def mongodb_status_wiredTiger_log(mongodb_status_dict):
"""
wiredTiger.log: returns statistics on WiredTiger's write-ahead log (the journal).
wiredTiger.log.busy returns attempting to switch slots
wiredTiger.log.force archive time sleeping (usecs)
wiredTiger.log.log bytes of payload data
wiredTiger.log.log bytes written
wiredTiger.log.log files manually zero-filled
wiredTiger.log.log flush operations
wiredTiger.log.log force write operations
wiredTiger.log.log force write operations skipped
wiredTiger.log.log records compressed
wiredTiger.log.log records not compressed
wiredTiger.log.log records too small to compress
wiredTiger.log.log release advances write LSN
wiredTiger.log.log scan operations
wiredTiger.log.log scan records requiring two reads
wiredTiger.log.log server thread advances write LSN
wiredTiger.log.log server thread write LSN walk skipped
wiredTiger.log.log sync operations
wiredTiger.log.log sync time duration (usecs)
wiredTiger.log.log sync_dir operations
wiredTiger.log.log sync_dir time duration (usecs)
wiredTiger.log.log write operations
wiredTiger.log.logging bytes consolidate
wiredTiger.log.maximum log file size
wiredTiger.log.number of pre-allocated log files to create
wiredTiger.log.pre-allocated log files not ready and missed
wiredTiger.log.pre-allocated log files prepared
wiredTiger.log.pre-allocated log files used
wiredTiger.log.records processed by log scan
wiredTiger.log.slot close lost race
wiredTiger.log.slot close unbuffered waits
wiredTiger.log.slot closures
wiredTiger.log.slot join atomic update races
wiredTiger.log.slot join calls atomic updates raced
wiredTiger.log.slot join calls did not yield
wiredTiger.log.slot join calls found active slot closed
wiredTiger.log.slot join calls slept
wiredTiger.log.slot join calls yielded
wiredTiger.log.slot join found active slot closed
wiredTiger.log.slot joins yield time (usecs)
wiredTiger.log.slot transitions unable to find free slot
wiredTiger.log.slot unbuffered writes
wiredTiger.log.total in-memory size of compressed records
wiredTiger.log.total log buffer size
wiredTiger.log.total size of compressed records
wiredTiger.log.written slots coalesced
wiredTiger.log.yields waiting for previous log file close
:param mongodb_status_dict:
:return:
"""
mongodb_wiredTiger_log = mongodb_status_dict['wiredTiger']['log']
return json.dumps(mongodb_wiredTiger_log)
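A quick use of the journal counters is estimating how much compression saves. This sketch assumes 'total in-memory size of compressed records' is the pre-compression size and 'total size of compressed records' is the post-compression size; treat that mapping as an assumption, not a documented guarantee.

```python
def log_compression_ratio(log_stats):
    """Estimate journal compression savings from wiredTiger.log counters.

    Returns the fraction of bytes saved, or None if nothing has been
    compressed yet (avoids division by zero on a fresh instance).
    """
    raw = log_stats['total in-memory size of compressed records']
    packed = log_stats['total size of compressed records']
    if raw == 0:
        return None
    return 1 - packed / raw

# Illustrative numbers only:
sample_log = {'total in-memory size of compressed records': 4000,
              'total size of compressed records': 1000}
print(log_compression_ratio(sample_log))  # 0.75
```
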
@staticmethod
def mongodb_status_wiredTiger_perf(mongodb_status_dict):
"""
wiredTiger.perf
wiredTiger.perf.file system read latency histogram (bucket 1) - 10-49ms
wiredTiger.perf.file system read latency histogram (bucket 2) - 50-99ms
wiredTiger.perf.file system read latency histogram (bucket 3) - 100-249ms
wiredTiger.perf.file system read latency histogram (bucket 4) - 250-499ms
wiredTiger.perf.file system read latency histogram (bucket 5) - 500-999ms
wiredTiger.perf.file system read latency histogram (bucket 6) - 1000ms+
wiredTiger.perf.file system write latency histogram (bucket 1) - 10-49ms
wiredTiger.perf.file system write latency histogram (bucket 2) - 50-99ms
wiredTiger.perf.file system write latency histogram (bucket 3) - 100-249ms
wiredTiger.perf.file system write latency histogram (bucket 4) - 250-499ms
wiredTiger.perf.file system write latency histogram (bucket 5) - 500-999ms
wiredTiger.perf.file system write latency histogram (bucket 6) - 1000ms+
wiredTiger.perf.operation read latency histogram (bucket 1) - 100-249us
wiredTiger.perf.operation read latency histogram (bucket 2) - 250-499us
wiredTiger.perf.operation read latency histogram (bucket 3) - 500-999us
wiredTiger.perf.operation read latency histogram (bucket 4) - 1000-9999us
wiredTiger.perf.operation read latency histogram (bucket 5) - 10000us+
wiredTiger.perf.operation write latency histogram (bucket 1) - 100-249us
wiredTiger.perf.operation write latency histogram (bucket 2) - 250-499us
wiredTiger.perf.operation write latency histogram (bucket 3) - 500-999us
wiredTiger.perf.operation write latency histogram (bucket 4) - 1000-9999us
wiredTiger.perf.operation write latency histogram (bucket 5) - 10000us+
:param mongodb_status_dict:
:return:
"""
mongodb_wiredTiger_perf = mongodb_status_dict['wiredTiger']['perf']
return json.dumps(mongodb_wiredTiger_perf)
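The latency histograms above can be summarized into a single "fraction of slow operations" figure. This is a sketch built on the bucket key names listed above; which buckets count as "slow" is a judgment call, parameterized here rather than fixed.

```python
def slow_read_fraction(perf_stats, slow_buckets=('bucket 4', 'bucket 5')):
    """Fraction of sampled operation reads in the slowest histogram buckets.

    Filters wiredTiger.perf keys by the 'operation read latency histogram'
    prefix, then counts how many samples fall in the named slow buckets.
    """
    prefix = 'operation read latency histogram'
    buckets = {k: v for k, v in perf_stats.items() if k.startswith(prefix)}
    total = sum(buckets.values())
    if total == 0:
        return 0.0
    slow = sum(v for k, v in buckets.items()
               if any(b in k for b in slow_buckets))
    return slow / total

# Illustrative counts only:
sample_perf = {
    'operation read latency histogram (bucket 1) - 100-249us': 50,
    'operation read latency histogram (bucket 2) - 250-499us': 30,
    'operation read latency histogram (bucket 3) - 500-999us': 10,
    'operation read latency histogram (bucket 4) - 1000-9999us': 8,
    'operation read latency histogram (bucket 5) - 10000us+': 2,
}
print(slow_read_fraction(sample_perf))
```
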
@staticmethod
def mongodb_status_wiredTiger_reconciliation(mongodb_status_dict):
"""
wiredTiger.reconciliation: returns statistics on the reconciliation process.
wiredTiger.reconciliation.approximate byte size of timestamps in pages written
wiredTiger.reconciliation.approximate byte size of transaction IDs in pages written
wiredTiger.reconciliation.fast-path pages deleted
wiredTiger.reconciliation.internal-page overflow keys
wiredTiger.reconciliation.leaf-page overflow keys
wiredTiger.reconciliation.maximum seconds spent in a reconciliation call
wiredTiger.reconciliation.page reconciliation calls
wiredTiger.reconciliation.page reconciliation calls for eviction
wiredTiger.reconciliation.page reconciliation calls that resulted in values with prepared transaction metadata
wiredTiger.reconciliation.page reconciliation calls that resulted in values with timestamps
wiredTiger.reconciliation.page reconciliation calls that resulted in values with transaction ids
wiredTiger.reconciliation.pages deleted
wiredTiger.reconciliation.pages written including an aggregated newest start durable timestamp
wiredTiger.reconciliation.pages written including an aggregated newest stop durable timestamp
wiredTiger.reconciliation.pages written including an aggregated newest stop timestamp
wiredTiger.reconciliation.pages written including an aggregated newest stop transaction ID
wiredTiger.reconciliation.pages written including an aggregated newest transaction ID
wiredTiger.reconciliation.pages written including an aggregated oldest start timestamp
wiredTiger.reconciliation.pages written including an aggregated prepare
wiredTiger.reconciliation.pages written including at least one prepare state
wiredTiger.reconciliation.pages written including at least one start durable timestamp
wiredTiger.reconciliation.pages written including at least one start timestamp
wiredTiger.reconciliation.pages written including at least one start transaction ID
wiredTiger.reconciliation.pages written including at least one stop durable timestamp
wiredTiger.reconciliation.pages written including at least one stop timestamp
wiredTiger.reconciliation.pages written including at least one stop transaction ID
wiredTiger.reconciliation.records written including a prepare state
wiredTiger.reconciliation.records written including a start durable timestamp
wiredTiger.reconciliation.records written including a start timestamp
wiredTiger.reconciliation.records written including a start transaction ID
wiredTiger.reconciliation.records written including a stop durable timestamp
wiredTiger.reconciliation.records written including a stop timestamp
wiredTiger.reconciliation.records written including a stop transaction ID
wiredTiger.reconciliation.split bytes currently awaiting free
wiredTiger.reconciliation.split objects currently awaiting free
:param mongodb_status_dict:
:return:
"""
mongodb_wiredTiger_reconciliation = mongodb_status_dict['wiredTiger']['reconciliation']
return json.dumps(mongodb_wiredTiger_reconciliation)
@staticmethod
def mongodb_status_wiredTiger_session(mongodb_status_dict):
"""
wiredTiger.session: returns the open cursor count and open session count for the session.
wiredTiger.session.attempts to remove a local object and the object is in use
wiredTiger.session.flush_tier operation calls
wiredTiger.session.local objects removed
wiredTiger.session.open session count
wiredTiger.session.session query timestamp calls
wiredTiger.session.table alter failed calls
wiredTiger.session.table alter successful calls
wiredTiger.session.table alter triggering checkpoint calls
wiredTiger.session.table alter unchanged and skipped
wiredTiger.session.table compact failed calls
wiredTiger.session.table compact failed calls due to cache pressure
wiredTiger.session.table compact running
wiredTiger.session.table compact skipped as process would not reduce file size
wiredTiger.session.table compact successful calls
wiredTiger.session.table compact timeout
wiredTiger.session.table create failed calls
wiredTiger.session.table create successful calls
wiredTiger.session.table drop failed calls
wiredTiger.session.table drop successful calls
wiredTiger.session.table rename failed calls
wiredTiger.session.table rename successful calls
wiredTiger.session.table salvage failed calls
wiredTiger.session.table salvage successful calls
wiredTiger.session.table truncate failed calls
wiredTiger.session.table truncate successful calls
wiredTiger.session.table verify failed calls
wiredTiger.session.table verify successful calls
wiredTiger.session.tiered operations dequeued and processed
wiredTiger.session.tiered operations scheduled
wiredTiger.session.tiered storage local retention time (secs)
wiredTiger.session.tiered storage object size
:param mongodb_status_dict:
:return:
"""
mongodb_wiredTiger_session = mongodb_status_dict['wiredTiger']['session']
return json.dumps(mongodb_wiredTiger_session)
@staticmethod
def mongodb_status_wiredTiger_thread_state(mongodb_status_dict):
"""
wiredTiger.thread-state
wiredTiger.thread-state.active filesystem fsync calls
wiredTiger.thread-state.active filesystem read calls
wiredTiger.thread-state.active filesystem write calls
:param mongodb_status_dict:
:return:
"""
mongodb_wiredTiger_thread_state = mongodb_status_dict['wiredTiger']['thread-state']
return json.dumps(mongodb_wiredTiger_thread_state)
@staticmethod
def mongodb_status_wiredTiger_thread_yield(mongodb_status_dict):
"""
wiredTiger.thread-yield: returns statistics on yields during page acquisition.
wiredTiger.thread-yield.application thread time evicting (usecs)
wiredTiger.thread-yield.application thread time waiting for cache (usecs)
wiredTiger.thread-yield.connection close blocked waiting for transaction state stabilization
wiredTiger.thread-yield.connection close yielded for lsm manager shutdown
wiredTiger.thread-yield.data handle lock yielded
wiredTiger.thread-yield.get reference for page index and slot time sleeping (usecs)
wiredTiger.thread-yield.page access yielded due to prepare state change
wiredTiger.thread-yield.page acquire busy blocked
wiredTiger.thread-yield.page acquire eviction blocked
wiredTiger.thread-yield.page acquire locked blocked
wiredTiger.thread-yield.page acquire read blocked
wiredTiger.thread-yield.page acquire time sleeping (usecs)
wiredTiger.thread-yield.page delete rollback time sleeping for state change (usecs)
wiredTiger.thread-yield.page reconciliation yielded due to child modification
:param mongodb_status_dict:
:return:
"""
mongodb_wiredTiger_thread_yield = mongodb_status_dict['wiredTiger']['thread-yield']
return json.dumps(mongodb_wiredTiger_thread_yield)
@staticmethod
def mongodb_status_wiredTiger_transaction(mongodb_status_dict):
"""
wiredTiger.transaction: returns statistics on transaction checkpoints and operations.
wiredTiger.transaction.Number of prepared updates
wiredTiger.transaction.Number of prepared updates committed
wiredTiger.transaction.Number of prepared updates repeated on the same key
wiredTiger.transaction.Number of prepared updates rolled back
wiredTiger.transaction.prepared transactions
wiredTiger.transaction.prepared transactions committed
wiredTiger.transaction.prepared transactions currently active
wiredTiger.transaction.prepared transactions rolled back
wiredTiger.transaction.prepared transactions rolled back and do not remove the history store entry
wiredTiger.transaction.query timestamp calls
wiredTiger.transaction.race to read prepared update retry
wiredTiger.transaction.rollback to stable calls
wiredTiger.transaction.rollback to stable history store records with stop timestamps older than newer records
wiredTiger.transaction.rollback to stable inconsistent checkpoint
wiredTiger.transaction.rollback to stable keys removed
wiredTiger.transaction.rollback to stable keys restored
wiredTiger.transaction.rollback to stable pages visited
wiredTiger.transaction.rollback to stable restored tombstones from history store
wiredTiger.transaction.rollback to stable restored updates from history store
wiredTiger.transaction.rollback to stable skipping delete rle
wiredTiger.transaction.rollback to stable skipping stable rle
wiredTiger.transaction.rollback to stable sweeping history store keys
wiredTiger.transaction.rollback to stable tree walk skipping pages
wiredTiger.transaction.rollback to stable updates aborted
wiredTiger.transaction.rollback to stable updates removed from history store
wiredTiger.transaction.sessions scanned in each walk of concurrent sessions
wiredTiger.transaction.set timestamp calls
wiredTiger.transaction.set timestamp durable calls
wiredTiger.transaction.set timestamp durable updates
wiredTiger.transaction.set timestamp oldest calls
wiredTiger.transaction.set timestamp oldest updates
wiredTiger.transaction.set timestamp stable calls
wiredTiger.transaction.set timestamp stable updates
wiredTiger.transaction.transaction begins
wiredTiger.transaction.transaction checkpoint currently running
wiredTiger.transaction.transaction checkpoint currently running for history store file
wiredTiger.transaction.transaction checkpoint generation
wiredTiger.transaction.transaction checkpoint history store file duration (usecs)
wiredTiger.transaction.transaction checkpoint max time (msecs)
wiredTiger.transaction.transaction checkpoint min time (msecs)
wiredTiger.transaction.transaction checkpoint most recent duration for gathering all handles (usecs)
wiredTiger.transaction.transaction checkpoint most recent duration for gathering applied handles (usecs)
wiredTiger.transaction.transaction checkpoint most recent duration for gathering skipped handles (usecs)
wiredTiger.transaction.transaction checkpoint most recent handles applied
wiredTiger.transaction.transaction checkpoint most recent handles skipped
wiredTiger.transaction.transaction checkpoint most recent handles walked
wiredTiger.transaction.transaction checkpoint most recent time (msecs): the amount of time, in milliseconds, taken to create the most recent checkpoint. An increase in this value under a steady write load may indicate saturation of the I/O subsystem.
wiredTiger.transaction.transaction checkpoint prepare currently running
wiredTiger.transaction.transaction checkpoint prepare max time (msecs)
wiredTiger.transaction.transaction checkpoint prepare min time (msecs)
wiredTiger.transaction.transaction checkpoint prepare most recent time (msecs)
wiredTiger.transaction.transaction checkpoint prepare total time (msecs)
wiredTiger.transaction.transaction checkpoint scrub dirty target
wiredTiger.transaction.transaction checkpoint scrub time (msecs)
wiredTiger.transaction.transaction checkpoint total time (msecs)
wiredTiger.transaction.transaction checkpoints
wiredTiger.transaction.transaction checkpoints due to obsolete pages
wiredTiger.transaction.transaction checkpoints skipped because database was clean
wiredTiger.transaction.transaction failures due to history store
wiredTiger.transaction.transaction fsync calls for checkpoint after allocating the transaction ID
wiredTiger.transaction.transaction fsync duration for checkpoint after allocating the transaction ID (usecs)
wiredTiger.transaction.transaction range of IDs currently pinned
wiredTiger.transaction.transaction range of IDs currently pinned by a checkpoint
wiredTiger.transaction.transaction range of timestamps currently pinned
wiredTiger.transaction.transaction range of timestamps pinned by a checkpoint
wiredTiger.transaction.transaction range of timestamps pinned by the oldest active read timestamp
wiredTiger.transaction.transaction range of timestamps pinned by the oldest timestamp
wiredTiger.transaction.transaction read timestamp of the oldest active reader
wiredTiger.transaction.transaction rollback to stable currently running
wiredTiger.transaction.transaction walk of concurrent sessions
wiredTiger.transaction.transactions committed
wiredTiger.transaction.transactions rolled back
wiredTiger.transaction.update conflicts
:param mongodb_status_dict:
:return:
"""
mongodb_wiredTiger_transaction = mongodb_status_dict['wiredTiger']['transaction']
return json.dumps(mongodb_wiredTiger_transaction)
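The checkpoint timing counters above are the ones worth watching for I/O saturation. A minimal health check could look like the sketch below; the 60-second warning threshold is an illustrative assumption, since a checkpoint is normally triggered every 60 seconds.

```python
def checkpoint_health(txn_stats, warn_ms=60000):
    """Flag slow checkpoints from a wiredTiger.transaction dict.

    A growing 'most recent time' under a steady write load can indicate a
    saturated I/O subsystem; warn_ms is a threshold chosen for this sketch.
    """
    recent = txn_stats['transaction checkpoint most recent time (msecs)']
    maximum = txn_stats['transaction checkpoint max time (msecs)']
    running = txn_stats['transaction checkpoint currently running']
    return {
        'recent_ms': recent,
        'max_ms': maximum,
        'running': bool(running),
        'slow': recent > warn_ms,
    }

# Illustrative numbers only:
sample_txn = {
    'transaction checkpoint most recent time (msecs)': 85000,
    'transaction checkpoint max time (msecs)': 120000,
    'transaction checkpoint currently running': 0,
}
print(checkpoint_health(sample_txn))
```
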
@staticmethod
def mongodb_status_wiredTiger_concurrentTransactions(mongodb_status_dict):
"""
wiredTiger.concurrentTransactions: returns information on:
the number of concurrent read transactions (read tickets) allowed into the WiredTiger storage engine;
the number of concurrent write transactions (write tickets) allowed into the WiredTiger storage engine;
any system adjustments to the number of allowed concurrent transactions (tickets).
These settings are MongoDB-specific. To change the settings for concurrent read and write transactions (read and write tickets), see storageEngineConcurrentReadTransactions and
storageEngineConcurrentWriteTransactions
wiredTiger.concurrentTransactions.write
wiredTiger.concurrentTransactions.write.out
wiredTiger.concurrentTransactions.write.available
wiredTiger.concurrentTransactions.write.totalTickets
wiredTiger.concurrentTransactions.read
wiredTiger.concurrentTransactions.read.out
wiredTiger.concurrentTransactions.read.available
wiredTiger.concurrentTransactions.read.totalTickets
:param mongodb_status_dict:
:return:
"""
mongodb_wiredTiger_concurrentTransactions = mongodb_status_dict['wiredTiger']['concurrentTransactions']
return json.dumps(mongodb_wiredTiger_concurrentTransactions)
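Ticket exhaustion is one of the most actionable signals in this section: when `available` reaches zero, new operations queue waiting for a ticket. A sketch of a utilization report over the fields listed above:

```python
def ticket_usage(ct_stats):
    """Report read/write ticket utilization from wiredTiger.concurrentTransactions.

    'out' is the number of tickets in use; 'available' near zero means
    operations are queueing for tickets.
    """
    usage = {}
    for mode in ('read', 'write'):
        m = ct_stats[mode]
        usage[mode] = {
            'in_use': m['out'],
            'available': m['available'],
            'utilization': m['out'] / m['totalTickets'],
        }
    return usage

# Illustrative numbers: write tickets fully exhausted.
sample_tickets = {
    'read': {'out': 2, 'available': 126, 'totalTickets': 128},
    'write': {'out': 128, 'available': 0, 'totalTickets': 128},
}
print(ticket_usage(sample_tickets))
```
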
@staticmethod
def mongodb_status_wiredTiger_snapshot_window_settings(mongodb_status_dict):
"""
wiredTiger.snapshot-window-settings
wiredTiger.snapshot-window-settings.cache pressure percentage threshold
wiredTiger.snapshot-window-settings.current cache pressure percentage
wiredTiger.snapshot-window-settings.total number of SnapshotTooOld errors
wiredTiger.snapshot-window-settings.max target available snapshots window size in seconds
wiredTiger.snapshot-window-settings.target available snapshots window size in seconds
wiredTiger.snapshot-window-settings.current available snapshots window size in seconds
wiredTiger.snapshot-window-settings.latest majority snapshot timestamp available
wiredTiger.snapshot-window-settings.oldest majority snapshot timestamp available
:param mongodb_status_dict:
:return:
"""
mongodb_wiredTiger_snapshot_window_settings = mongodb_status_dict['wiredTiger']['snapshot-window-settings']
return json.dumps(mongodb_wiredTiger_snapshot_window_settings)
@staticmethod
def mongodb_status_wiredTiger_oplog(mongodb_status_dict):
"""
wiredTiger.oplog
wiredTiger.oplog.visibility timestamp
:param mongodb_status_dict:
:return:
"""
mongodb_wiredTiger_oplog = mongodb_status_dict['wiredTiger']['oplog']
return json.dumps(mongodb_wiredTiger_oplog)
@staticmethod
def mongodb_status_mem(mongodb_status_dict):
"""
mem: reports on the system architecture and current memory use of the mongod.
mem.bits: a number (64 or 32) that indicates whether the MongoDB instance was compiled for a 64-bit or 32-bit architecture.
mem.resident: roughly equivalent to the amount of RAM, in mebibytes (MiB), currently used by the database process. During normal use this value tends to grow. In dedicated database servers, this number tends to approach the total amount of system memory.
mem.virtual: displays the quantity of virtual memory, in mebibytes (MiB), used by the mongod process.
mem.supported: a boolean that indicates whether the underlying system supports extended memory information. If this value is false and the system does not support extended memory information, then other mem values may not be accessible to the database server.
:param mongodb_status_dict:
:return:
"""
mongodb_mem = mongodb_status_dict['mem']
return json.dumps(mongodb_mem)
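As noted above, `mem.resident` approaching total system RAM on a dedicated server is expected but worth watching. A minimal sketch; `total_ram_mib` and the sample values are hypothetical inputs, not live output:

```python
# Hypothetical sample mirroring serverStatus['mem'] (values in MiB).
sample_mem = {'bits': 64, 'resident': 7200, 'virtual': 9800, 'supported': True}

def resident_ratio(mem, total_ram_mib):
    """Fraction of system RAM held resident by mongod (both in MiB)."""
    return mem['resident'] / total_ram_mib

ratio = resident_ratio(sample_mem, total_ram_mib=8192)
near_limit = ratio > 0.8  # simple alert threshold, chosen arbitrarily here
```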
@staticmethod
def mongodb_status_metrics_aggStageCounters(mongodb_status_dict):
"""
metrics: Returns various statistics that reflect the current usage and state of the running mongod instance.
metrics.aggStageCounters: Usage of aggregation pipeline stages. The fields in metrics.aggStageCounters are the names of aggregation
pipeline stages; for each stage, serverStatus reports the number of times that stage has been executed.
metrics.aggStageCounters.$_internalInhibitOptimization
metrics.aggStageCounters.$_internalSplitPipeline
metrics.aggStageCounters.$addFields
metrics.aggStageCounters.$bucket
metrics.aggStageCounters.$bucketAuto
metrics.aggStageCounters.$changeStream
metrics.aggStageCounters.$collStats
metrics.aggStageCounters.$count
metrics.aggStageCounters.$currentOp
metrics.aggStageCounters.$documents
metrics.aggStageCounters.$facet
metrics.aggStageCounters.$geoNear
metrics.aggStageCounters.$graphLookup
metrics.aggStageCounters.$group
metrics.aggStageCounters.$indexStats
metrics.aggStageCounters.$limit
metrics.aggStageCounters.$listLocalSessions
metrics.aggStageCounters.$listSessions
metrics.aggStageCounters.$lookup
metrics.aggStageCounters.$match
metrics.aggStageCounters.$merge
metrics.aggStageCounters.$mergeCursors
metrics.aggStageCounters.$out
metrics.aggStageCounters.$planCacheStats
metrics.aggStageCounters.$project
metrics.aggStageCounters.$queue
metrics.aggStageCounters.$redact
metrics.aggStageCounters.$replaceRoot
metrics.aggStageCounters.$replaceWith
metrics.aggStageCounters.$sample
metrics.aggStageCounters.$set
metrics.aggStageCounters.$skip
metrics.aggStageCounters.$sort
metrics.aggStageCounters.$sortByCount
metrics.aggStageCounters.$unionWith
metrics.aggStageCounters.$unset
metrics.aggStageCounters.$unwind
:param mongodb_status_dict:
:return:
"""
mongodb_metrics_aggStageCounters = mongodb_status_dict['metrics']['aggStageCounters']
return json.dumps(mongodb_metrics_aggStageCounters)
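Since these counters are cumulative since process start, the useful signal is usually the delta between two snapshots. A sketch with hypothetical sample data:

```python
# Two hypothetical aggStageCounters snapshots, taken some time apart.
before = {'$match': 10, '$group': 4, '$lookup': 0}
after = {'$match': 15, '$group': 4, '$lookup': 2}

def stage_delta(old, new):
    """Per-stage execution count between two snapshots (only stages that ran)."""
    return {stage: new[stage] - old.get(stage, 0)
            for stage in new
            if new[stage] > old.get(stage, 0)}

delta = stage_delta(before, after)
```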
@staticmethod
def mongodb_status_metrics_commands(mongodb_status_dict):
"""
metrics.commands
metrics.commands.<UNKNOWN>
metrics.commands._addShard
metrics.commands._addShard.failed
metrics.commands._addShard.total
metrics.commands._cloneCollectionOptionsFromPrimaryShard
metrics.commands._cloneCollectionOptionsFromPrimaryShard.failed
metrics.commands._cloneCollectionOptionsFromPrimaryShard.total
metrics.commands._configsvrAddShard
metrics.commands._configsvrAddShard.failed
metrics.commands._configsvrAddShard.total
metrics.commands._configsvrAddShardToZone
metrics.commands._configsvrAddShardToZone.failed
metrics.commands._configsvrAddShardToZone.total
metrics.commands._configsvrBalancerCollectionStatus
metrics.commands._configsvrBalancerCollectionStatus.failed
metrics.commands._configsvrBalancerCollectionStatus.total
metrics.commands._configsvrBalancerStart
metrics.commands._configsvrBalancerStart.failed
metrics.commands._configsvrBalancerStart.total
metrics.commands._configsvrBalancerStatus
metrics.commands._configsvrBalancerStatus.failed
metrics.commands._configsvrBalancerStatus.total
metrics.commands._configsvrBalancerStop
metrics.commands._configsvrBalancerStop.failed
metrics.commands._configsvrBalancerStop.total
metrics.commands._configsvrClearJumboFlag
metrics.commands._configsvrClearJumboFlag.failed
metrics.commands._configsvrClearJumboFlag.total
metrics.commands._configsvrCommitChunkMerge
metrics.commands._configsvrCommitChunkMerge.failed
metrics.commands._configsvrCommitChunkMerge.total
metrics.commands._configsvrCommitChunkMigration
metrics.commands._configsvrCommitChunkMigration.failed
metrics.commands._configsvrCommitChunkMigration.total
metrics.commands._configsvrCommitChunkSplit
metrics.commands._configsvrCommitChunkSplit.failed
metrics.commands._configsvrCommitChunkSplit.total
metrics.commands._configsvrCommitChunksMerge
metrics.commands._configsvrCommitChunksMerge.failed
metrics.commands._configsvrCommitChunksMerge.total
metrics.commands._configsvrCommitMovePrimary
metrics.commands._configsvrCommitMovePrimary.failed
metrics.commands._configsvrCommitMovePrimary.total
metrics.commands._configsvrCreateCollection
metrics.commands._configsvrCreateCollection.failed
metrics.commands._configsvrCreateCollection.total
metrics.commands._configsvrCreateDatabase
metrics.commands._configsvrCreateDatabase.failed
metrics.commands._configsvrCreateDatabase.total
metrics.commands._configsvrDropCollection
metrics.commands._configsvrDropCollection.failed
metrics.commands._configsvrDropCollection.total
metrics.commands._configsvrDropDatabase
metrics.commands._configsvrDropDatabase.failed
metrics.commands._configsvrDropDatabase.total
metrics.commands._configsvrEnableSharding
metrics.commands._configsvrEnableSharding.failed
metrics.commands._configsvrEnableSharding.total
metrics.commands._configsvrEnsureChunkVersionIsGreaterThan
metrics.commands._configsvrEnsureChunkVersionIsGreaterThan.failed
metrics.commands._configsvrEnsureChunkVersionIsGreaterThan.total
metrics.commands._configsvrMoveChunk
metrics.commands._configsvrMoveChunk.failed
metrics.commands._configsvrMoveChunk.total
metrics.commands._configsvrMovePrimary
metrics.commands._configsvrMovePrimary.failed
metrics.commands._configsvrMovePrimary.total
metrics.commands._configsvrRefineCollectionShardKey
metrics.commands._configsvrRefineCollectionShardKey.failed
metrics.commands._configsvrRefineCollectionShardKey.total
metrics.commands._configsvrRemoveShard
metrics.commands._configsvrRemoveShard.failed
metrics.commands._configsvrRemoveShard.total
metrics.commands._configsvrRemoveShardFromZone
metrics.commands._configsvrRemoveShardFromZone.failed
metrics.commands._configsvrRemoveShardFromZone.total
metrics.commands._configsvrShardCollection
metrics.commands._configsvrShardCollection.failed
metrics.commands._configsvrShardCollection.total
metrics.commands._configsvrUpdateZoneKeyRange
metrics.commands._configsvrUpdateZoneKeyRange.failed
metrics.commands._configsvrUpdateZoneKeyRange.total
metrics.commands._flushRoutingTableCacheUpdates
metrics.commands._flushRoutingTableCacheUpdates.failed
metrics.commands._flushRoutingTableCacheUpdates.total
metrics.commands._getNextSessionMods
metrics.commands._getNextSessionMods.failed
metrics.commands._getNextSessionMods.total
metrics.commands._getUserCacheGeneration
metrics.commands._getUserCacheGeneration.failed
metrics.commands._getUserCacheGeneration.total
metrics.commands._isSelf
metrics.commands._isSelf.failed
metrics.commands._isSelf.total
metrics.commands._killOperations
metrics.commands._killOperations.failed
metrics.commands._killOperations.total
metrics.commands._mergeAuthzCollections
metrics.commands._mergeAuthzCollections.failed
metrics.commands._mergeAuthzCollections.total
metrics.commands._migrateClone
metrics.commands._migrateClone.failed
metrics.commands._migrateClone.total
metrics.commands._recvChunkAbort
metrics.commands._recvChunkAbort.failed
metrics.commands._recvChunkAbort.total
metrics.commands._recvChunkStart
metrics.commands._recvChunkStart.failed
metrics.commands._recvChunkStart.total
metrics.commands._recvChunkStatus
metrics.commands._recvChunkStatus.failed
metrics.commands._recvChunkStatus.total
metrics.commands._shardsvrCloneCatalogData
metrics.commands._shardsvrCloneCatalogData.failed
metrics.commands._shardsvrCloneCatalogData.total
metrics.commands._shardsvrMovePrimary
metrics.commands._shardsvrMovePrimary.failed
metrics.commands._shardsvrMovePrimary.total
metrics.commands._shardsvrSetAllowMigrations
metrics.commands._shardsvrSetAllowMigrations.failed
metrics.commands._shardsvrSetAllowMigrations.total
metrics.commands._shardsvrShardCollection
metrics.commands._shardsvrShardCollection.failed
metrics.commands._shardsvrShardCollection.total
metrics.commands._transferMods
metrics.commands._transferMods.failed
metrics.commands._transferMods.total
metrics.commands.abortTransaction
metrics.commands.abortTransaction.failed
metrics.commands.abortTransaction.total
metrics.commands.aggregate
metrics.commands.aggregate.failed
metrics.commands.aggregate.total
metrics.commands.appendOplogNote
metrics.commands.appendOplogNote.failed
metrics.commands.appendOplogNote.total
metrics.commands.applyOps
metrics.commands.applyOps.failed
metrics.commands.applyOps.total
metrics.commands.authenticate
metrics.commands.authenticate.failed
metrics.commands.authenticate.total
metrics.commands.autoSplitVector
metrics.commands.autoSplitVector.failed
metrics.commands.autoSplitVector.total
metrics.commands.availableQueryOptions
metrics.commands.availableQueryOptions.failed
metrics.commands.availableQueryOptions.total
metrics.commands.buildInfo
metrics.commands.buildInfo.failed
metrics.commands.buildInfo.total
metrics.commands.checkShardingIndex
metrics.commands.checkShardingIndex.failed
metrics.commands.checkShardingIndex.total
metrics.commands.cleanupOrphaned
metrics.commands.cleanupOrphaned.failed
metrics.commands.cleanupOrphaned.total
metrics.commands.cloneCollectionAsCapped
metrics.commands.cloneCollectionAsCapped.failed
metrics.commands.cloneCollectionAsCapped.total
metrics.commands.collMod
metrics.commands.collMod.failed
metrics.commands.collMod.total
metrics.commands.collStats
metrics.commands.collStats.failed
metrics.commands.collStats.total
metrics.commands.commitTransaction
metrics.commands.commitTransaction.failed
metrics.commands.commitTransaction.total
metrics.commands.connPoolStats
metrics.commands.connPoolStats.failed
metrics.commands.connPoolStats.total
metrics.commands.connPoolSync
metrics.commands.connPoolSync.failed
metrics.commands.connPoolSync.total
metrics.commands.connectionStatus
metrics.commands.connectionStatus.failed
metrics.commands.connectionStatus.total
metrics.commands.convertToCapped
metrics.commands.convertToCapped.failed
metrics.commands.convertToCapped.total
metrics.commands.coordinateCommitTransaction
metrics.commands.coordinateCommitTransaction.failed
metrics.commands.coordinateCommitTransaction.total
metrics.commands.count
metrics.commands.count.failed
metrics.commands.count.total
metrics.commands.create
metrics.commands.create.failed
metrics.commands.create.total
metrics.commands.createIndexes
metrics.commands.createIndexes.failed
metrics.commands.createIndexes.total
metrics.commands.createRole
metrics.commands.createRole.failed
metrics.commands.createRole.total
metrics.commands.createUser
metrics.commands.createUser.failed
metrics.commands.createUser.total
metrics.commands.currentOp
metrics.commands.currentOp.failed
metrics.commands.currentOp.total
metrics.commands.dataSize
metrics.commands.dataSize.failed
metrics.commands.dataSize.total
metrics.commands.dbCheck
metrics.commands.dbCheck.failed
metrics.commands.dbCheck.total
metrics.commands.dbHash
metrics.commands.dbHash.failed
metrics.commands.dbHash.total
metrics.commands.dbStats
metrics.commands.dbStats.failed
metrics.commands.dbStats.total
metrics.commands.delete
metrics.commands.delete.failed
metrics.commands.delete.total
metrics.commands.distinct
metrics.commands.distinct.failed
metrics.commands.distinct.total
metrics.commands.driverOIDTest
metrics.commands.driverOIDTest.failed
metrics.commands.driverOIDTest.total
metrics.commands.drop
metrics.commands.drop.failed
metrics.commands.drop.total
metrics.commands.dropAllRolesFromDatabase
metrics.commands.dropAllRolesFromDatabase.failed
metrics.commands.dropAllRolesFromDatabase.total
metrics.commands.dropAllUsersFromDatabase
metrics.commands.dropAllUsersFromDatabase.failed
metrics.commands.dropAllUsersFromDatabase.total
metrics.commands.dropConnections
metrics.commands.dropConnections.failed
metrics.commands.dropConnections.total
metrics.commands.dropDatabase
metrics.commands.dropDatabase.failed
metrics.commands.dropDatabase.total
metrics.commands.dropIndexes
metrics.commands.dropIndexes.failed
metrics.commands.dropIndexes.total
metrics.commands.dropRole
metrics.commands.dropRole.failed
metrics.commands.dropRole.total
metrics.commands.dropUser
metrics.commands.dropUser.failed
metrics.commands.dropUser.total
metrics.commands.endSessions
metrics.commands.endSessions.failed
metrics.commands.endSessions.total
metrics.commands.explain
metrics.commands.explain.failed
metrics.commands.explain.total
metrics.commands.features
metrics.commands.features.failed
metrics.commands.features.total
metrics.commands.filemd5
metrics.commands.filemd5.failed
metrics.commands.filemd5.total
metrics.commands.find
metrics.commands.find.failed
metrics.commands.find.total
metrics.commands.findAndModify
metrics.commands.findAndModify.arrayFilters
metrics.commands.findAndModify.failed
metrics.commands.findAndModify.pipeline
metrics.commands.findAndModify.total
metrics.commands.flushRouterConfig
metrics.commands.flushRouterConfig.failed
metrics.commands.flushRouterConfig.total
metrics.commands.fsync
metrics.commands.fsync.failed
metrics.commands.fsync.total
metrics.commands.fsyncUnlock
metrics.commands.fsyncUnlock.failed
metrics.commands.fsyncUnlock.total
metrics.commands.geoSearch
metrics.commands.geoSearch.failed
metrics.commands.geoSearch.total
metrics.commands.getCmdLineOpts
metrics.commands.getCmdLineOpts.failed
metrics.commands.getCmdLineOpts.total
metrics.commands.getDatabaseVersion
metrics.commands.getDatabaseVersion.failed
metrics.commands.getDatabaseVersion.total
metrics.commands.getDefaultRWConcern
metrics.commands.getDefaultRWConcern.failed
metrics.commands.getDefaultRWConcern.total
metrics.commands.getDiagnosticData
metrics.commands.getDiagnosticData.failed
metrics.commands.getDiagnosticData.total
metrics.commands.getFreeMonitoringStatus
metrics.commands.getFreeMonitoringStatus.failed
metrics.commands.getFreeMonitoringStatus.total
metrics.commands.getLastError
metrics.commands.getLastError.failed
metrics.commands.getLastError.total
metrics.commands.getLog
metrics.commands.getLog.failed
metrics.commands.getLog.total
metrics.commands.getMore
metrics.commands.getMore.failed
metrics.commands.getMore.total
metrics.commands.getParameter
metrics.commands.getParameter.failed
metrics.commands.getParameter.total
metrics.commands.getShardMap
metrics.commands.getShardMap.failed
metrics.commands.getShardMap.total
metrics.commands.getShardVersion
metrics.commands.getShardVersion.failed
metrics.commands.getShardVersion.total
metrics.commands.getnonce
metrics.commands.getnonce.failed
metrics.commands.getnonce.total
metrics.commands.grantPrivilegesToRole
metrics.commands.grantPrivilegesToRole.failed
metrics.commands.grantPrivilegesToRole.total
metrics.commands.grantRolesToRole
metrics.commands.grantRolesToRole.failed
metrics.commands.grantRolesToRole.total
metrics.commands.grantRolesToUser
metrics.commands.grantRolesToUser.failed
metrics.commands.grantRolesToUser.total
metrics.commands.hello
metrics.commands.hello.failed
metrics.commands.hello.total
metrics.commands.hostInfo
metrics.commands.hostInfo.failed
metrics.commands.hostInfo.total
metrics.commands.insert
metrics.commands.insert.failed
metrics.commands.insert.total
metrics.commands.internalRenameIfOptionsAndIndexesMatch
metrics.commands.internalRenameIfOptionsAndIndexesMatch.failed
metrics.commands.internalRenameIfOptionsAndIndexesMatch.total
metrics.commands.invalidateUserCache
metrics.commands.invalidateUserCache.failed
metrics.commands.invalidateUserCache.total
metrics.commands.isMaster
metrics.commands.isMaster.failed
metrics.commands.isMaster.total
metrics.commands.killAllSessions
metrics.commands.killAllSessions.failed
metrics.commands.killAllSessions.total
metrics.commands.killAllSessionsByPattern
metrics.commands.killAllSessionsByPattern.failed
metrics.commands.killAllSessionsByPattern.total
metrics.commands.killCursors
metrics.commands.killCursors.failed
metrics.commands.killCursors.total
metrics.commands.killOp
metrics.commands.killOp.failed
metrics.commands.killOp.total
metrics.commands.killSessions
metrics.commands.killSessions.failed
metrics.commands.killSessions.total
metrics.commands.listCollections
metrics.commands.listCollections.failed
metrics.commands.listCollections.total
metrics.commands.listCommands
metrics.commands.listCommands.failed
metrics.commands.listCommands.total
metrics.commands.listDatabases
metrics.commands.listDatabases.failed
metrics.commands.listDatabases.total
metrics.commands.listIndexes
metrics.commands.listIndexes.failed
metrics.commands.listIndexes.total
metrics.commands.lockInfo
metrics.commands.lockInfo.failed
metrics.commands.lockInfo.total
metrics.commands.logRotate
metrics.commands.logRotate.failed
metrics.commands.logRotate.total
metrics.commands.logout
metrics.commands.logout.failed
metrics.commands.logout.total
metrics.commands.mapReduce
metrics.commands.mapReduce.failed
metrics.commands.mapReduce.total
metrics.commands.mapreduce
metrics.commands.mapreduce.shardedfinish
metrics.commands.mapreduce.shardedfinish.failed
metrics.commands.mapreduce.shardedfinish.total
metrics.commands.mergeChunks
metrics.commands.mergeChunks.failed
metrics.commands.mergeChunks.total
metrics.commands.moveChunk
metrics.commands.moveChunk.failed
metrics.commands.moveChunk.total
metrics.commands.ping
metrics.commands.ping.failed
metrics.commands.ping.total
metrics.commands.planCacheClear
metrics.commands.planCacheClear.failed
metrics.commands.planCacheClear.total
metrics.commands.planCacheClearFilters
metrics.commands.planCacheClearFilters.failed
metrics.commands.planCacheClearFilters.total
metrics.commands.planCacheListFilters
metrics.commands.planCacheListFilters.failed
metrics.commands.planCacheListFilters.total
metrics.commands.planCacheSetFilter
metrics.commands.planCacheSetFilter.failed
metrics.commands.planCacheSetFilter.total
metrics.commands.prepareTransaction
metrics.commands.prepareTransaction.failed
metrics.commands.prepareTransaction.total
metrics.commands.profile
metrics.commands.profile.failed
metrics.commands.profile.total
metrics.commands.reIndex
metrics.commands.reIndex.failed
metrics.commands.reIndex.total
metrics.commands.refreshSessions
metrics.commands.refreshSessions.failed
metrics.commands.refreshSessions.total
metrics.commands.renameCollection
metrics.commands.renameCollection.failed
metrics.commands.renameCollection.total
metrics.commands.repairDatabase
metrics.commands.repairDatabase.failed
metrics.commands.repairDatabase.total
metrics.commands.replSetAbortPrimaryCatchUp
metrics.commands.replSetAbortPrimaryCatchUp.failed
metrics.commands.replSetAbortPrimaryCatchUp.total
metrics.commands.replSetFreeze
metrics.commands.replSetFreeze.failed
metrics.commands.replSetFreeze.total
metrics.commands.replSetGetConfig
metrics.commands.replSetGetConfig.failed
metrics.commands.replSetGetConfig.total
metrics.commands.replSetGetRBID
metrics.commands.replSetGetRBID.failed
metrics.commands.replSetGetRBID.total
metrics.commands.replSetGetStatus
metrics.commands.replSetGetStatus.failed
metrics.commands.replSetGetStatus.total
metrics.commands.replSetHeartbeat
metrics.commands.replSetHeartbeat.failed
metrics.commands.replSetHeartbeat.total
metrics.commands.replSetInitiate
metrics.commands.replSetInitiate.failed
metrics.commands.replSetInitiate.total
metrics.commands.replSetMaintenance
metrics.commands.replSetMaintenance.failed
metrics.commands.replSetMaintenance.total
metrics.commands.replSetReconfig
metrics.commands.replSetReconfig.failed
metrics.commands.replSetReconfig.total
metrics.commands.replSetRequestVotes
metrics.commands.replSetRequestVotes.failed
metrics.commands.replSetRequestVotes.total
metrics.commands.replSetResizeOplog
metrics.commands.replSetResizeOplog.failed
metrics.commands.replSetResizeOplog.total
metrics.commands.replSetStepDown
metrics.commands.replSetStepDown.failed
metrics.commands.replSetStepDown.total
metrics.commands.replSetStepDownWithForce
metrics.commands.replSetStepDownWithForce.failed
metrics.commands.replSetStepDownWithForce.total
metrics.commands.replSetStepUp
metrics.commands.replSetStepUp.failed
metrics.commands.replSetStepUp.total
metrics.commands.replSetSyncFrom
metrics.commands.replSetSyncFrom.failed
metrics.commands.replSetSyncFrom.total
metrics.commands.replSetUpdatePosition
metrics.commands.replSetUpdatePosition.failed
metrics.commands.replSetUpdatePosition.total
metrics.commands.resetError
metrics.commands.resetError.failed
metrics.commands.resetError.total
metrics.commands.revokePrivilegesFromRole
metrics.commands.revokePrivilegesFromRole.failed
metrics.commands.revokePrivilegesFromRole.total
metrics.commands.revokeRolesFromRole
metrics.commands.revokeRolesFromRole.failed
metrics.commands.revokeRolesFromRole.total
metrics.commands.revokeRolesFromUser
metrics.commands.revokeRolesFromUser.failed
metrics.commands.revokeRolesFromUser.total
metrics.commands.rolesInfo
metrics.commands.rolesInfo.failed
metrics.commands.rolesInfo.total
metrics.commands.saslContinue
metrics.commands.saslContinue.failed
metrics.commands.saslContinue.total
metrics.commands.saslStart
metrics.commands.saslStart.failed
metrics.commands.saslStart.total
metrics.commands.serverStatus
metrics.commands.serverStatus.failed
metrics.commands.serverStatus.total
metrics.commands.setDefaultRWConcern
metrics.commands.setDefaultRWConcern.failed
metrics.commands.setDefaultRWConcern.total
metrics.commands.setFeatureCompatibilityVersion
metrics.commands.setFeatureCompatibilityVersion.failed
metrics.commands.setFeatureCompatibilityVersion.total
metrics.commands.setFreeMonitoring
metrics.commands.setFreeMonitoring.failed
metrics.commands.setFreeMonitoring.total
metrics.commands.setIndexCommitQuorum
metrics.commands.setIndexCommitQuorum.failed
metrics.commands.setIndexCommitQuorum.total
metrics.commands.setParameter
metrics.commands.setParameter.failed
metrics.commands.setParameter.total
metrics.commands.setShardVersion
metrics.commands.setShardVersion.failed
metrics.commands.setShardVersion.total
metrics.commands.shardConnPoolStats
metrics.commands.shardConnPoolStats.failed
metrics.commands.shardConnPoolStats.total
metrics.commands.shardingState
metrics.commands.shardingState.failed
metrics.commands.shardingState.total
metrics.commands.shutdown
metrics.commands.shutdown.failed
metrics.commands.shutdown.total
metrics.commands.splitChunk
metrics.commands.splitChunk.failed
metrics.commands.splitChunk.total
metrics.commands.splitVector
metrics.commands.splitVector.failed
metrics.commands.splitVector.total
metrics.commands.startRecordingTraffic
metrics.commands.startRecordingTraffic.failed
metrics.commands.startRecordingTraffic.total
metrics.commands.startSession
metrics.commands.startSession.failed
metrics.commands.startSession.total
metrics.commands.stopRecordingTraffic
metrics.commands.stopRecordingTraffic.failed
metrics.commands.stopRecordingTraffic.total
metrics.commands.top
metrics.commands.top.failed
metrics.commands.top.total
metrics.commands.unsetSharding
metrics.commands.unsetSharding.failed
metrics.commands.unsetSharding.total
metrics.commands.update
metrics.commands.update.arrayFilters
metrics.commands.update.failed
metrics.commands.update.pipeline
metrics.commands.update.total
metrics.commands.updateRole
metrics.commands.updateRole.failed
metrics.commands.updateRole.total
metrics.commands.updateUser
metrics.commands.updateUser.failed
metrics.commands.updateUser.total
metrics.commands.usersInfo
metrics.commands.usersInfo.failed
metrics.commands.usersInfo.total
metrics.commands.validate
metrics.commands.validate.failed
metrics.commands.validate.total
metrics.commands.voteCommitIndexBuild
metrics.commands.voteCommitIndexBuild.failed
metrics.commands.voteCommitIndexBuild.total
metrics.commands.waitForFailPoint
metrics.commands.waitForFailPoint.failed
metrics.commands.waitForFailPoint.total
metrics.commands.whatsmyuri
metrics.commands.whatsmyuri.failed
metrics.commands.whatsmyuri.total
The following counters live directly under metrics (siblings of metrics.commands), so they are not part of this method's output:
metrics.cursor
metrics.cursor.timedOut
metrics.cursor.open
metrics.cursor.open.noTimeout
metrics.cursor.open.pinned
metrics.cursor.open.total
metrics.document
metrics.document.deleted
metrics.document.inserted
metrics.document.returned
metrics.document.updated
metrics.getLastError
metrics.getLastError.wtime
metrics.getLastError.wtime.num
metrics.getLastError.wtime.totalMillis
metrics.getLastError.wtimeouts
metrics.getLastError.default
metrics.getLastError.default.unsatisfiable
metrics.getLastError.default.wtimeouts
metrics.operation
metrics.operation.scanAndOrder
metrics.operation.writeConflicts
:param mongodb_status_dict:
:return:
"""
mongodb_metrics_commands = mongodb_status_dict['metrics']['commands']
return json.dumps(mongodb_metrics_commands)
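Almost every command entry carries a `failed` / `total` pair, which makes per-command failure rates easy to derive. A minimal sketch over a hypothetical sample (a real `metrics.commands` dict has hundreds of entries):

```python
# Hypothetical sample mirroring serverStatus['metrics']['commands'].
sample_commands = {
    'find': {'failed': 3, 'total': 1000},
    'insert': {'failed': 0, 'total': 500},
    'update': {'failed': 25, 'total': 250},
}

def failure_rates(commands):
    """Failure rate per command, skipping commands that never executed."""
    return {
        name: c['failed'] / c['total']
        for name, c in commands.items()
        if isinstance(c, dict) and c.get('total', 0) > 0
    }

rates = failure_rates(sample_commands)
worst = max(rates, key=rates.get)
```

The `isinstance` guard matters because a few entries (for example nested sub-documents) are not plain failed/total pairs.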
@staticmethod
def mongodb_status_metrics_operatorCounters(mongodb_status_dict):
"""
metrics.operatorCounters
metrics.operatorCounters.match
metrics.operatorCounters.match.$all
metrics.operatorCounters.match.$alwaysFalse
metrics.operatorCounters.match.$alwaysTrue
metrics.operatorCounters.match.$and
metrics.operatorCounters.match.$bitsAllClear
metrics.operatorCounters.match.$bitsAllSet
metrics.operatorCounters.match.$bitsAnyClear
metrics.operatorCounters.match.$bitsAnySet
metrics.operatorCounters.match.$comment
metrics.operatorCounters.match.$elemMatch
metrics.operatorCounters.match.$eq
metrics.operatorCounters.match.$exists
metrics.operatorCounters.match.$expr
metrics.operatorCounters.match.$geoIntersects
metrics.operatorCounters.match.$geoWithin
metrics.operatorCounters.match.$gt
metrics.operatorCounters.match.$gte
metrics.operatorCounters.match.$in
metrics.operatorCounters.match.$jsonSchema
metrics.operatorCounters.match.$lt
metrics.operatorCounters.match.$lte
metrics.operatorCounters.match.$mod
metrics.operatorCounters.match.$ne
metrics.operatorCounters.match.$near
metrics.operatorCounters.match.$nearSphere
metrics.operatorCounters.match.$nin
metrics.operatorCounters.match.$nor
metrics.operatorCounters.match.$not
metrics.operatorCounters.match.$or
metrics.operatorCounters.match.$regex
metrics.operatorCounters.match.$sampleRate
metrics.operatorCounters.match.$size
metrics.operatorCounters.match.$text
metrics.operatorCounters.match.$type
metrics.operatorCounters.match.$where
:param mongodb_status_dict:
:return:
"""
mongodb_metrics_operatorCounters = mongodb_status_dict['metrics']['operatorCounters']
return json.dumps(mongodb_metrics_operatorCounters)
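These match-operator counters answer "which query operators does this workload actually use?" A sketch ranking the busiest operators; the counts are hypothetical:

```python
# Hypothetical sample mirroring serverStatus['metrics']['operatorCounters']['match'].
sample_match_ops = {'$eq': 120, '$in': 40, '$regex': 7, '$exists': 0}

def top_operators(counters, n=2):
    """The n most-used operators, busiest first; unused operators are dropped."""
    return sorted((op for op in counters if counters[op] > 0),
                  key=counters.get, reverse=True)[:n]

busiest = top_operators(sample_match_ops)
```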
@staticmethod
def mongodb_status_metrics_query(mongodb_status_dict):
"""
metrics.query
metrics.query.planCacheTotalSizeEstimateBytes
metrics.query.updateOneOpStyleBroadcastWithExactIDCount
:param mongodb_status_dict:
:return:
"""
mongodb_metrics_query = mongodb_status_dict['metrics']['query']
return json.dumps(mongodb_metrics_query)
@staticmethod
def mongodb_status_metrics_queryExecutor(mongodb_status_dict):
"""
metrics.queryExecutor
metrics.queryExecutor.scanned
metrics.queryExecutor.scannedObjects
metrics.queryExecutor.collectionScans
metrics.queryExecutor.collectionScans.nonTailable
metrics.queryExecutor.collectionScans.total
:param mongodb_status_dict:
:return:
"""
mongodb_metrics_queryExecutor = mongodb_status_dict['metrics']['queryExecutor']
return json.dumps(mongodb_metrics_queryExecutor)
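A common use of these counters is the "query targeting" ratio: documents examined per document returned, which combines `metrics.queryExecutor.scannedObjects` with `metrics.document.returned`. A sketch with hypothetical values; a high ratio suggests missing or poorly selective indexes:

```python
# Hypothetical counter values from two parts of serverStatus:
scanned_objects = 50_000     # metrics.queryExecutor.scannedObjects
documents_returned = 1_000   # metrics.document.returned

# Documents examined per document returned; 1.0 would be perfect targeting.
targeting_ratio = scanned_objects / documents_returned
inefficient = targeting_ratio > 100  # threshold chosen arbitrarily here
```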
@staticmethod
def mongodb_status_metrics_record(mongodb_status_dict):
"""
metrics.record
metrics.record.moves
:param mongodb_status_dict:
:return:
"""
mongodb_metrics_record = mongodb_status_dict['metrics']['record']
return json.dumps(mongodb_metrics_record)
@staticmethod
def mongodb_status_metrics_repl(mongodb_status_dict):
"""
metrics.repl
metrics.repl.executor
metrics.repl.executor.pool
metrics.repl.executor.pool.inProgressCount
metrics.repl.executor.queues
metrics.repl.executor.queues.networkInProgress
metrics.repl.executor.queues.sleepers
metrics.repl.executor.unsignaledEvents
metrics.repl.executor.shuttingDown
metrics.repl.executor.networkInterface
metrics.repl.apply
metrics.repl.apply.attemptsToBecomeSecondary
metrics.repl.apply.batchSize
metrics.repl.apply.batches
metrics.repl.apply.batches.num
metrics.repl.apply.batches.totalMillis
metrics.repl.apply.ops
metrics.repl.buffer
metrics.repl.buffer.count
metrics.repl.buffer.maxSizeBytes
metrics.repl.buffer.sizeBytes
metrics.repl.initialSync
metrics.repl.initialSync.completed
metrics.repl.initialSync.failedAttempts
metrics.repl.initialSync.failures
metrics.repl.network
metrics.repl.network.bytes
metrics.repl.network.getmores
metrics.repl.network.getmores.num
metrics.repl.network.getmores.totalMillis
metrics.repl.network.getmores.numEmptyBatches
metrics.repl.notPrimaryLegacyUnacknowledgedWrites
metrics.repl.notPrimaryUnacknowledgedWrites
metrics.repl.oplogGetMoresProcessed
metrics.repl.oplogGetMoresProcessed.num
metrics.repl.oplogGetMoresProcessed.totalMillis
metrics.repl.ops
metrics.repl.readersCreated
metrics.repl.replSetUpdatePosition
metrics.repl.replSetUpdatePosition.num
metrics.repl.stateTransition
metrics.repl.stateTransition.lastStateTransition
metrics.repl.stateTransition.userOperationsKilled
metrics.repl.stateTransition.userOperationsRunning
metrics.repl.syncSource
metrics.repl.syncSource.numSelections
metrics.repl.syncSource.numTimesChoseDifferent
metrics.repl.syncSource.numTimesChoseSame
metrics.repl.syncSource.numTimesCouldNotFind
:param mongodb_status_dict:
:return:
"""
mongodb_metrics_repl = mongodb_status_dict['metrics']['repl']
return json.dumps(mongodb_metrics_repl)
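The `num` / `totalMillis` pairs in this section (for example under `metrics.repl.network.getmores`) yield average latencies directly. A sketch with hypothetical values:

```python
# Hypothetical sample mirroring serverStatus['metrics']['repl']['network']['getmores'].
sample_getmores = {'num': 200, 'totalMillis': 5000, 'numEmptyBatches': 20}

# Average duration of a replication getmore, guarding against division by zero.
avg_getmore_ms = (
    sample_getmores['totalMillis'] / sample_getmores['num']
    if sample_getmores['num'] else 0.0
)
```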
@staticmethod
def mongodb_status_metrics_ttl(mongodb_status_dict):
"""
metrics.ttl
metrics.ttl.deletedDocuments
metrics.ttl.passes
:param mongodb_status_dict:
:return:
"""
mongodb_metrics_ttl = mongodb_status_dict['metrics']['ttl']
return json.dumps(mongodb_metrics_ttl)
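The two TTL counters combine into an average number of documents removed per TTL pass. A minimal sketch; the sample values are hypothetical:

```python
# Hypothetical sample mirroring serverStatus['metrics']['ttl'].
sample_ttl = {'deletedDocuments': 12000, 'passes': 300}

def deletions_per_pass(ttl):
    """Average documents deleted per TTL pass (0.0 before the first pass)."""
    return ttl['deletedDocuments'] / ttl['passes'] if ttl['passes'] else 0.0

rate = deletions_per_pass(sample_ttl)
```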
if __name__ == '__main__':
    print(MongoDB_Conn.__doc__)
    mongo = MongoDB_Conn('10.40.0.94')
    db_connect = mongo.connect()
    print(MongoDB_status.get_mongodb_status(db_connect))