MongoDB 4.0 Basics: Replica Sets, Elections, and Keyfile Authentication (Part 2)


Table of Contents

  • MongoDB 4.0 Basics: Replica Sets, Elections, and Keyfile Authentication (Part 2)
  • I. MongoDB Replica Sets
  • II. Deploying a Replica Set
  • 2.1: Deploy four instances
  • 2.2: Initialize the replica set
  • 2.3: Automatic failover
  • 1. Check the processes first
  • 2. Kill one node and watch the automatic election
  • 2.4: Manually switch the primary
  • 1. Keep a node out of elections for 30 s
  • 2. Switch manually
  • III. Replica Set Elections
  • 3.1 Set priorities on instance 1
  • 3.2: Check each node's role
  • 3.3: Routine data operations
  • 3.4 Simulate a node failure and watch the election
  • 3.5 Bring 27017 back; it becomes primary again
  • 3.6 Allow reads on a secondary
  • 3.7 Offline maintenance: resize the oplog
  • 1. Shut down 27018
  • 2. Comment out the replication settings
  • 3. Restore the settings and the port number
  • 4. Restart the service
  • 5. Log in to 27018
  • 3.8 Step down the primary
  • IV. Deploying Keyfile Authentication
  • 4.1 Edit the config files
  • 4.2 Write the four keyfiles
  • 4.3 Log in to 27018 and test


I. MongoDB Replica Sets

  • Overview:

1. A MongoDB replica set is a group of mongod instances (processes): one primary node and several secondary nodes. The MongoDB driver (client) writes all data to the primary, and the secondaries replicate those writes from the primary, so every member of the set stores the same data set, providing high availability.

2. Clients write on the primary and can read from the secondaries; the primary and secondaries exchange data to keep it consistent. If one node fails, another node takes over the workload immediately, with no downtime.

  • Advantages

Safer data;
High data availability;
Disaster recovery;
Maintenance without downtime (backups, index rebuilds, failover);
Read scaling (reads served by extra replicas);
Replica sets are transparent to the application.

  • Features

A cluster of N nodes;
Any node can become the primary;
All writes go to the primary;
Automatic failover;
Automatic recovery.

II. Deploying a Replica Set

2.1: Deploy four instances

  • Install
1. Configure the yum repository
[root@localhost ~]# cd /etc/yum.repos.d/
[root@localhost yum.repos.d]# vim mongodb-org.repo
[mongodb-org]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.0/x86_64/
gpgcheck=1		'enable GPG signature checking'
enabled=1		'enable the repo'
gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc
[root@localhost yum.repos.d]# yum list	'refresh the repo metadata'

2. Install MongoDB
[root@localhost yum.repos.d]# yum install mongodb-org -y	'install the MongoDB packages'
[root@localhost yum.repos.d]# vim /etc/mongod.conf 
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log    <---- log file location
storage:
  dbPath: /var/lib/mongo   <---- data directory
net:
  port: 27017  
  bindIp: 0.0.0.0   'bind to all addresses so other hosts can connect'
[root@localhost yum.repos.d]# systemctl start mongod.service  'start the service'
[root@localhost yum.repos.d]# netstat -ntap |grep mongod
tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN      12857/mongod  
[root@localhost yum.repos.d]# iptables -F
[root@localhost yum.repos.d]# setenforce 0
[root@localhost yum.repos.d]# mongo
> exit    'quit the shell'

3. Configure the additional instances
  [root@promote bin]# cd /etc
  [root@mongod etc]# cp -p  mongod.conf mongod2.conf
  [root@mongod etc]# cp -p  mongod.conf mongod3.conf
  [root@mongod etc]# cp -p  mongod.conf mongod4.conf
  [root@promote etc]# vim mongod2.conf
  systemLog:
    destination: file
    logAppend: true
    path: /data/mongodb/mongod2.log

  # Where and how to store data.

  storage:
    dbPath: /data/mongodb/mongod2
    journal:
      enabled: true
  net:
    port: 27018
    bindIp: 0.0.0.0 
  [root@promote etc]# vim mongod3.conf 
  systemLog:
    destination: file
    logAppend: true
    path: /data/mongodb/mongod3.log

  # Where and how to store data.

  storage:
    dbPath: /data/mongodb/mongod3
    journal:
      enabled: true
  net:
    port: 27019
    bindIp: 0.0.0.0
  [root@promote etc]# vim mongod4.conf
  systemLog:
    destination: file
    logAppend: true
    path: /data/mongodb/mongod4.log

  # Where and how to store data.

  storage:
    dbPath: /data/mongodb/mongod4
    journal:
      enabled: true
  net:
    port: 27020
    bindIp: 0.0.0.0
  [root@promote ~]# mkdir -p /data/mongodb
  [root@promote ~]# cd /data/mongodb/
  [root@promote mongodb]# mkdir -p mongod2/
  [root@promote mongodb]# mkdir -p mongod3/
  [root@promote mongodb]# mkdir -p mongod4/
  [root@promote mongodb]# touch mongod2.log
  [root@promote mongodb]# touch mongod3.log
  [root@promote mongodb]# touch mongod4.log
  [root@test01 mongodb]# ls
  mongod2  mongod2.log  mongod3  mongod3.log  mongod4  mongod4.log
  [root@promote mongodb]# chmod 777 mongod*    'lab-only shortcut; in production give the mongod user ownership instead'


  [root@localhost ~]# mongod -f /etc/mongod2.conf
  [root@localhost ~]# mongod -f /etc/mongod3.conf
  [root@localhost ~]# mongod -f /etc/mongod4.conf
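The three config files above differ only in the instance number and port. As a sketch, a hypothetical Python helper (`instance_conf` is not part of the article's setup; the `/data/mongodb` layout is assumed from it) can render each file:

```python
def instance_conf(n: int, port: int) -> str:
    """Render a minimal mongod config for instance n, following the
    /data/mongodb layout used in this article (hypothetical helper)."""
    return (
        "systemLog:\n"
        "  destination: file\n"
        "  logAppend: true\n"
        f"  path: /data/mongodb/mongod{n}.log\n"
        "storage:\n"
        f"  dbPath: /data/mongodb/mongod{n}\n"
        "  journal:\n"
        "    enabled: true\n"
        "net:\n"
        f"  port: {port}\n"
        "  bindIp: 0.0.0.0\n"
    )

# Instances 2-4 listen on consecutive ports starting at 27018.
for n, port in ((2, 27018), (3, 27019), (4, 27020)):
    print(f"--- mongod{n}.conf ---")
    print(instance_conf(n, port))
```

Generating the files this way avoids the copy-paste slips (wrong file name, wrong indent level) that are easy to make when editing three near-identical YAML files by hand.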

2.2: Initialize the replica set

  • Define the replica set configuration and initialize it
[root@localhost ~]# mongo
> cfg={"_id":"lpf","members":[{"_id":0,"host":"192.168.100.140:27017"},{"_id":1,"host":"192.168.100.140:27018"},{"_id":2,"host":"192.168.100.140:27019"}]}
{
	"_id" : "lpf",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.100.140:27017"
		},
		{
			"_id" : 1,
			"host" : "192.168.100.140:27018"
		},
		{
			"_id" : 2,
			"host" : "192.168.100.140:27019"
		}
	]
}
> db.stats()	'not yet initialized, so an error is expected'
{
	"operationTime" : Timestamp(0, 0),
	"ok" : 0,
	"errmsg" : "node is not in primary or recovering state",
	"code" : 13436,
	"codeName" : "NotMasterOrSecondary",
	"$clusterTime" : {
		"clusterTime" : Timestamp(0, 0),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
> rs.initiate(cfg)			'initialize the replica set'
{
	"ok" : 1,
	"operationTime" : Timestamp(1600568185, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1600568185, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}

lpf:PRIMARY> rs.status()						'check the replica set status'
{
	"set" : "lpf",
	"date" : ISODate("2020-09-20T02:21:54.811Z"),
	"myState" : 1,
	"term" : NumberLong(1),
	"syncingTo" : "",
	"syncSourceHost" : "",
	"syncSourceId" : -1,
	"heartbeatIntervalMillis" : NumberLong(2000),
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1600568512, 2),
			"t" : NumberLong(1)
		},
		"readConcernMajorityOpTime" : {
			"ts" : Timestamp(1600568512, 2),
			"t" : NumberLong(1)
		},
		"appliedOpTime" : {
			"ts" : Timestamp(1600568512, 2),
			"t" : NumberLong(1)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1600568512, 2),
			"t" : NumberLong(1)
		}
	},
	"members" : [
		{
			"_id" : 0,
			"name" : "192.168.100.140:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 903,
			"optime" : {
				"ts" : Timestamp(1600568512, 2),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2020-09-20T02:21:52Z"),
			"syncingTo" : "",
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "",
			"electionTime" : Timestamp(1600568196, 1),
			"electionDate" : ISODate("2020-09-20T02:16:36Z"),
			"configVersion" : 1,
			"self" : true,
			"lastHeartbeatMessage" : ""
		},
		{
			"_id" : 1,
			"name" : "192.168.100.140:27018",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 328,
			"optime" : {
				"ts" : Timestamp(1600568512, 2),
				"t" : NumberLong(1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1600568512, 2),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2020-09-20T02:21:52Z"),
			"optimeDurableDate" : ISODate("2020-09-20T02:21:52Z"),
			"lastHeartbeat" : ISODate("2020-09-20T02:21:52.907Z"),
			"lastHeartbeatRecv" : ISODate("2020-09-20T02:21:53.185Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "192.168.100.140:27017",
			"syncSourceHost" : "192.168.100.140:27017",
			"syncSourceId" : 0,
			"infoMessage" : "",
			"configVersion" : 1
		},
		{
			"_id" : 2,
			"name" : "192.168.100.140:27019",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 328,
			"optime" : {
				"ts" : Timestamp(1600568512, 2),
				"t" : NumberLong(1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1600568512, 2),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2020-09-20T02:21:52Z"),
			"optimeDurableDate" : ISODate("2020-09-20T02:21:52Z"),
			"lastHeartbeat" : ISODate("2020-09-20T02:21:52.907Z"),
			"lastHeartbeatRecv" : ISODate("2020-09-20T02:21:53.185Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "192.168.100.140:27017",
			"syncSourceHost" : "192.168.100.140:27017",
			"syncSourceId" : 0,
			"infoMessage" : "",
			"configVersion" : 1
		}
	],
	"ok" : 1,
	"operationTime" : Timestamp(1600568512, 2),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1600568512, 2),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
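The document passed to rs.initiate() is plain JSON, so it can be generated rather than typed out. A minimal sketch (the IP and set name are the ones used in this article; `make_rs_config` is a hypothetical helper):

```python
def make_rs_config(name: str, hosts: list) -> dict:
    """Build a replSetInitiate config document: one member per host,
    with sequential _id values starting at 0."""
    return {
        "_id": name,
        "members": [{"_id": i, "host": h} for i, h in enumerate(hosts)],
    }

# The three members initialized in this section.
cfg = make_rs_config("lpf", [f"192.168.100.140:{p}" for p in (27017, 27018, 27019)])
print(cfg)
```

With a driver such as PyMongo, a document built this way could be passed to the `replSetInitiate` admin command instead of pasting the literal into the mongo shell.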


  lpf:PRIMARY>                                  'this node is now the primary'
  [root@localhost logs]# mongo --port 27018     'connect to 27018'
  lpf:SECONDARY>                                '27018 is a secondary'
  [root@localhost logs]# mongo --port 27019     'connect to 27019'
  lpf:SECONDARY>                                '27019 is a secondary'
  lpf:PRIMARY> rs.status()                      'check the replica set status'
  • Add a node to the replica set

lpf:PRIMARY> rs.add("192.168.100.140:27020")
{
	"ok" : 1,
	"operationTime" : Timestamp(1600568634, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1600568634, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}

  • Remove a node

lpf:PRIMARY> rs.remove("192.168.100.140:27020")

2.3: Automatic failover

1. Check the processes first

[root@mongodb ~]# ps aux |grep mongod
root      10840  0.5  3.8 1594444 110620 ?      Sl   10:06   0:09 mongod -f /etc/mongod.conf
root      10874  0.5  3.4 1482568 97992 ?       Sl   10:06   0:09 mongod -f /etc/mongod2.conf
root      10908  0.5  3.4 1538984 98320 ?       Sl   10:06   0:09 mongod -f /etc/mongod3.conf
root      10942  0.5  3.2 1466056 93796 ?       Sl   10:06   0:09 mongod -f /etc/mongod4.conf
root      11874  0.0  0.0 112724   988 pts/0    S+   10:34   0:00 grep --color=auto mongod

2. Kill one node and watch the automatic election

[root@mongodb ~]# ps aux |grep mongod
root      10840  0.5  3.8 1594444 110620 ?      Sl   10:06   0:09 mongod -f /etc/mongod.conf
root      10874  0.5  3.4 1482568 97992 ?       Sl   10:06   0:09 mongod -f /etc/mongod2.conf
root      10908  0.5  3.4 1538984 98320 ?       Sl   10:06   0:09 mongod -f /etc/mongod3.conf
root      10942  0.5  3.2 1466056 93796 ?       Sl   10:06   0:09 mongod -f /etc/mongod4.conf
root      11874  0.0  0.0 112724   988 pts/0    S+   10:34   0:00 grep --color=auto mongod
[root@mongodb ~]# kill -9 10840
[root@mongodb ~]# mongo --port 27018 
lpf:SECONDARY>
[root@mongodb ~]# mongo --port 27019 
lpf:SECONDARY>
[root@mongodb ~]# mongo --port 27020
lpf:PRIMARY>
'node 27020 has become the new primary'
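What the three logins above check by hand can be read straight out of rs.status(): the member whose `stateStr` is PRIMARY. A sketch of that scan, with a status document shaped like the post-failover state (the `find_primary` helper is hypothetical):

```python
def find_primary(status: dict):
    """Return the name of the PRIMARY member in an rs.status()-style
    document, or None if no member currently holds that state."""
    for member in status.get("members", []):
        if member.get("stateStr") == "PRIMARY":
            return member["name"]
    return None

# After killing 27017, a status document would resemble:
status = {"members": [
    {"name": "192.168.100.140:27018", "stateStr": "SECONDARY"},
    {"name": "192.168.100.140:27019", "stateStr": "SECONDARY"},
    {"name": "192.168.100.140:27020", "stateStr": "PRIMARY"},
]}
print(find_primary(status))
```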

2.4: Manually switch the primary

1. Keep a node out of elections for 30 s

lpf:PRIMARY> rs.freeze(30)
{
	"ok" : 0,
	"errmsg" : "cannot freeze node when primary or running for election. state: Primary",
	"code" : 95,
	"codeName" : "NotSecondary",
	"operationTime" : Timestamp(1600570363, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1600570363, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}

Note: rs.freeze() can only be run on a secondary; executed on the primary it returns the NotSecondary error shown above.

2. Switch manually

Give up the primary role: the node will not seek re-election for at least 60 s, and up to 30 s is allowed for a secondary's oplog to catch up with the primary before the handover.

lpf:PRIMARY> rs.stepDown(60,30)
2020-09-20T10:53:40.678+0800 E QUERY    [thread1] Error: error doing query: failed: network error while attempting to run command 'replSetStepDown' on host '127.0.0.1:27020'  :
DB.prototype.runCommand@src/mongo/shell/db.js:168:1
DB.prototype.adminCommand@src/mongo/shell/db.js:186:16
rs.stepDown@src/mongo/shell/utils.js:1404:12
@(shell):1:1
2020-09-20T10:53:40.680+0800 I NETWORK  [thread1] trying reconnect to 127.0.0.1:27020 (127.0.0.1) failed
2020-09-20T10:53:40.681+0800 I NETWORK  [thread1] reconnect 127.0.0.1:27020 (127.0.0.1) ok
lpf:SECONDARY> 
lpf:SECONDARY> rs.status()
.....
		{
			"_id" : 1,
			"name" : "192.168.100.140:27018",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 1821,
			"optime" : {
				"ts" : Timestamp(1600570452, 1),
				"t" : NumberLong(3)
			},
'the primary has switched to 27018'

III. Replica Set Elections

Member types: 1. standard nodes 2. arbiter nodes 3. passive nodes
The type is set through priorities: high-priority members are standard nodes, priority-0 members are passive nodes, and arbiters hold no data and only vote.
Re-create the four instances.

3.1 Set priorities on instance 1

> cfg={"_id":"lpfs","members":[{"_id":0,"host":"192.168.100.140:27017","priority":100},{"_id":1,"host":"192.168.100.140:27018","priority":100},{"_id":2,"host":"192.168.100.140:27019","priority":0},{"_id":3,"host":"192.168.100.140:27020","arbiterOnly":true}]}
 > rs.initiate(cfg)

3.2: Check each node's role

lpfs:SECONDARY> rs.isMaster() 
{
	"hosts" : [
		"192.168.100.140:27017",          // standard nodes
		"192.168.100.140:27018"
	],
	"passives" : [
		"192.168.100.140:27019"                // passive node
	],
	"arbiters" : [
		"192.168.100.140:27020"               // arbiter node
	],
	"setName" : "lpfs",
	"setVersion" : 1,
	"ismaster" : true,
	"secondary" : false,
	"primary" : "192.168.100.140:27017",
	"me" : "192.168.100.140:27017",
	"electionId" : ObjectId("7fffffff0000000000000001"),
	...
}
lpfs:PRIMARY>
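rs.isMaster() groups members exactly by their configuration: arbiterOnly members are listed as arbiters, priority-0 members as passives, and the rest as electable hosts. A sketch of that grouping applied to the cfg from 3.1 (the `classify_members` helper is hypothetical):

```python
def classify_members(members: list) -> dict:
    """Group replica set members the way rs.isMaster() reports them."""
    groups = {"hosts": [], "passives": [], "arbiters": []}
    for m in members:
        if m.get("arbiterOnly"):
            groups["arbiters"].append(m["host"])
        elif m.get("priority", 1) == 0:      # default member priority is 1
            groups["passives"].append(m["host"])
        else:
            groups["hosts"].append(m["host"])
    return groups

# The four members configured in section 3.1.
members = [
    {"_id": 0, "host": "192.168.100.140:27017", "priority": 100},
    {"_id": 1, "host": "192.168.100.140:27018", "priority": 100},
    {"_id": 2, "host": "192.168.100.140:27019", "priority": 0},
    {"_id": 3, "host": "192.168.100.140:27020", "arbiterOnly": True},
]
print(classify_members(members))
```

This mirrors the output above: only 27017 and 27018 can stand for election, 27019 holds data but can never become primary, and 27020 only votes.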

3.3: Routine data operations

lpfs:PRIMARY> use test       ## switch to the test database
switched to db test
lpfs:PRIMARY> db.t1.insert({"id":1,"name":"jack"})               ## insert a document for user "jack" with id 1
WriteResult({ "nInserted" : 1 })
lpfs:PRIMARY> db.t1.insert({"id":2,"name":"tom"})                ## insert a document for user "tom" with id 2
WriteResult({ "nInserted" : 1 })
lpfs:PRIMARY> db.t1.find()
{ "_id" : ObjectId("5f5c987603a23cf20b5905e0"), "id" : 1, "name" : "jack" }
{ "_id" : ObjectId("5f5c988203a23cf20b5905e1"), "id" : 2, "name" : "tom" }
Update a document
lpfs:PRIMARY> db.t1.update({"id":1},{$set:{"name":"jerrt"}})
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
lpfs:PRIMARY> db.t1.find()
{ "_id" : ObjectId("5f5c987603a23cf20b5905e0"), "id" : 1, "name" : "jerrt" }
{ "_id" : ObjectId("5f5c988203a23cf20b5905e1"), "id" : 2, "name" : "tom" }
Delete a document
lpfs:PRIMARY> db.t1.remove({"id":2})
WriteResult({ "nRemoved" : 1 })
lpfs:PRIMARY> db.t1.find()
{ "_id" : ObjectId("5f5c987603a23cf20b5905e0"), "id" : 1, "name" : "jerrt" }
lpfs:PRIMARY>
Inspect the oplog
lpfs:PRIMARY> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB    ## the local database holds the oplog
test    0.000GB
lpfs:PRIMARY> use local
switched to db local
lpfs:PRIMARY> show collections          ## list the collections
oplog.rs                                ## the operation log lives here
replset.election
replset.minvalid
replset.oplogTruncateAfterPoint
startup_log
lpfs:PRIMARY> db.oplog.rs.find()    ## view the oplog entries
{ "ts" : Timestamp(1599903569, 1), "h" : NumberLong("2054745077148917962"),

3.4 Simulate a node failure and watch the election

mongod -f /etc/mongod.conf --shutdown     ## shut down the primary (27017)
mongo --port 27018                		## log in to 27018
MongoDB shell version v4.0.0
connecting to: mongodb://127.0.0.1:27018/                  
MongoDB server version: 4.0.0
Server has startup warnings:


lpfs:PRIMARY> exit     '27018 has become the new primary'
bye
## Now shut down 27018 as well and see whether 27019 joins the election
mongod -f /etc/mongod2.conf --shutdown    

## 27019 (priority 0, a passive node) does not take part in the election
lpfs:SECONDARY> exit
bye

3.5 Bring 27017 back; it becomes primary again

mongod -f /etc/mongod.conf     ## restart 27017

mongo

lpfs:PRIMARY> exit             ## 27017, with its high priority, is primary again
bye

mongod -f /etc/mongod2.conf    ## restart 27018 as well

3.6 Allow reads on a secondary

mongo --port 27018    ## log in to node 27018
MongoDB shell version v4.0.0
connecting to: mongodb://127.0.0.1:27018/
MongoDB server version: 4.0.0
Server has startup warnings:

lpfs:SECONDARY> show dbs     ## database listing fails on a secondary by default
2020-09-20T17:51:17.180+0800 E QUERY    [js] Error: listDatabases failed:{
	"operationTime" : Timestamp(1599904272, 1),
	"ok" : 0,
	"errmsg" : "not master and slaveOk=false",
	"code" : 13435,
	"codeName" : "NotMasterNoSlaveOk",
	"$clusterTime" : {
		"clusterTime" : Timestamp(1599904272, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:65:1
shellHelper.show@src/mongo/shell/utils.js:865:19
shellHelper@src/mongo/shell/utils.js:755:15
@(shellhelp2):1:1

lpfs:SECONDARY> rs.slaveOk()     ## allow read operations on this secondary connection
lpfs:SECONDARY> show dbs         'the databases are now listed'
admin   0.000GB
config  0.000GB
local   0.000GB
test    0.000GB
lpfs:SECONDARY> rs.printReplicationInfo()      ## check the oplog size
configured oplog size:   4329.4130859375MB    ## the oplog is about 4329 MB
log length start to end: 803secs (0.22hrs)
oplog first event time:  Sat Sep 20 2020 17:39:29 GMT+0800 (CST)
oplog last event time:   Sat Sep 20 2020 17:52:52 GMT+0800 (CST)
now:                     Sat Sep 20 2020 17:52:59 GMT+0800 (CST)
lpfs:SECONDARY>  rs.printSlaveReplicationInfo()    ## show how far each member lags the primary
source: 192.168.100.20:27018
	syncedTo: Sat Sep 20 2020 17:53:12 GMT+0800 (CST)
	-10 secs (0 hrs) behind the primary
source: 192.168.100.20:27019
	syncedTo: Sat Sep 20 2020 17:53:02 GMT+0800 (CST)
	0 secs (0 hrs) behind the primary
lpfs:SECONDARY>
Arbiter nodes do not sync data from the primary.
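The "secs behind the primary" figure above is just the difference between a reference optime on the primary and each member's syncedTo timestamp; a slightly newer secondary timestamp yields the negative value seen for 27018. A sketch of the arithmetic (the `lag_seconds` helper is hypothetical):

```python
from datetime import datetime

def lag_seconds(primary_optime: datetime, synced_to: datetime) -> float:
    """Seconds a member lags the primary, as rs.printSlaveReplicationInfo()
    reports; negative means the member's timestamp is newer than the
    reference optime used in the comparison."""
    return (primary_optime - synced_to).total_seconds()

# Timestamps shaped like the output above.
primary = datetime(2020, 9, 20, 17, 53, 2)
synced_even = datetime(2020, 9, 20, 17, 53, 2)    # in step with the primary
synced_ahead = datetime(2020, 9, 20, 17, 53, 12)  # 10 s newer reference
print(lag_seconds(primary, synced_even), lag_seconds(primary, synced_ahead))
```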

3.7 Offline maintenance: resize the oplog

1. Shut down 27018

lpfs:SECONDARY> use admin
switched to db admin
shars:SECONDARY> db.shutdownServer()
server should be down...
2020-09-20T17:55:03.820+0800 I NETWORK  [js] trying reconnect to 127.0.0.1:27018 failed
2020-09-20T17:55:03.820+0800 I NETWORK  [js] reconnect 127.0.0.1:27018 failed failed
2020-09-20T17:55:03.823+0800 I NETWORK  [js] trying reconnect to 127.0.0.1:27018 failed
2020-09-20T17:55:03.823+0800 I NETWORK  [js] reconnect 127.0.0.1:27018 failed failed
> exit
bye


shars:PRIMARY> rs.help()    'list the replica set helper commands'

2. Comment out the replication settings

[root@localhost logs]# vim /etc/mongod2.conf 
net:
  port: 27028    ## temporarily change the port to 27028
  bindIp: 0.0.0.0  # Listen to local interface only, comment to listen on all interfaces.
#replication:                     ## comment out the replication section
#     replSetName: shars
[root@pc-2 mongodb]# mongod -f /etc/mongod2.conf
2020-09-12T17:58:27.287+0800 I CONTROL  [main] Automatically disabling TLS 1.0, toforce-enable TLS 1.0 specify --sslDisabledProtocols 'none'
about to fork child process, waiting until server is ready for connections.
forked process: 20417
child process started successfully, parent exiting
[root@pc-2 mongodb]# mongodump --port 27028 --db local --collection 'oplog.rs'    ## back up the oplog
2020-09-12T17:58:47.952+0800	writing local.oplog.rs to
2020-09-12T17:58:47.954+0800	done dumping local.oplog.rs (94 documents)
[root@pc-2 mongodb]# mongo --port 27028    'log back in'
MongoDB shell version v4.0.0
connecting to: mongodb://127.0.0.1:27028/
MongoDB server version: 4.0.0
Server has startup warnings:

> use local        ## switch to the local database
switched to db local
> show tables
oplog.rs
replset.election
replset.minvalid
replset.oplogTruncateAfterPoint
startup_log
> db.oplog.rs.drop()    ## drop the old oplog
true
>  db.runCommand( { create: "oplog.rs", capped: true, size: (2 * 1024 * 1024) } )          ## recreate the oplog as a 2 MB capped collection
{ "ok" : 1 }                
> use admin      ## when done, shut the service down again
switched to db admin
> db.shutdownServer()
server should be down...
2020-09-12T18:00:45.909+0800 I NETWORK  [js] trying reconnect to 127.0.0.1:27028 failed
2020-09-12T18:00:45.909+0800 I NETWORK  [js] reconnect 127.0.0.1:27028 failed failed
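The `size` argument to the create command is given in bytes, so `2 * 1024 * 1024` recreates oplog.rs as a 2 MB capped collection. A sketch of building that command document (the `oplog_create_cmd` helper is hypothetical):

```python
def oplog_create_cmd(size_mb: int) -> dict:
    """Command document that recreates oplog.rs as a capped collection
    of size_mb megabytes (MongoDB expects the size in bytes)."""
    return {"create": "oplog.rs", "capped": True, "size": size_mb * 1024 * 1024}

print(oplog_create_cmd(2))
```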

3. Restore the settings and the port number

[root@pc-2 mongodb]# vim /etc/mongod2.conf

# network interfaces
net:
  port: 27018
  bindIp: 0.0.0.0   # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.


#security:

#operationProfiling:

replication:
  replSetName: shars
  oplogSizeMB: 2         ## set the oplog size to 2 MB, matching the capped collection recreated above

4. Restart the service

[root@pc-2 mongodb]# mongod -f /etc/mongod2.conf  --shutdown

[root@pc-2 mongodb]# mongod -f /etc/mongod2.conf

5. Log in to 27018

[root@pc-2 mongodb]# mongo --port 27018
MongoDB shell version v4.0.0

shars:SECONDARY>  rs.printReplicationInfo()      ## check the oplog size
configured oplog size:   2MB    // resize succeeded: 2 MB

log length start to end: 20secs (0.01hrs)
oplog first event time:  Sat Sep 12 2020 18:04:52 GMT+0800 (CST)
oplog last event time:   Sat Sep 12 2020 18:05:12 GMT+0800 (CST)
now:                     Sat Sep 12 2020 18:05:12 GMT+0800 (CST)
shars:SECONDARY>

3.8 Step down the primary

## Log in to the primary node
[root@pc-2 mongodb]# mongo
MongoDB shell version v4.0.0
## Give up the primary role
shars:PRIMARY> rs.stepDown()
2020-09-12T18:06:27.019+0800 E QUERY    [js] Error: error doing query: failed: network error while attempting to run command 'replSetStepDown' on host '127.0.0.1:27017'  :
DB.prototype.runCommand@src/mongo/shell/db.js:168:1
DB.prototype.adminCommand@src/mongo/shell/db.js:186:16
rs.stepDown@src/mongo/shell/utils.js:1398:12
@(shell):1:1
2020-09-12T18:06:27.021+0800 I NETWORK  [js] trying reconnect to 127.0.0.1:27017 failed
2020-09-12T18:06:27.022+0800 I NETWORK  [js] reconnect 127.0.0.1:27017 ok
shars:SECONDARY> exit
bye

## Log in to node 27018 again
[root@pc-2 mongodb]# mongo --port 27018
MongoDB shell version v4.0.0


shars:PRIMARY>     ## 27018 is now the primary

IV. Deploying Keyfile Authentication

First create an administrative user on the primary:

shars:PRIMARY> use admin
switched to db admin
shars:PRIMARY> db.createUser({"user":"root","pwd":"123","roles":["root"]})
Successfully added user: { "user" : "root", "roles" : [ "root" ] }
shars:PRIMARY> exit
bye

4.1 Edit the config files

[root@pc-2 mongodb]# vim /etc/mongod.conf
[root@pc-2 mongodb]# vim /etc/mongod2.conf
[root@pc-2 mongodb]# vim /etc/mongod3.conf
[root@pc-2 mongodb]# vim /etc/mongod4.conf
"Add to each file (a different keyfile path per instance):"
security:
   keyFile: /usr/bin/sharskey1        ## path to the keyfile
   clusterAuthMode: keyFile          ## authenticate cluster members with the keyfile
  
security:
   keyFile: /usr/bin/sharskey2
   clusterAuthMode: keyFile

security:
   keyFile: /usr/bin/sharskey3
   clusterAuthMode: keyFile

security:
   keyFile: /usr/bin/sharskey4
   clusterAuthMode: keyFile

4.2 Write the four keyfiles

[root@localhost logs]# cd /usr/bin/
[root@localhost bin]# echo "shars key" > sharskey1
[root@localhost bin]# echo "shars key" > sharskey2
[root@localhost bin]# echo "shars key" > sharskey3
[root@localhost bin]# echo "shars key" > sharskey4
[root@localhost bin]# chmod 600 sharskey*    ## mongod refuses group/world-readable keyfiles; the content must be identical on every member

Then restart each mongod instance so the security settings take effect.
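The echoed string above is fine for a lab, but MongoDB keyfiles are normally random base64 text (6-1024 characters, the usual recipe being `openssl rand -base64 756`), byte-identical on every member and readable only by the owner. A hypothetical generator sketch:

```python
import base64
import os
import secrets

def write_keyfile(path: str, nbytes: int = 756) -> str:
    """Write a MongoDB-style keyfile: random base64 content, mode 600.
    Copy the SAME file to every replica set member."""
    key = base64.b64encode(secrets.token_bytes(nbytes)).decode()
    with open(path, "w") as f:
        f.write(key)
    os.chmod(path, 0o600)
    return key

# Generate one keyfile (path chosen for illustration), then distribute copies.
key = write_keyfile("/tmp/sharskey1")
print(len(key), "base64 characters")
```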

4.3 Log in to 27018 and test

[root@pc-2 bin]# mongo --port 27018
MongoDB shell version v4.0.0
connecting to: mongodb://127.0.0.1:27018/
MongoDB server version: 4.0.0
shars:SECONDARY> exit
bye
[root@pc-2 bin]# mongo --port 27017
MongoDB shell version v4.0.0
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 4.0.0
shars:PRIMARY> show dbs    // with auth enabled, unauthenticated commands are rejected even on the primary
2020-09-12T18:29:51.682+0800 E QUERY    [js] Error: listDatabases failed:{
	"operationTime" : Timestamp(1599906589, 1),
	"ok" : 0,
	"errmsg" : "command listDatabases requires authentication",
	"code" : 13,
	"codeName" : "Unauthorized",
	"$clusterTime" : {
		"clusterTime" : Timestamp(1599906589, 1),
		"signature" : {
			"hash" : BinData(0,"AStJmqe4VFYS0OmsYUypbImu66o="),
			"keyId" : NumberLong("6871533557148286977")
		}
	}
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:65:1
shellHelper.show@src/mongo/shell/utils.js:865:19
shellHelper@src/mongo/shell/utils.js:755:15
@(shellhelp2):1:1
shars:PRIMARY> use admin
switched to db admin
shars:PRIMARY> db.auth("root","123")        ## authenticate as root
1
shars:PRIMARY> show dbs      ## the databases are visible again
admin   0.000GB
config  0.000GB
local   0.000GB
test    0.000GB
shars:PRIMARY>
On the secondaries, too, you must authenticate as the root user before database information can be viewed.