Production Replica Sets + Sharding at our company
Hands-on walkthrough of our company's MongoDB sharded cluster
Prepare nine servers:
Shard 1:
172.16.0.124:11731 primary
172.16.0.127:11731 secondary
172.16.0.115:11731 arbiter
Shard 2:
172.16.0.122:11732 primary
172.16.0.125:11732 secondary
172.16.0.103:11732 arbiter
Shard 3:
172.16.0.121:11733 primary
172.16.0.123:11733 secondary
172.16.0.114:11733 arbiter
Shard 1 steps
-- 172.16.0.124, shard 1 primary
Create the directories and unpack MongoDB
mkdir -p /home/data/shard1_1
mkdir -p /home/Apps
tar zxvf mongodb-linux-x86_64-2.4.7.tgz
mv mongodb-linux-x86_64-2.4.7 /home/Apps/mongo
Start the service
/home/Apps/mongo/bin/mongod --shardsvr --replSet shard1 --port 11731 --dbpath /home/data/shard1_1 --logpath /home/data/shard1_1/shard1_1.log --logappend --oplogSize 5000 --fork
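A quick sanity check (optional, not part of the original steps): confirm the forked mongod is running and that its log reports it is waiting for connections on port 11731:
ps -ef | grep shardsvr | grep -v grep
tail -n 20 /home/data/shard1_1/shard1_1.log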
-- 172.16.0.127, shard 1 secondary
mkdir -p /home/data/shard1_2
mkdir -p /home/Apps
tar zxvf mongodb-linux-x86_64-2.4.7.tgz
mv mongodb-linux-x86_64-2.4.7 /home/Apps/mongo
Start the service
/home/Apps/mongo/bin/mongod --shardsvr --replSet shard1 --port 11731 --dbpath /home/data/shard1_2 --logpath /home/data/shard1_2/shard1_2.log --logappend --oplogSize 5000 --fork
-- 172.16.0.115, shard 1 arbiter (this host also gets /home/data/config because it will run a config server later)
mkdir -p /home/data/shard1_3
mkdir -p /home/Apps
mkdir -p /home/data/config
tar zxvf mongodb-linux-x86_64-2.4.7.tgz
mv mongodb-linux-x86_64-2.4.7 /home/Apps/mongo
/home/Apps/mongo/bin/mongod --shardsvr --replSet shard1 --port 11731 --dbpath /home/data/shard1_3 --logpath /home/data/shard1_3/shard1_3.log --logappend --oplogSize 5000 --fork
With all shard 1 members running, initialize replica set shard1 (172.16.0.115 is added with arbiterOnly:true, so it only votes and never holds data)
On the shard 1 primary, 172.16.0.124, run:
/home/Apps/mongo/bin/mongo --port 11731
config={_id:'shard1',members:[{_id:0,host:'172.16.0.124:11731'},{_id:1,host:'172.16.0.127:11731'},{_id:2,host:'172.16.0.115:11731',arbiterOnly:true}]}
rs.initiate(config)
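After rs.initiate() returns, the set takes a short while to elect a primary. A simple check, in the same shell, is rs.status(); the three members should eventually report PRIMARY, SECONDARY and ARBITER:
rs.status()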
------------------------------------------------------------------------------------------------------------------------------
Shard 2 steps
-- 172.16.0.122, shard 2 primary
Create the directories and unpack MongoDB
mkdir -p /home/data/shard2_1
mkdir -p /home/Apps
tar zxvf mongodb-linux-x86_64-2.4.7.tgz
mv mongodb-linux-x86_64-2.4.7 /home/Apps/mongo
Start the service
/home/Apps/mongo/bin/mongod --shardsvr --replSet shard2 --port 11732 --dbpath /home/data/shard2_1 --logpath /home/data/shard2_1/shard2_1.log --oplogSize 50000 --logappend --fork
-- 172.16.0.125, shard 2 secondary
mkdir -p /home/data/shard2_2
mkdir -p /home/Apps
tar zxvf mongodb-linux-x86_64-2.4.7.tgz
mv mongodb-linux-x86_64-2.4.7 /home/Apps/mongo
Start the service
/home/Apps/mongo/bin/mongod --shardsvr --replSet shard2 --port 11732 --dbpath /home/data/shard2_2 --logpath /home/data/shard2_2/shard2_2.log --oplogSize 50000 --logappend --fork
-- 172.16.0.103, shard 2 arbiter
mkdir -p /home/data/shard2_3
mkdir -p /home/Apps
mkdir -p /home/data/config
tar zxvf mongodb-linux-x86_64-2.4.7.tgz
mv mongodb-linux-x86_64-2.4.7 /home/Apps/mongo
/home/Apps/mongo/bin/mongod --shardsvr --replSet shard2 --port 11732 --dbpath /home/data/shard2_3 --logpath /home/data/shard2_3/shard2_3.log --oplogSize 50000 --logappend --fork
With all shard 2 members running, initialize replica set shard2 (172.16.0.103 is the arbiter, added with arbiterOnly:true)
On the shard 2 primary, 172.16.0.122, run:
/home/Apps/mongo/bin/mongo --port 11732
config={_id:'shard2',members:[{_id:0,host:'172.16.0.122:11732'},{_id:1,host:'172.16.0.125:11732'},{_id:2,host:'172.16.0.103:11732',arbiterOnly:true}]}
rs.initiate(config)
-----------------------------------------------------------------------------------------------------------
Shard 3 steps
-- 172.16.0.121, shard 3 primary
Create the directories and unpack MongoDB
mkdir -p /home/data/shard3_1
mkdir -p /home/Apps
tar zxvf mongodb-linux-x86_64-2.4.7.tgz
mv mongodb-linux-x86_64-2.4.7 /home/Apps/mongo
Start the service
/home/Apps/mongo/bin/mongod --shardsvr --replSet shard3 --port 11733 --dbpath /home/data/shard3_1 --logpath /home/data/shard3_1/shard3_1.log --oplogSize 50000 --logappend --fork
-- 172.16.0.123, shard 3 secondary
mkdir -p /home/data/shard3_2
mkdir -p /home/Apps
tar zxvf mongodb-linux-x86_64-2.4.7.tgz
mv mongodb-linux-x86_64-2.4.7 /home/Apps/mongo
Start the service
/home/Apps/mongo/bin/mongod --shardsvr --replSet shard3 --port 11733 --dbpath /home/data/shard3_2 --logpath /home/data/shard3_2/shard3_2.log --oplogSize 50000 --logappend --fork
-- 172.16.0.114, shard 3 arbiter
mkdir -p /home/data/shard3_3
mkdir -p /home/Apps
mkdir -p /home/data/config
tar zxvf mongodb-linux-x86_64-2.4.7.tgz
mv mongodb-linux-x86_64-2.4.7 /home/Apps/mongo
/home/Apps/mongo/bin/mongod --shardsvr --replSet shard3 --port 11733 --dbpath /home/data/shard3_3 --logpath /home/data/shard3_3/shard3_3.log --oplogSize 50000 --logappend --fork
With all shard 3 members running, initialize replica set shard3 (172.16.0.114 is the arbiter, added with arbiterOnly:true)
On the shard 3 primary, 172.16.0.121, run:
/home/Apps/mongo/bin/mongo --port 11733
config={_id:'shard3',members:[{_id:0,host:'172.16.0.121:11733'},{_id:1,host:'172.16.0.123:11733'},{_id:2,host:'172.16.0.114:11733',arbiterOnly:true}]}
rs.initiate(config)
Configure the three Config Servers
Run the following on each of 172.16.0.115, 172.16.0.103 and 172.16.0.114:
/home/Apps/mongo/bin/mongod --configsvr --dbpath /home/data/config --port 30000 --logpath /home/data/config/config.log --logappend --fork
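Optionally, verify that each config server answers on port 30000; a plain ping through the local mongo shell is enough:
/home/Apps/mongo/bin/mongo --port 30000 --eval "db.adminCommand({ping:1})"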
Configure the three Route Processes (mongos)
Run the following on each of 172.16.0.115, 172.16.0.103 and 172.16.0.114:
/home/Apps/mongo/bin/mongos --configdb 172.16.0.115:30000,172.16.0.103:30000,172.16.0.114:30000 --port 60000 --chunkSize 1 --logpath /home/data/mongos.log --logappend --fork
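Each mongos can likewise be checked by connecting to port 60000; at this stage sh.status() should print the sharding version and an empty shard list (assuming the config servers came up cleanly):
/home/Apps/mongo/bin/mongo --port 60000 --eval "sh.status()"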
Configure the Shard Cluster
Run the following on any one of 172.16.0.115, 172.16.0.103 or 172.16.0.114:
/home/Apps/mongo/bin/mongo --port 60000
use admin
db.runCommand({addshard:"shard1/172.16.0.124:11731, 172.16.0.127:11731, 172.16.0.115:11731"})
db.runCommand({addshard:"shard2/172.16.0.122:11732, 172.16.0.125:11732, 172.16.0.103:11732"})
db.runCommand({addshard:"shard3/172.16.0.121:11733, 172.16.0.123:11733, 172.16.0.114:11733"})
Next, enable sharding on the test database and shard the collection with a hashed shard key, as shown below:
db.runCommand({enablesharding:"test"})
db.runCommand({shardcollection:"test.users",key:{id:"hashed"}})
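As a quick smoke test (illustrative only; the document count and field values are arbitrary), insert some documents into test.users and see how they spread across the shards: sh.status() shows the chunk layout and db.users.stats() reports per-shard counts.
use test
for (var i = 0; i < 100000; i++) { db.users.insert({id: i, name: "user" + i}); }
db.users.stats()
sh.status()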
Important: the system clocks on all servers must be kept in sync, otherwise the cluster will run into problems.
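One common way to keep the clocks aligned (shown only as an example; pool.ntp.org is a placeholder for whatever NTP source your environment provides) is to sync once with ntpdate and keep ntpd running on every node:
/usr/sbin/ntpdate pool.ntp.org    # one-off sync; replace pool.ntp.org with your own NTP server
service ntpd start                # keep the clock in sync from now on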