MongoDB: Distributed, Non-Relational Database (Part 1)
I. Preparing the Environment
1. Download the latest 64-bit MongoDB tarball:
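The version used throughout this article is 2.2.3. As a sketch (the exact URL is an assumption based on the standard naming of the official download archive), the tarball can be fetched with:
# wget http://fastdl.mongodb.org/linux/mongodb-linux-x86_64-2.2.3.tgz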
2. Extract, install, and configure:
# tar zxvf mongodb-linux-x86_64-2.2.3.tgz -C /usr/local/
# mv mongodb-linux-x86_64-2.2.3/ mongodb
# cd /usr/local
# mkdir data_mongo
# cd data_mongo
# mkdir config
# mkdir ku1_zu
# mkdir ku2_bei
# mkdir ku3_abr
# mkdir logs
[The data directory names must match the dbpath values in the config files below. On 192.168.10.21 and 192.168.10.22, create ku1_abr/ku2_zu/ku3_bei and ku1_bei/ku2_abr/ku3_zu respectively instead.]
# cd ../mongodb/
# mkdir conf
# cd conf/
# touch ku1-zu.conf
# touch ku2-bei.conf
# touch ku3-abr.conf
# touch config.conf
# touch mongos.conf
[Configuration files on 192.168.10.20. Naming: ku1/ku2/ku3 are the three shards, and the -zu/-bei/-abr suffixes mark whether this host acts as the primary, secondary, or arbiter for that shard.]
# vi ku1-zu.conf
logpath = /usr/local/data_mongo/logs/ku1.log
logappend = true
dbpath = /usr/local/data_mongo/ku1_zu
shardsvr = true
replSet = rs1
bind_ip = 127.0.0.1,192.168.10.20
maxConns = 5000
fork = true
port = 27011
oplogSize = 1000
profile = 1
slowms = 500
rest = true
directoryperdb = true
journal = true
# vi ku2-bei.conf
logpath = /usr/local/data_mongo/logs/ku2.log
logappend = true
dbpath = /usr/local/data_mongo/ku2_bei
shardsvr = true
replSet = rs2
bind_ip = 127.0.0.1,192.168.10.20
maxConns = 5000
fork = true
port = 27013
oplogSize = 1000
profile = 1
slowms = 500
rest = true
directoryperdb = true
journal = true
# vi ku3-abr.conf
logpath = /usr/local/data_mongo/logs/ku3.log
logappend = true
dbpath = /usr/local/data_mongo/ku3_abr
shardsvr = true
replSet = rs3
bind_ip = 127.0.0.1,192.168.10.20
maxConns = 5000
fork = true
port = 27015
# vi config.conf
logpath = /usr/local/data_mongo/logs/config.log
logappend = true
dbpath = /usr/local/data_mongo/config
configsvr = true
bind_ip = 127.0.0.1,192.168.10.20
fork = true
port = 20000
# vi mongos.conf
configdb = 192.168.10.20:20000,192.168.10.21:20000,192.168.10.22:20000
logpath = /usr/local/data_mongo/logs/mongos.log
logappend = true
chunkSize = 10
fork = true
maxConns = 6000
port = 30000
[Configuration files on 192.168.10.21. config.conf and mongos.conf are almost identical to those on 10.20 and are omitted; a sketch of the difference follows.]
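As a sketch of what "almost identical" means (assuming the only change is the bind address), config.conf on 192.168.10.21 would look like the following, and mongos.conf can be copied from 10.20 unchanged since it sets no bind_ip:
# vi config.conf
logpath = /usr/local/data_mongo/logs/config.log
logappend = true
dbpath = /usr/local/data_mongo/config
configsvr = true
bind_ip = 127.0.0.1,192.168.10.21
fork = true
port = 20000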
# vi ku1-abr.conf
logpath = /usr/local/data_mongo/logs/ku1.log
logappend = true
dbpath = /usr/local/data_mongo/ku1_abr
shardsvr = true
replSet = rs1
bind_ip = 127.0.0.1,192.168.10.21
maxConns = 5000
fork = true
port = 27011
# vi ku2-zu.conf
logpath = /usr/local/data_mongo/logs/ku2.log
logappend = true
dbpath = /usr/local/data_mongo/ku2_zu
shardsvr = true
replSet = rs2
bind_ip = 127.0.0.1,192.168.10.21
maxConns = 5000
fork = true
port = 27013
oplogSize = 1000
profile = 1
slowms = 500
rest = true
directoryperdb = true
journal = true
# vi ku3-bei.conf
logpath = /usr/local/data_mongo/logs/ku3.log
logappend = true
dbpath = /usr/local/data_mongo/ku3_bei
shardsvr = true
replSet = rs3
bind_ip = 127.0.0.1,192.168.10.21
maxConns = 5000
fork = true
port = 27015
oplogSize = 1000
profile = 1
slowms = 500
rest = true
directoryperdb = true
journal = true
[Configuration files on 192.168.10.22. config.conf and mongos.conf are again almost identical to those on 10.20 and are omitted.]
# vi ku1-bei.conf
logpath = /usr/local/data_mongo/logs/ku1.log
logappend = true
dbpath = /usr/local/data_mongo/ku1_bei
shardsvr = true
replSet = rs1
bind_ip = 127.0.0.1,192.168.10.22
maxConns = 5000
fork = true
port = 27011
oplogSize = 1000
profile = 1
slowms = 500
rest = true
directoryperdb = true
journal = true
# vi ku2-abr.conf
logpath = /usr/local/data_mongo/logs/ku2.log
logappend = true
dbpath = /usr/local/data_mongo/ku2_abr
shardsvr = true
replSet = rs2
bind_ip = 127.0.0.1,192.168.10.22
maxConns = 5000
fork = true
port = 27013
# vi ku3-zu.conf
logpath = /usr/local/data_mongo/logs/ku3.log
logappend = true
dbpath = /usr/local/data_mongo/ku3_zu
shardsvr = true
replSet = rs3
bind_ip = 127.0.0.1,192.168.10.22
maxConns = 5000
fork = true
port = 27015
oplogSize = 1000
profile = 1
slowms = 500
rest = true
directoryperdb = true
journal = true
3. Start the services: [These commands are for 192.168.10.20; the other two machines are started the same way with their own config files, as sketched after this block. On every machine the shard mongods, i.e. the custom ku1, ku2, and ku3, must be started first, and only then the config server and mongos.]
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/conf/ku1-zu.conf
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/conf/ku2-bei.conf
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/conf/ku3-abr.conf
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/conf/config.conf
/usr/local/mongodb/bin/mongos -f /usr/local/mongodb/conf/mongos.conf
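For reference, a sketch of the equivalent startup on 192.168.10.21, using the config files created above (192.168.10.22 is analogous with its ku1-bei.conf, ku2-abr.conf, and ku3-zu.conf):
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/conf/ku1-abr.conf
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/conf/ku2-zu.conf
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/conf/ku3-bei.conf
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/conf/config.conf
/usr/local/mongodb/bin/mongos -f /usr/local/mongodb/conf/mongos.conf
Running netstat -lntp | grep mongo on each machine should then show ports 27011, 27013, 27015, 20000, and 30000 listening.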
4. Initialize the replica sets.
4-1: [For each of the shard groups ku1, ku2, and ku3, connect to the corresponding port on any one member and configure the set from there.]
For example, for ku1 I initialize the replica set from 192.168.10.20:
# /usr/local/mongodb/bin/mongo --port 27011
> config = {_id:'rs1',members:[{_id:0,host:'192.168.10.20:27011',priority:0.5},{_id:1,host:'192.168.10.21:27011',arbiterOnly:true}, {_id:2,host:'192.168.10.22:27011',priority:2}]} [the replica set is named rs1; this is shard ku1]
{
"_id" : "rs1",
"members" : [
{
"_id" : 0,
"host" : "192.168.10.20:27011",
"priority" : 0.5
},
{
"_id" : 1,
"host" : "192.168.10.21:27011",
"arbiterOnly" : true
},
{
"_id" : 2,
"host" : "192.168.10.22:27011",
"priority" : 2
}
]
}
> rs.initiate(config) [initialize with the configuration above]
> rs.reconfig({_id:'rs1',members:[{_id:0,host:'192.168.10.20:27011',priority:5},{_id:1,host:'192.168.10.21:27011',arbiterOnly:true},{_id:2,host:'192.168.10.22:27011',priority:2}]})
[This reconfigures the priorities. The initial configuration picked the wrong primary: 192.168.10.20 was given priority 0.5, lower than 192.168.10.22's priority 2, so 10.22 would become primary. The reconfig raises 10.20's priority to 5 so that it becomes the primary. If the priorities had been set correctly at initialization, this step would not be needed.]
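To confirm the new priorities took effect, the stored replica set configuration can be inspected (a quick check; rs.conf() simply prints the current configuration document):
rs1:PRIMARY> rs.conf()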
rs1:PRIMARY> rs.status() [check the status of rs1]
{
"set" : "rs1",
"date" : ISODate("2013-03-16T06:50:22Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "192.168.10.20:27011",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY", 【主】
"uptime" : 13129,
"optime" : Timestamp(1363416084000, 1),
"optimeDate" : ISODate("2013-03-16T06:41:24Z"),
"self" : true
},
{
"_id" : 1,
"name" : "192.168.10.21:27011",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER", 【abr】
"uptime" : 538,
"lastHeartbeat" : ISODate("2013-03-16T06:50:21Z"),
"pingMs" : 0
},
{
"_id" : 2,
"name" : "192.168.10.22:27011",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY", 【备】
"uptime" : 538,
"optime" : Timestamp(1363416084000, 1),
"optimeDate" : ISODate("2013-03-16T06:41:24Z"),
"lastHeartbeat" : ISODate("2013-03-16T06:50:21Z"),
"pingMs" : 0
}
],
"ok" : 1
}
4-2: For ku2, I again initialize the replica set from 192.168.10.20:
# /usr/local/mongodb/bin/mongo --port 27013
> config = {_id:'rs2',members:[{_id:0,host:'192.168.10.20:27013',priority:0.5},{_id:1,host:'192.168.10.21:27013',priority:5},{_id:2,host:'192.168.10.22:27013',arbiterOnly:true}]}
{
"_id" : "rs2",
"members" : [
{
"_id" : 0,
"host" : "192.168.10.20:27013",
"priority" : 0.5
},
{
"_id" : 1,
"host" : "192.168.10.21:27013",
"priority" : 5
},
{
"_id" : 2,
"host" : "192.168.10.22:27013",
"arbiterOnly" : true
}
]
}
> rs.initiate(config)
rs2:SECONDARY> rs.status()
{
"set" : "rs2",
"date" : ISODate("2013-03-16T07:30:37Z"),
"myState" : 2,
"syncingTo" : "192.168.10.21:27013",
"members" : [
{
"_id" : 0,
"name" : "192.168.10.20:27013",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 15350,
"optime" : Timestamp(1363418518000, 1),
"optimeDate" : ISODate("2013-03-16T07:21:58Z"),
"self" : true
},
{
"_id" : 1,
"name" : "192.168.10.21:27013",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 519,
"optime" : Timestamp(1363418518000, 1),
"optimeDate" : ISODate("2013-03-16T07:21:58Z"),
"lastHeartbeat" : ISODate("2013-03-16T07:30:37Z"),
"pingMs" : 0
},
{
"_id" : 2,
"name" : "192.168.10.22:27013",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 443,
"lastHeartbeat" : ISODate("2013-03-16T07:30:37Z"),
"pingMs" : 0
}
],
"ok" : 1
}
4-3: For ku3, again from 192.168.10.20, I initialize the replica set:
# /usr/local/mongodb/bin/mongo --port 27015
> conf = {_id:'rs3',members:[{_id:0,host:'192.168.10.22:27015',priority:5},{_id:1,host:'192.168.10.21:27015',priority:0.5},{_id:2,host:'192.168.10.20:27015',arbiterOnly:true}]}
{
"_id" : "rs3",
"members" : [
{
"_id" : 0,
"host" : "192.168.10.22:27015",
"priority" : 5
},
{
"_id" : 1,
"host" : "192.168.10.21:27015",
"priority" : 0.5
},
{
"_id" : 2,
"host" : "192.168.10.20:27015",
"arbiterOnly" : true
}
]
}
> rs.initiate(conf)
{
"info" : "Config now saved locally. Should come online in about a minute.",
"ok" : 1
}
rs3:PRIMARY> rs.status()
{
"set" : "rs3",
"date" : ISODate("2013-03-16T08:00:06Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "192.168.10.22:27015",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 16249,
"optime" : Timestamp(1363420713000, 1),
"optimeDate" : ISODate("2013-03-16T07:58:33Z"),
"self" : true
},
{
"_id" : 1,
"name" : "192.168.10.21:27015",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 93,
"optime" : Timestamp(1363420713000, 1),
"optimeDate" : ISODate("2013-03-16T07:58:33Z"),
"lastHeartbeat" : ISODate("2013-03-16T08:00:06Z"),
"pingMs" : 0
},
{
"_id" : 2,
"name" : "192.168.10.20:27015",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 93,
"lastHeartbeat" : ISODate("2013-03-16T08:00:05Z"),
"pingMs" : 0
}
],
"ok" : 1
}
5. Connect to mongos and add the shards. [Only the data-bearing members of each replica set are listed; the arbiters are left out.]
# /usr/local/mongodb/bin/mongo --port 30000
mongos> use admin
switched to db admin
mongos> db.runCommand({addshard:"rs1/192.168.10.20:27011,192.168.10.22:27011"});
{ "shardAdded" : "rs1", "ok" : 1 }
mongos> db.runCommand({addshard:"rs2/192.168.10.21:27013,192.168.10.20:27013"});
{ "shardAdded" : "rs2", "ok" : 1 }
mongos> db.runCommand({addshard:"rs3/192.168.10.22:27015,192.168.10.21:27015"});
{ "shardAdded" : "rs3", "ok" : 1 }
mongos> db.printShardingStatus() [check the sharding status]
--- Sharding Status ---
sharding version: { "_id" : 1, "version" : 3 }
shards:
{ "_id" : "rs1", "host" : "rs1/192.168.10.20:27011,192.168.10.22:27011" }
{ "_id" : "rs2", "host" : "rs2/192.168.10.20:27013,192.168.10.21:27013" }
{ "_id" : "rs3", "host" : "rs3/192.168.10.21:27015,192.168.10.22:27015" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test", "partitioned" : false, "primary" : "rs1" }
6. Insert test data:
mongos> use xxoo
switched to db xxoo
mongos> db.xxxx.save({age:100,"name":"baidu"})
mongos> show dbs
admin 0.046875GB
config 0.046875GB
test (empty)
xxoo 0.203125GB [the xxoo database has been created]
mongos> use xxoo
switched to db xxoo
mongos> show tables
system.indexes
system.profile
xxxx [the xxxx collection was created by the insert]
mongos> for(var i=1;i < 1000000;i++)db.xxxx.save({"name":"google",age:i}); [insert roughly one million documents]
mongos> db.xxxx.count() [the inserts succeeded]
1000002
7. Shard the database and collection, then verify:
mongos> use admin
switched to db admin
mongos> db.runCommand({enablesharding:"xxoo"}) [first enable sharding on the database]
{ "ok" : 1 }
[Shard the xxxx collection with _id as the shard key. Note that the shard key must be indexed; _id is indexed by default when documents are created. A sketch for a non-default shard key follows the command below.]
mongos> db.runCommand({shardcollection:"xxoo.xxxx",key:{_id:1}})
{ "collectionsharded" : "xxoo.xxxx", "ok" : 1 }
mongos> use xxoo [verify that sharding succeeded]
switched to db xxoo
mongos> db.xxxx.stats()
{
"sharded" : true,
"ns" : "xxoo.xxxx",
"count" : 1000002,
"numExtents" : 26,
"size" : 52000184,
"storageSize" : 132427776,
"totalIndexSize" : 32491424,
"indexSizes" : {
"_id_" : 32491424
},
"avgObjSize" : 52.00007999984,
"nindexes" : 1,
"nchunks" : 10,
"shards" : {
"rs1" : {
"ns" : "xxoo.xxxx",
"count" : 302474, 【rs1上有30多万数据】
"size" : 15728688,
"avgObjSize" : 52.00013224277128,
"storageSize" : 22507520,
"numExtents" : 7,
"nindexes" : 1,
"lastExtentSize" : 11325440,
"paddingFactor" : 1,
"systemFlags" : 1,
"userFlags" : 0,
"totalIndexSize" : 9827552,
"indexSizes" : {
"_id_" : 9827552
},
"ok" : 1
},
"rs2" : {
"ns" : "xxoo.xxxx",
"count" : 302475, 【rs2上也是有30多万数据】
"size" : 15728740,
"avgObjSize" : 52.000132242334075,
"storageSize" : 22507520,
"numExtents" : 7,
"nindexes" : 1,
"lastExtentSize" : 11325440,
"paddingFactor" : 1,
"systemFlags" : 1,
"userFlags" : 0,
"totalIndexSize" : 9827552,
"indexSizes" : {
"_id_" : 9827552
},
"ok" : 1
},
"rs3" : {
"ns" : "xxoo.xxxx",
"count" : 395053, 【rs3上也有30多万数据】
"size" : 20542756,
"avgObjSize" : 52,
"storageSize" : 87412736,
"numExtents" : 12,
"nindexes" : 1,
"lastExtentSize" : 25415680,
"paddingFactor" : 1,
"systemFlags" : 1,
"userFlags" : 0,
"totalIndexSize" : 12836320,
"indexSizes" : {
"_id_" : 12836320
},
"ok" : 1
}
},
"ok" : 1
}
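To see how the chunks were split across the shards, the chunk metadata in the config database can also be queried from mongos (a sketch; the exact ranges will differ from run to run):
mongos> use config
switched to db config
mongos> db.chunks.find({ns:"xxoo.xxxx"},{shard:1,min:1,max:1})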
The workday is over and time is limited, so that is it for today.