6 MongoDB Installation and Connection Test:

6.1 Installation

1. Download the latest MongoDB release

The latest version at the time of writing is mongodb-linux-x86_64-2.4.6-rc0.tgz.


2. Extract the archive:

#tar -zxvf mongodb-linux-x86_64-2.4.6-rc0.tgz

3. Clear the locale settings:

# export LC_ALL="C"


Note: if this step is skipped, mongod may report errors during initialization at startup.

4. Preparation before starting mongod:

Create the MongoDB data directory:   #mkdir -p /mongodata/db

Create the MongoDB log directory and log file:   #mkdir /mongodata/log

  #touch /mongodata/log/mongodb.log

6.2 Startup:


./mongod --dbpath=/mongodata/db --logpath=/mongodata/log/mongodb.log --fork

6.3 Install the MongoDB Python driver:


#tar -zxvf mongo-python-driver-2.6.2.tar.gz

#cd mongo-python-driver-2.6.2

#python setup.py install
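
A quick sanity check that the driver installed correctly (not part of the original procedure; it simply imports the module and prints its version):

#python -c "import pymongo; print pymongo.version"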


6.4 Python connection test


Use the following Python script to test the connection to the database:



#!/usr/bin/python
# Simple smoke test: write a few documents to the local mongod and read them back.

import pymongo
import random

# pymongo 2.x still ships the (now deprecated) Connection class used here.
conn = pymongo.Connection("127.0.0.1", 27017)
db = conn.test
# Only needed if authentication is enabled on the server.
db.authenticate("root", "root.com")

db.user.save({'id': 1, 'name': 'kaka', 'sex': 'male'})
for id in range(2, 10):
    name = random.choice(['steve', 'koby', 'owen', 'tody', 'rony'])
    sex = random.choice(['male', 'female'])
    db.user.insert({'id': id, 'name': name, 'sex': sex})

# Read everything back to confirm the writes succeeded.
content = db.user.find()
for i in content:
    print i



Save the script as conn_mongodb.py.


Run the script:


root@debian:/usr/local/mongodb/bin# python /root/conn_mongodb.py

{u'_id': ObjectId('52317dbc6e95524a10505709'), u'id': 1, u'name': u'kaka', u'sex': u'male'}

{u'_id': ObjectId('52317dbc6e95524a1050570a'), u'id': 2, u'name': u'tody', u'sex': u'male'}

{u'_id': ObjectId('52317dbc6e95524a1050570b'), u'id': 3, u'name': u'rony', u'sex': u'male'}

{u'_id': ObjectId('52317dbc6e95524a1050570c'), u'id': 4, u'name': u'owen', u'sex': u'female'}

{u'_id': ObjectId('52317dbc6e95524a1050570d'), u'id': 5, u'name': u'koby', u'sex': u'female'}

{u'_id': ObjectId('52317dbc6e95524a1050570e'), u'id': 6, u'name': u'steve', u'sex': u'male'}

{u'_id': ObjectId('52317dbc6e95524a1050570f'), u'id': 7, u'name': u'tody', u'sex': u'male'}

{u'_id': ObjectId('52317dbc6e95524a10505710'), u'id': 8, u'name': u'koby', u'sex': u'female'}

{u'_id': ObjectId('52317dbc6e95524a10505711'), u'id': 9, u'name': u'koby', u'sex': u'male'}

{u'_id': ObjectId('52317dc26e95524a16f4b4cd'), u'id': 1, u'name': u'kaka', u'sex': u'male'}

{u'_id': ObjectId('52317dc26e95524a16f4b4ce'), u'id': 2, u'name': u'owen', u'sex': u'male'}

{u'_id': ObjectId('52317dc26e95524a16f4b4cf'), u'id': 3, u'name': u'tody', u'sex': u'female'}

{u'_id': ObjectId('52317dc26e95524a16f4b4d0'), u'id': 4, u'name': u'koby', u'sex': u'female'}

{u'_id': ObjectId('52317dc26e95524a16f4b4d1'), u'id': 5, u'name': u'tody', u'sex': u'male'}

{u'_id': ObjectId('52317dc26e95524a16f4b4d2'), u'id': 6, u'name': u'tody', u'sex': u'female'}

{u'_id': ObjectId('52317dc26e95524a16f4b4d3'), u'id': 7, u'name': u'tody', u'sex': u'female'}

{u'_id': ObjectId('52317dc26e95524a16f4b4d4'), u'id': 8, u'name': u'rony', u'sex': u'female'}

{u'_id': ObjectId('52317dc26e95524a16f4b4d5'), u'id': 9, u'name': u'owen', u'sex': u'male'}


This shows that Python can connect to MongoDB normally.
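
Note that pymongo.Connection is deprecated in recent pymongo releases; MongoClient (available since pymongo 2.4) is the recommended replacement and uses acknowledged writes by default. A minimal sketch of the same check with MongoClient, reusing the host, port and credentials above:

#!/usr/bin/python
# Same connection test using the newer MongoClient API (pymongo >= 2.4).
import pymongo

client = pymongo.MongoClient("127.0.0.1", 27017)
db = client.test
db.authenticate("root", "root.com")      # only needed if auth is enabled on the server
print db.user.count()                    # number of documents written by the test script above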




7 MongoDB Sharded Data Storage:

7.1 Architecture


MongoDB's data partitioning technology is its sharding architecture.


Sharding is a cluster design for scaling massive data sets horizontally: the data is split up and stored across the individual nodes of the sharded cluster.

MongoDB divides the data into chunks. Each chunk is a contiguous range of records within a collection (64 MB by default in MongoDB 2.4); when a chunk grows beyond that size it is split into new chunks.


7.1.1 Building a sharded cluster requires three roles


Shard server (Shard Server): a shard server stores the actual data of a shard. Each shard can be a single mongod instance or a replica set made up of several mongod instances; to provide automatic failover within each shard, MongoDB officially recommends deploying every shard as a replica set.

Config server (Config Server): to store a given collection across multiple shards, the collection must be assigned a shard key, which determines which chunk each record belongs to. The config servers store the following information (see the pymongo sketch after this list):


1. Configuration information for every shard node

2. The shard key range of every chunk

3. The distribution of chunks across the shards

4. The sharding configuration of every database and collection in the cluster
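
All of this metadata lives in the config database and can be inspected through a mongos once the cluster described below is running. A minimal pymongo sketch (the mongos address 10.15.62.202:30000 matches the deployment built later in this chapter):

import pymongo

# Connect to a mongos router; the "config" database it exposes holds the cluster metadata.
conn = pymongo.Connection("10.15.62.202", 30000)
# conn.admin.authenticate("clusterAdmin", "pwd")   # required once auth/keyFile is enabled
config = conn.config

for shard in config.shards.find():           # shard nodes and their hosts
    print shard
for database in config.databases.find():     # databases and whether sharding is enabled
    print database
for chunk in config.chunks.find().limit(5):  # chunk ranges and the shard that owns them
    print chunk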


Routing process (mongos): a front-end router that clients connect to. It first asks the config servers which shard holds the records to be queried or saved, then connects to that shard to execute the operation, and finally returns the result to the client. The client simply sends the query or update requests it would normally send to a mongod to the routing process unchanged, without having to care which shard actually stores the records being operated on, as the sketch below illustrates.
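
To an application, a mongos therefore looks just like a single mongod. A minimal sketch (assuming the cluster built later in this chapter is running, with a mongos listening on 10.15.62.202:30000):

import pymongo

# The application only knows about the mongos; routing to the shards is transparent.
conn = pymongo.Connection("10.15.62.202", 30000)
# conn.admin.authenticate("clusterAdmin", "pwd")   # required once auth/keyFile is enabled
db = conn.test

db.user.insert({'id': 100, 'name': 'routed', 'sex': 'male'})
print db.user.find_one({'id': 100})   # mongos fetches the document from whichever shard owns it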


7.1.2 Architecture diagram:


(Diagram: clients connect to the mongos routing processes, which consult the config servers and dispatch operations to the shard replica sets.)



7.2 Sharding preparation:

7.2.1 Installation

Note: 10.15.62.202 is referred to as server1 below

10.15.62.203 is referred to as server2 below

10.15.62.205 is referred to as server3 below


1. Extract MongoDB and move it to /opt (run on server1, server2, and server3):


#tar -zxvf mongodb-linux-x86_64-2.4.6.tgz && mv mongodb-linux-x86_64-2.4.6 /opt/mongodb && rm -rf mongodb-linux-x86_64-2.4.6.tgz


2. On every server, create the log and security directories, create the mongodb group, create the mongodb user assigned to that group, and set the mongodb user's password:


#mkdir -p /opt/mongodb/log /opt/mongodb/security && groupadd mongodb && useradd -g mongodb mongodb && passwd mongodb


3. Create the security key file (run on server1, server2, and server3):


#openssl rand -base64 741 > /opt/mongodb/security/mongo.key

#chmod 0600 /opt/mongodb/security/mongo.key

(Without this chmod, startup fails with the error: "644 permissions on /opt/mongodb/security/mongo.key are too open".)



4. Directory layout of the MongoDB installation:


root@debian:/opt/mongodb# tree /opt/mongodb/

/opt/mongodb/
├── bin
│   ├── bsondump
│   ├── mongo
│   ├── mongod
│   ├── mongodump
│   ├── mongoexport
│   ├── mongofiles
│   ├── mongoimport
│   ├── mongooplog
│   ├── mongoperf
│   ├── mongorestore
│   ├── mongos
│   ├── mongosniff
│   ├── mongostat
│   └── mongotop
├── GNU-AGPL-3.0
├── log
├── README
├── security
│   └── mongo.key
└── THIRD-PARTY-NOTICES


3 directories, 18 files

root@debian:/opt/mongodb#


7.2.2 Create the data directories:


Server1


#mkdir -p /data/shard10001 /data/shard20001 /data/shard30001 /data/config1  && chown -R mongodb:mongodb /data/shard10001 /data/shard20001 /data/shard30001 /data/config1


Server2


#mkdir -p /data/shard10002 /data/shard20002 /data/shard30002 /data/config2  && chown -R mongodb:mongodb /data/shard10002 /data/shard20002 /data/shard30002 /data/config2


Server3


#mkdir -p /data/shard10003 /data/shard20003 /data/shard30003 /data/config3  && chown -R mongodb:mongodb /data/shard10003 /data/shard20003 /data/shard30003 /data/config3


7.2.3 Create the mongod sharding configuration files:


Server1


1) Create the configuration file /opt/mongodb/security/shard10001.conf with the following content:


dbpath=/data/shard10001

shardsvr=true

replSet=shard1

fork = true

port=10001

oplogSize=100

logpath=/opt/mongodb/log/shard10001.log

profile=1

slowms=5

rest=true

quiet=true

keyFile=/opt/mongodb/security/mongo.key


2) Create the configuration file /opt/mongodb/security/shard20001.conf with the following content:


dbpath=/data/shard20001

shardsvr=true

replSet=shard2

fork = true

port=10002

oplogSize=100

logpath=/opt/mongodb/log/shard20001.log

profile=1

slowms=5

rest=true

quiet=true

keyFile=/opt/mongodb/security/mongo.key


3) Create the configuration file /opt/mongodb/security/shard30001.conf with the following content:



dbpath=/data/shard30001

shardsvr=true

replSet=shard3

fork = true

port=10003

oplogSize=100

logpath=/opt/mongodb/log/shard30001.log

profile=1

slowms=5

rest=true

quiet=true

keyFile=/opt/mongodb/security/mongo.key


4) Create the configuration file /opt/mongodb/security/config1.conf with the following content:


dbpath=/data/config1

configsvr=true

fork = true

port=20000

oplogSize=5

logpath=/opt/mongodb/log/config1.log

quiet=true

keyFile=/opt/mongodb/security/mongo.key


5) Create the configuration file /opt/mongodb/security/mongos1.conf with the following content:


configdb=10.15.62.202:20000,10.15.62.203:20000,10.15.62.205:20000

port=30000

fork = true

chunkSize=5

logpath=/opt/mongodb/log/mongos1.log

quiet=true

keyFile=/opt/mongodb/security/mongo.key


Perform the following steps on Server2:


1) Create the configuration file /opt/mongodb/security/shard10002.conf with the following content:



dbpath=/data/shard10002

shardsvr=true

replSet=shard1

fork = true

port=10001

oplogSize=100

logpath=/opt/mongodb/log/shard10002.log

profile=1

slowms=5

rest=true

quiet=true

keyFile=/opt/mongodb/security/mongo.key


2) Create the configuration file /opt/mongodb/security/shard20002.conf with the following content:


dbpath=/data/shard20002

shardsvr=true

replSet=shard2

fork = true

port=10002

oplogSize=100

logpath=/opt/mongodb/log/shard20002.log

profile=1

slowms=5

rest=true

quiet=true

keyFile=/opt/mongodb/security/mongo.key


3) Create the configuration file /opt/mongodb/security/shard30002.conf with the following content:



dbpath=/data/shard30002

shardsvr=true

replSet=shard3


fork = true

port=10003

oplogSize=100

logpath=/opt/mongodb/log/shard30002.log

profile=1

slowms=5

rest=true

quiet=true

keyFile=/opt/mongodb/security/mongo.key



4) Create the configuration file /opt/mongodb/security/config2.conf with the following content:



dbpath=/data/config2

configsvr=true

fork = true

port=20000

oplogSize=5

logpath=/opt/mongodb/log/config2.log

quiet=true

keyFile=/opt/mongodb/security/mongo.key


5) Create the configuration file /opt/mongodb/security/mongos2.conf with the following content:



configdb=10.15.62.202:20000,10.15.62.203:20000,10.15.62.205:20000

port=30000

fork = true

chunkSize=5

logpath=/opt/mongodb/log/mongos2.log

quiet=true

keyFile=/opt/mongodb/security/mongo.key



Perform the following steps on Server3:



1) Create the configuration file /opt/mongodb/security/shard10003.conf with the following content:


dbpath=/data/shard10003

shardsvr=true

replSet=shard1

fork = true

port=10001

oplogSize=100

logpath=/opt/mongodb/log/shard10003.log

profile=1

slowms=5

rest=true

quiet=true

keyFile=/opt/mongodb/security/mongo.key


2) Create the configuration file /opt/mongodb/security/shard20003.conf with the following content:



dbpath=/data/shard20003

shardsvr=true

replSet=shard2

fork = true

port=10002

oplogSize=100

logpath=/opt/mongodb/log/shard20003.log

profile=1

slowms=5

rest=true

quiet=true

keyFile=/opt/mongodb/security/mongo.key



3) Create the configuration file /opt/mongodb/security/shard30003.conf with the following content:



dbpath=/data/shard30003

shardsvr=true

replSet=shard3

fork = true

port=10003

oplogSize=100

logpath=/opt/mongodb/log/shard30003.log

profile=1

slowms=5

rest=true

quiet=true

keyFile=/opt/mongodb/security/mongo.key


4) Create the configuration file /opt/mongodb/security/config3.conf with the following content:



dbpath=/data/config3

configsvr=true

fork = true

port=20000

oplogSize=5

logpath=/opt/mongodb/log/config3.log

quiet=true

keyFile=/opt/mongodb/security/mongo.key



5) Create the configuration file /opt/mongodb/security/mongos3.conf with the following content:



configdb=10.15.62.202:20000,10.15.62.203:20000,10.15.62.205:20000

port=30000

fork = true

chunkSize=5

logpath=/opt/mongodb/log/mongos3.log

quiet=true

keyFile=/opt/mongodb/security/mongo.key




Note: according to the official MongoDB configuration documentation, keyFile takes precedence over plain username/password settings, and enabling keyFile also turns authentication on: "Authentication is disabled by default. To enable authentication for a given mongod or mongos instance, use the auth and keyFile configuration settings." Authentication must be disabled before the replica sets are initialized, so the procedure is: comment out keyFile, initialize the replica sets, add the administrative users, and then re-enable keyFile (by reversing the substitution below) and restart the processes.


Disable keyFile before the initialization:


#sed -i 's/^keyFile/#keyFile/' /opt/mongodb/security/*.conf


7.2.4 Start the shard services:

Server1


#/opt/mongodb/bin/mongod --config=/opt/mongodb/security/shard10001.conf

#/opt/mongodb/bin/mongod --config=/opt/mongodb/security/shard20001.conf

#/opt/mongodb/bin/mongod --config=/opt/mongodb/security/shard30001.conf


Server2


#/opt/mongodb/bin/mongod --config=/opt/mongodb/security/shard10002.conf

#/opt/mongodb/bin/mongod --config=/opt/mongodb/security/shard20002.conf

# /opt/mongodb/bin/mongod --config=/opt/mongodb/security/shard30002.conf


Server3


#/opt/mongodb/bin/mongod --config=/opt/mongodb/security/shard10003.conf

#/opt/mongodb/bin/mongod --config=/opt/mongodb/security/shard20003.conf

#/opt/mongodb/bin/mongod --config=/opt/mongodb/security/shard30003.conf


7.2.5 Start the config servers:


Server1


#/opt/mongodb/bin/mongod --config=/opt/mongodb/security/config1.conf


Server2


#/opt/mongodb/bin/mongod --config=/opt/mongodb/security/config2.conf


Server3


#/opt/mongodb/bin/mongod --config=/opt/mongodb/security/config3.conf


7.2.6 Start the mongos services:


Server1


/opt/mongodb/bin/mongos  --config=/opt/mongodb/security/mongos1.conf


Server2


/opt/mongodb/bin/mongos  --config=/opt/mongodb/security/mongos2.conf


Server3


/opt/mongodb/bin/mongos  --config=/opt/mongodb/security/mongos3.conf
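
Before initializing the replica sets it is worth checking that every mongod and mongos started successfully. A minimal pymongo sketch (hosts and ports as configured above; keyFile is still commented out at this point, so no authentication is needed):

import pymongo
from pymongo.errors import ConnectionFailure

hosts = ["10.15.62.202", "10.15.62.203", "10.15.62.205"]
ports = [10001, 10002, 10003, 20000, 30000]   # shard mongods, config server, mongos

for host in hosts:
    for port in ports:
        try:
            conn = pymongo.Connection(host, port)
            print host, port, "up, version", conn.server_info()["version"]
        except ConnectionFailure, e:
            print host, port, "DOWN:", e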






7.2.7 Initialize the replica sets


Use the mongo shell to connect to any one of the mongod instances; server1 is used here:


root@debian:~# /opt/mongodb/bin/mongo 10.15.62.202:10001/admin

MongoDB shell version: 2.4.6

connecting to: 10.15.62.202:10001/admin

> db

admin

> config={_id:"shard1",members:[{_id:0,host:"10.15.62.202:10001"},{_id:1,host:"10.15.62.203:10001"},{_id:2,host:"10.15.62.205:10001"}]}

{

       "_id" : "shard1",

       "members" : [

               {

                       "_id" : 0,

                       "host" : "10.15.62.202:10001"

               },

               {

                       "_id" : 1,

                       "host" : "10.15.62.203:10001"

               },

               {

                       "_id" : 2,

                       "host" : "10.15.62.205:10001"

               }

       ]

}

> rs.initiate(config)

{

       "info" : "Config now saved locally.  Should come online in about a minute.",

       "ok" : 1

}

> rs.status()

{

       "set" : "shard1",

       "date" : ISODate("2013-09-24T05:12:29Z"),

       "myState" : 1,

       "members" : [

               {

                       "_id" : 0,

                       "name" : "10.15.62.202:10001",

                       "health" : 1,

                       "state" : 1,

"stateStr" : "PRIMARY",

                       "uptime" : 454,

                       "optime" : Timestamp(1379999452, 1),

                       "optimeDate" : ISODate("2013-09-24T05:10:52Z"),

                       "self" : true

               },

               {

                       "_id" : 1,

                       "name" : "10.15.62.203:10001",

                       "health" : 1,

                       "state" : 2,

                       "stateStr" : "SECONDARY",

                       "uptime" : 94,

                       "optime" : Timestamp(1379999452, 1),

                       "optimeDate" : ISODate("2013-09-24T05:10:52Z"),

                       "lastHeartbeat" : ISODate("2013-09-24T05:12:29Z"),

                       "lastHeartbeatRecv" : ISODate("2013-09-24T05:12:28Z"),

                       "pingMs" : 0,

                       "syncingTo" : "10.15.62.202:10001"

               },

               {

                       "_id" : 2,

                       "name" : "10.15.62.205:10001",

                       "health" : 1,

                       "state" : 2,

                       "stateStr" : "SECONDARY",

                       "uptime" : 94,

                       "optime" : Timestamp(1379999452, 1),

                       "optimeDate" : ISODate("2013-09-24T05:10:52Z"),

                       "lastHeartbeat" : ISODate("2013-09-24T05:12:29Z"),

                       "lastHeartbeatRecv" : ISODate("2013-09-24T05:12:29Z"),

                       "pingMs" : 0,

                       "syncingTo" : "10.15.62.202:10001"

               }

       ],

       "ok" : 1

}

shard1:PRIMARY>



Initialize the remaining replica sets in the same way:



Replica set two:


root@debian:~# /opt/mongodb/bin/mongo 10.15.62.202:10002/admin

MongoDB shell version: 2.4.6

connecting to: 10.15.62.202:10002/admin

> config={_id:"shard2",members:[{_id:0,host:"10.15.62.202:10002"},{_id:1,host:"10.15.62.203:10002"},{_id:2,host:"10.15.62.205:10002"}]}

{

       "_id" : "shard2",

       "members" : [

               {

                       "_id" : 0,

                       "host" : "10.15.62.202:10002"

               },

               {

                       "_id" : 1,

                       "host" : "10.15.62.203:10002"

               },

               {

                       "_id" : 2,

                       "host" : "10.15.62.205:10002"

               }

       ]

}

> rs.initiate(config)

{

       "info" : "Config now saved locally.  Should come online in about a minute.",

       "ok" : 1

}

shard2:PRIMARY> rs.status()

{

       "set" : "shard2",

       "date" : ISODate("2013-09-24T05:30:40Z"),

       "myState" : 1,

       "members" : [

               {

                       "_id" : 0,

                       "name" : "10.15.62.202:10002",

                       "health" : 1,

                       "state" : 1,

"stateStr" : "PRIMARY",

                       "uptime" : 223,

                       "optime" : Timestamp(1380000589, 1),

                       "optimeDate" : ISODate("2013-09-24T05:29:49Z"),

                       "self" : true

               },

               {

                       "_id" : 1,

                       "name" : "10.15.62.203:10002",

                       "health" : 1,

                       "state" : 2,

                       "stateStr" : "SECONDARY",

                       "uptime" : 41,

                       "optime" : Timestamp(1380000589, 1),

                       "optimeDate" : ISODate("2013-09-24T05:29:49Z"),

                       "lastHeartbeat" : ISODate("2013-09-24T05:30:39Z"),

                       "lastHeartbeatRecv" : ISODate("2013-09-24T05:30:39Z"),

                       "pingMs" : 0,

                       "syncingTo" : "10.15.62.202:10002"

               },

               {

                       "_id" : 2,

                       "name" : "10.15.62.205:10002",

                       "health" : 1,

                       "state" : 2,

                       "stateStr" : "SECONDARY",

                       "uptime" : 41,

                       "optime" : Timestamp(1380000589, 1),

                       "optimeDate" : ISODate("2013-09-24T05:29:49Z"),

                       "lastHeartbeat" : ISODate("2013-09-24T05:30:39Z"),

                       "lastHeartbeatRecv" : ISODate("2013-09-24T05:30:39Z"),

                       "pingMs" : 0,

                       "syncingTo" : "10.15.62.202:10002"

               }

       ],

       "ok" : 1

}

shard2:PRIMARY> exit



Replica set three:


root@debian:~# /opt/mongodb/bin/mongo 10.15.62.202:10003/admin

MongoDB shell version: 2.4.6

connecting to: 10.15.62.202:10003/admin

> config={_id:"shard3",members:[{_id:0,host:"10.15.62.202:10003"},{_id:1,host:"10.15.62.203:10003"},{_id:2,host:"10.15.62.205:10003"}]}

{

       "_id" : "shard3",

       "members" : [

               {

                       "_id" : 0,

                       "host" : "10.15.62.202:10003"

               },

               {

                       "_id" : 1,

                       "host" : "10.15.62.203:10003"

               },

               {

                       "_id" : 2,

                       "host" : "10.15.62.205:10003"

               }

       ]

}

> rs.initiate(config)

{

       "info" : "Config now saved locally.  Should come online in about a minute.",

       "ok" : 1

}

>

shard3:PRIMARY> rs.status()

{

       "set" : "shard3",

       "date" : ISODate("2013-09-24T05:42:43Z"),

       "myState" : 1,

       "members" : [

               {

                       "_id" : 0,

                       "name" : "10.15.62.202:10003",

                       "health" : 1,

                       "state" : 1,

                       "stateStr" : "PRIMARY",

                       "uptime" : 930,

                       "optime" : Timestamp(1380001270, 1),

                       "optimeDate" : ISODate("2013-09-24T05:41:10Z"),

                       "self" : true

               },

               {

                       "_id" : 1,

                       "name" : "10.15.62.203:10003",

                       "health" : 1,

                       "state" : 2,

                       "stateStr" : "SECONDARY",

                       "uptime" : 90,

                       "optime" : Timestamp(1380001270, 1),

                       "optimeDate" : ISODate("2013-09-24T05:41:10Z"),

                       "lastHeartbeat" : ISODate("2013-09-24T05:42:43Z"),

                       "lastHeartbeatRecv" : ISODate("2013-09-24T05:42:41Z"),

                       "pingMs" : 0,

                       "syncingTo" : "10.15.62.202:10003"

               },

               {

                       "_id" : 2,

                       "name" : "10.15.62.205:10003",

                       "health" : 1,

                       "state" : 2,

                       "stateStr" : "SECONDARY",

                       "uptime" : 90,

                       "optime" : Timestamp(1380001270, 1),

                       "optimeDate" : ISODate("2013-09-24T05:41:10Z"),

                       "lastHeartbeat" : ISODate("2013-09-24T05:42:43Z"),

                       "lastHeartbeatRecv" : ISODate("2013-09-24T05:42:41Z"),

                       "pingMs" : 0,

                       "syncingTo" : "10.15.62.202:10003"

               }

       ],

       "ok" : 1

}

shard3:SECONDARY> exit





7.2.8 Review the primary election in the logs:


#more /opt/mongodb/log/shard10001.log


Tue Sep 24 13:10:51.831 [conn1] replSet replSetInitiate admin command received from client

Tue Sep 24 13:10:51.852 [conn1] replSet replSetInitiate config object parses ok, 3 members specified

Tue Sep 24 13:10:52.154 [conn1] replSet replSetInitiate all members seem up

Tue Sep 24 13:10:52.154 [conn1] ******

Tue Sep 24 13:10:52.154 [conn1] creating replication oplog of size: 100MB...

Tue Sep 24 13:10:52.160 [FileAllocator] allocating new datafile /data/shard10001/local.1, filling with zeroes...

Tue Sep 24 13:10:52.160 [FileAllocator] creating directory /data/shard10001/_tmp

Tue Sep 24 13:10:52.175 [FileAllocator] done allocating datafile /data/shard10001/local.1, size: 128MB,  took 0.013 secs

Tue Sep 24 13:10:52.176 [conn1] ******

Tue Sep 24 13:10:52.176 [conn1] replSet info saving a newer config version to local.system.replset

Tue Sep 24 13:10:52.178 [conn1] replSet saveConfigLocally done

Tue Sep 24 13:10:52.178 [conn1] replSet replSetInitiate config now saved locally.  Should come online in about a minute.

# Initialization starts


Tue Sep 24 13:10:52.178 [conn1] command admin.$cmd command: { replSetInitiate: { _id: "shard1", members: [ { _id: 0.0, host: "10.15.62.202:10001" }, { _id: 1.0, host: "10.15.62.203:10001" }, { _id: 2.0, host: "10.15.62.205:10001" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:29356 reslen:112 347ms


# Member detection


Tue Sep 24 13:10:55.450 [rsStart] replSet I am 10.15.62.202:10001

Tue Sep 24 13:10:55.451 [rsStart] replSet STARTUP2

Tue Sep 24 13:10:55.456 [rsHealthPoll] replSet member 10.15.62.203:10001 is up

Tue Sep 24 13:10:55.457 [rsHealthPoll] replSet member 10.15.62.205:10001 is up



Tue Sep 24 13:10:56.457 [rsSync] replSet SECONDARY

Tue Sep 24 13:10:57.469 [rsHealthPoll] replset info 10.15.62.205:10001 thinks that we are down

Tue Sep 24 13:10:57.469 [rsHealthPoll] replSet member 10.15.62.205:10001 is now in state STARTUP2

Tue Sep 24 13:10:57.470 [rsMgr] not electing self, 10.15.62.205:10001 would veto with 'I don't think 10.15.62.202:10001 is electable'

Tue Sep 24 13:11:03.473 [rsMgr] replSet info electSelf 0

Tue Sep 24 13:11:04.459 [rsMgr] replSet PRIMARY

Tue Sep 24 13:11:05.473 [rsHealthPoll] replset info 10.15.62.203:10001 thinks that we are down

Tue Sep 24 13:11:05.473 [rsHealthPoll] replSet member 10.15.62.203:10001 is now in state STARTUP2

Tue Sep 24 13:11:05.473 [rsHealthPoll] replSet member 10.15.62.205:10001 is now in state RECOVERING

Tue Sep 24 13:11:13.188 [conn7] command admin.$cmd command: { listDatabases: 1 } ntoreturn:1 keyUpdates:0 locks(micros) R:5 r:7 reslen:124 12ms

Tue Sep 24 13:11:14.146 [conn8] query local.oplog.rs query: { ts: { $gte: Timestamp 1379999452000|1 } } cursorid:1511004138438811 ntoreturn:0 ntoskip:0 nscanned:1 keyUpdates:0 locks(micros) r:9293 nreturned:1 reslen:106 9ms

Tue Sep 24 13:11:15.233 [slaveTracking] build index local.slaves { _id: 1 }

Tue Sep 24 13:11:15.239 [slaveTracking] build index done.  scanned 0 total records. 0.005 secs

# Primary election complete


Tue Sep 24 13:11:15.240 [slaveTracking] update local.slaves query: { _id: ObjectId('52411eed2f0c855af923ffb1'), config: { _id: 2, host: "10.15.62.205:10001" }, ns: "local.oplog.rs" } update: { $set: { syncedTo: Timestamp 1379999452000|1 } } nscanned:0 nupdated:1 fastmodinsert:1 keyUpdates:0 locks(micros) w:14593 14ms

Tue Sep 24 13:11:15.478 [rsHealthPoll] replSet member 10.15.62.205:10001 is now in state SECONDARY

Tue Sep 24 13:11:23.486 [rsHealthPoll] replSet member 10.15.62.203:10001 is now in state RECOVERING

Tue Sep 24 13:11:25.487 [rsHealthPoll] replSet member 10.15.62.203:10001 is now in state SECONDARY


7.3 Add the database administrative user, enable routing, and shard the data:


7.3.1 Create the administrative superuser:


root@debian:~# /opt/mongodb/bin/mongo 10.15.62.202:30000/admin

MongoDB shell version: 2.4.6

connecting to: 10.15.62.202:30000/admin

mongos> db.addUser({user:"clusterAdmin",pwd:"pwd",roles:["clusterAdmin","userAdminAnyDatabase","readWriteAnyDatabase"]});

{

       "user" : "clusterAdmin",

       "pwd" : "6f8d1d5a17d65fd6b632cdb0cb541466",

       "roles" : [

               "clusterAdmin",

               "userAdminAnyDatabase",

               "readWriteAnyDatabase"

       ],

       "_id" : ObjectId("52412ec4eb1bcd32b5a25ad2")

}


Note: the userAdminAnyDatabase role only grants access to the admin database and is mainly intended for adding and modifying other users and roles; it cannot authenticate directly against other databases, and authentication fails when removing members from a shard. This is verified later. Authenticating through the mongos with this user is sketched below.
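
A minimal pymongo sketch of authenticating with this user through the mongos (the clusterAdmin/pwd pair is the one created above; authentication is only enforced after keyFile has been re-enabled and the processes restarted):

import pymongo

conn = pymongo.Connection("10.15.62.202", 30000)
# Users created through the mongos live in the admin database, so authenticate there.
conn.admin.authenticate("clusterAdmin", "pwd")
# readWriteAnyDatabase in the role list then allows reads and writes on other
# databases over this same, already-authenticated connection.
print conn.test.collection_names()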


7.3.2 Enable routing and add the shards


mongos> db

admin

mongos> db.runCommand({addshard:"shard1/10.15.62.202:10001,10.15.62.203:10001,10.15.62.205:10001",name:"shard1",maxsize:20480})

{ "shardAdded" : "shard1", "ok" : 1 }

mongos> db.runCommand({addshard:"shard2/10.15.62.202:10002,10.15.62.203:10002,10.15.62.205:10002",name:"shard2",maxsize:20480})

{ "shardAdded" : "shard2", "ok" : 1 }

mongos>

mongos> db.runCommand({addshard:"shard3/10.15.62.202:10003,10.15.62.203:10003,10.15.62.205:10003",name:"shard3",maxsize:20480})

{ "shardAdded" : "shard3", "ok" : 1 }

mongos>


7.3.3 Check the shard status:


mongos> db.runCommand({listshards:1})

{

       "shards" : [

               {

                       "_id" : "shard1",

                       "host" : "shard1/10.15.62.202:10001,10.15.62.203:10001,10.15.62.205:10001"

               },

               {

                       "_id" : "shard2",

                       "host" : "shard2/10.15.62.202:10002,10.15.62.203:10002,10.15.62.205:10002"

               },

               {

                       "_id" : "shard3",

                       "host" : "shard3/10.15.62.202:10003,10.15.62.203:10003,10.15.62.205:10003"

               }

       ],

       "ok" : 1

}


The output shows that all three shards are registered correctly.


7.3.4 Enable sharding for a database:


> db.runCommand( { enablesharding : <dbname>} );


Running the command above allows a database to span multiple shards; without it, the database is stored on a single shard. Once sharding is enabled for a database, its collections may be placed on different shards, but each individual collection still lives entirely on one shard. To shard a single collection as well, the collection must be configured separately, as described in the next step. A pymongo sketch of enabling database sharding follows below.
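
As an illustration only (the database name mydb is hypothetical), the same command can be issued from pymongo against the admin database of a mongos:

import pymongo

conn = pymongo.Connection("10.15.62.202", 30000)
# conn.admin.authenticate("clusterAdmin", "pwd")   # required once auth/keyFile is enabled

# Equivalent to db.runCommand({ enablesharding: "mydb" }) in the mongos shell.
print conn.admin.command("enablesharding", "mydb")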


7.3.5 Collection sharding


To shard an individual collection, the collection must be given a shard key, using the following command (a pymongo sketch follows the notes below):

> db.runCommand( { shardcollection : <namespace>,key : <shardkeypatternobject> });



Notes:

a. The system automatically creates an index on the shard key of a sharded collection (the user may also create it in advance).

b. A sharded collection may only have a unique index on the shard key; other unique indexes are not allowed.
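
A pymongo sketch tying the last two steps together (mydb and its users collection are hypothetical names used only for illustration, and mydb is assumed not to have been sharded yet):

import pymongo

conn = pymongo.Connection("10.15.62.202", 30000)
admin = conn.admin
# admin.authenticate("clusterAdmin", "pwd")        # required once auth/keyFile is enabled

# Enable sharding for the database, then shard the collection on its "id" field.
admin.command("enablesharding", "mydb")
admin.command("shardcollection", "mydb.users", key={"id": 1})

# Insert documents through the mongos and let the balancer distribute the chunks.
for i in range(10000):
    conn.mydb.users.insert({"id": i, "name": "user%d" % i})

# The chunk-to-shard assignment can be inspected in the config database.
for chunk in conn.config.chunks.find({"ns": "mydb.users"}):
    print chunk["shard"], chunk["min"], chunk["max"]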