MongoDB Replica Set + Sharded Cluster (Personally Tested)
Posted by 天宇星空
1.1 Architecture overview:
192.168.50.130    192.168.50.131    192.168.50.132
mongos            mongos            mongos
configsvr         configsvr         configsvr
Shard1            Shard1            Shard1
Shard2            Shard2            Shard2
Shard3            Shard3            Shard3
Each of the three machines runs one mongos router, one config server replica-set member, and one member of each of the three shard replica sets.
1.2 Virtual machines used for installation:
192.168.50.130,192.168.50.131,192.168.50.132
1.3 Software package:
mongodb-linux-x86_64-rhel62-3.4.0.tar.gz
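The unpack step is not shown in the original walkthrough; a minimal sketch, assuming the tarball is extracted into /usr/local (the location the symlink paths in 1.4 expect):
tar -zxvf mongodb-linux-x86_64-rhel62-3.4.0.tar.gz -C /usr/local/
/usr/local/mongodb-linux-x86_64-rhel62-3.4.0/bin/mongod --version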
1.4 Create symlinks (on all three machines)
ln -s /usr/local/mongodb-linux-x86_64-rhel62-3.4.0/bin/mongo /usr/bin/mongo
ln -s /usr/local/mongodb-linux-x86_64-rhel62-3.4.0/bin/mongos /usr/bin/mongos
ln -s /usr/local/mongodb-linux-x86_64-rhel62-3.4.0/bin/mongod /usr/bin/mongod
1.5 Create data and log directories (on all three machines)
mkdir -p /data/mongodb && cd /data/mongodb/ && mkdir -p conf/data conf/log mongos/log shard{1..3}/data shard{1..3}/log
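For reference, that single command leaves the following directories under /data/mongodb, matching the --dbpath and --logpath values used in the later steps:
/data/mongodb/conf/data    /data/mongodb/conf/log      (config server; its log file below is written to /data/mongodb/conf/confdb.log)
/data/mongodb/mongos/log                               (mongos router log)
/data/mongodb/shard1/data  /data/mongodb/shard1/log
/data/mongodb/shard2/data  /data/mongodb/shard2/log
/data/mongodb/shard3/data  /data/mongodb/shard3/log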
1.6 Start the config servers (on all three machines)
mongod --configsvr --replSet configset --dbpath /data/mongodb/conf/data --port 27100 --logpath /data/mongodb/conf/confdb.log --bind_ip 0.0.0.0 --fork
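Equivalently, the same startup options can be kept in a config file and passed with -f; a minimal sketch, assuming the file is saved as /data/mongodb/conf/configsvr.conf (the file name is only an example):
sharding:
  clusterRole: configsvr
replication:
  replSetName: configset
storage:
  dbPath: /data/mongodb/conf/data
net:
  port: 27100
  bindIp: 0.0.0.0
systemLog:
  destination: file
  path: /data/mongodb/conf/confdb.log
processManagement:
  fork: true
Then start it with: mongod -f /data/mongodb/conf/configsvr.conf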
1.7 Initialize the config servers as a replica set
mongo 192.168.25.131:27100
config_replset = {_id: "configset", members: [{_id: 0, host: "192.168.25.130:27100"}, {_id: 1, host: "192.168.25.131:27100"}, {_id: 2, host: "192.168.25.132:27100"}]}
rs.initiate(config_replset)
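Before moving on, it is worth confirming from the same shell that all three config servers joined the set and one has become PRIMARY; a quick check:
rs.status().members.forEach(function(m) { print(m.name + "  " + m.stateStr) })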
1.8 Configure mongos, the router (on all three machines)
mongos --configdb configset/192.168.25.130:27100,192.168.25.131:27100,192.168.25.132:27100 --port 27200 --logpath /data/mongodb/mongos/mongos.log --fork
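Each router can be verified by connecting to its port and printing the sharding status; the shard list stays empty until the shards are registered in 1.11 (example against the first machine):
mongo 192.168.25.130:27200
sh.status()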
1.9 Configure the shards (on all three machines)
mongod --shardsvr --replSet shard1 --port 27001 --bind_ip 0.0.0.0 --dbpath /data/mongodb/shard1/data --logpath /data/mongodb/shard1/log/shard1.log --directoryperdb --fork
mongod --shardsvr --replSet shard2 --port 27002 --bind_ip 0.0.0.0 --dbpath /data/mongodb/shard2/data --logpath /data/mongodb/shard2/log/shard2.log --directoryperdb --fork
mongod --shardsvr --replSet shard3 --port 27003 --bind_ip 0.0.0.0 --dbpath /data/mongodb/shard3/data --logpath /data/mongodb/shard3/log/shard3.log --directoryperdb --fork
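A quick sanity check on each machine that all the mongod/mongos processes are listening on their expected ports (netstat is available on RHEL 6; ss or ps work as well):
netstat -lntp | grep mongo    # expect ports 27100, 27200, 27001, 27002, 27003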
1.10 Initialize the shard replica sets
1.10.1 First machine (192.168.25.130 is not explicitly designated as the primary; the member you are connected to when initiating is normally the one elected primary)
mongo --port 27001
use admin
rs.initiate({
  _id: 'shard1',
  members: [
    {_id: 84, host: '192.168.25.130:27001'},
    {_id: 89, host: '192.168.25.131:27001'},
    {_id: 90, host: '192.168.25.132:27001'}
  ]
});
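As noted above, whichever member the shell is connected to when rs.initiate() runs is normally elected PRIMARY; you can confirm it before initiating the next shard:
rs.isMaster().primary    // typically prints 192.168.25.130:27001 here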
1.10.2 Second machine (192.168.25.131)
mongo --port 27002
use admin
rs.initiate({
  _id: 'shard2',
  members: [
    {_id: 84, host: '192.168.25.130:27002'},
    {_id: 89, host: '192.168.25.131:27002'},
    {_id: 90, host: '192.168.25.132:27002'}
  ]
});
If you want to set up an arbiter node instead (the hosts below come from a separate example environment):
config = {_id: 'shard3', members: [{_id: 0, host: '192.168.10.202:27022'}, {_id: 1, host: '192.168.10.204:27022'}, {_id: 2, host: '192.168.10.203:30001', arbiterOnly: true}]}
rs.initiate(config);
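Alternatively, if the replica set was already initiated with just the two data-bearing members, the arbiter can be added afterwards from the PRIMARY (using the same example host as above):
rs.addArb("192.168.10.203:30001")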
1.10.3 Third machine (192.168.25.132)
mongo --port 27003
use admin
rs.initiate({
  _id: 'shard3',
  members: [
    {_id: 84, host: '192.168.25.130:27003'},
    {_id: 89, host: '192.168.25.131:27003'},
    {_id: 90, host: '192.168.25.132:27003'}
  ]
});
1.11 Register the shards with mongos
mongo --port 27200
use admin
db.runCommand({addShard: 'shard1/192.168.25.130:27001,192.168.25.131:27001,192.168.25.132:27001'});
db.runCommand({addShard: 'shard2/192.168.25.130:27002,192.168.25.131:27002,192.168.25.132:27002'});
db.runCommand({addShard: 'shard3/192.168.25.130:27003,192.168.25.131:27003,192.168.25.132:27003'});
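The sh helper provides an equivalent shorthand for the runCommand calls above; for example, for shard1:
sh.addShard("shard1/192.168.25.130:27001,192.168.25.131:27001,192.168.25.132:27001")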
View the registered shards:
use admin
db.runCommand({listshards: 1});
{
"shards" : [
{
"_id" : "shard1",
"host" : "shard1/10.199.144.84:27001,10.199.144.89:27001"
},
{
"_id" : "shard2",
"host" : "shard2/10.199.144.89:27002,10.199.144.90:27002"
},
{
"_id" : "shard3",
"host" : "shard3/10.199.144.90:27003,10.199.144.84:27003"
}
],
"ok" : 1
}
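For a fuller picture than listshards (balancer state, databases, and chunk distribution per shard), sh.status() can be run from the same mongos shell:
sh.status()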
1.12 Insert data to test
mongo --port 27200
use admin
db.runCommand({enablesharding: 'dbtest'});
db.runCommand({shardcollection: 'dbtest.coll1', key: {id: 1}});
use dbtest
db.coll1.stats()
The result is that the data is distributed unevenly, because the shard key on id is a ranged key rather than a hashed one.
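To see the skew in numbers, mongos can report the per-shard document and chunk breakdown for the collection (a quick check, run from the mongos shell):
use dbtest
db.coll1.getShardDistribution()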
So here a hashed shard key is used instead, for the testtb collection in the sano1y database:
- mongos> use admin
- switched to db admin
- mongos> db.runCommand({"enablesharding":"sano1y"})
- { "ok" : 1 }
- mongos> db.runCommand({"shardcollection":"sano1y.testtb","key":{"_id":"hashed"}})
- { "collectionsharded" : "sano1y.testtb", "ok" : 1 }
At this point, the testtb collection in the sano1y database has been configured for sharding.
Test:
- mongos> use sano1y
- switched to db sano1y
- mongos> for(i=0;i<100000;i++) {db.testtb.insert({"id":i,"name":"test_hash"});}
Wait a moment for the inserts to finish:
- WriteResult({ "nInserted" : 1 })
Connect to the PRIMARY instance of shard1 (187) to check:
- shard1:PRIMARY> use sano1y
- switched to db sano1y
- shard1:PRIMARY> db.testtb.find().count()
- 49983
- shard1:PRIMARY> db.testtb.find()
- { "_id" : ObjectId("5837ef1dea1fd54fb38d845c"), "id" : 0, "name" : "test_hash" }
- { "_id" : ObjectId("5837ef1dea1fd54fb38d845d"), "id" : 1, "name" : "test_hash" }
- { "_id" : ObjectId("5837ef1dea1fd54fb38d845e"), "id" : 2, "name" : "test_hash" }
- { "_id" : ObjectId("5837ef1dea1fd54fb38d8460"), "id" : 4, "name" : "test_hash" }
- { "_id" : ObjectId("5837ef1dea1fd54fb38d8461"), "id" : 5, "name" : "test_hash" }
- { "_id" : ObjectId("5837ef1dea1fd54fb38d8465"), "id" : 9, "name" : "test_hash" }
- { "_id" : ObjectId("5837ef1dea1fd54fb38d8468"), "id" : 12, "name" : "test_hash" }
- { "_id" : ObjectId("5837ef1dea1fd54fb38d846f"), "id" : 19, "name" : "test_hash" }
- { "_id" : ObjectId("5837ef1dea1fd54fb38d8471"), "id" : 21, "name" : "test_hash" }
- { "_id" : ObjectId("5837ef1dea1fd54fb38d8475"), "id" : 25, "name" : "test_hash" }
- { "_id" : ObjectId("5837ef1dea1fd54fb38d8476"), "id" : 26, "name" : "test_hash" }
- { "_id" : ObjectId("5837ef1dea1fd54fb38d8479"), "id" : 29, "name" : "test_hash" }
- { "_id" : ObjectId("5837ef1dea1fd54fb38d847d"), "id" : 33, "name" : "test_hash" }
- { "_id" : ObjectId("5837ef1dea1fd54fb38d847e"), "id" : 34, "name" : "test_hash" }
- { "_id" : ObjectId("5837ef1dea1fd54fb38d8480"), "id" : 36, "name" : "test_hash" }
- { "_id" : ObjectId("5837ef1dea1fd54fb38d8481"), "id" : 37, "name" : "test_hash" }
- { "_id" : ObjectId("5837ef1dea1fd54fb38d8483"), "id" : 39, "name" : "test_hash" }
- { "_id" : ObjectId("5837ef1dea1fd54fb38d8486"), "id" : 42, "name" : "test_hash" }
- { "_id" : ObjectId("5837ef1dea1fd54fb38d848b"), "id" : 47, "name" : "test_hash" }
- { "_id" : ObjectId("5837ef1dea1fd54fb38d848d"), "id" : 49, "name" : "test_hash" }
If you connect to shard2, you can also find the ids that are missing from shard1's non-contiguous sequence above.
You will see that the document counts on shard1 and shard2 are fairly even.
shard1: 33320
shard2: 33421
shard3: 33259
As you can see, the counts stay roughly balanced, with only a small gap between shards.
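The same per-shard totals can also be read in one place through mongos, instead of logging in to each shard's PRIMARY (run from the mongos shell):
mongo --port 27200
use sano1y
db.testtb.getShardDistribution()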