A Detailed Step-by-Step Guide to Deploying a MongoDB Cluster with Docker
There are three ways to set up a mongodb cluster. First, let's go over the concepts: a Mongo cluster has 3 main components.
ConfigServer: stores the configuration metadata for the entire cluster. For a highly available ConfigServer you need 3 nodes.
Shard: a shard stores the actual data. Every shard holds part of the cluster's data; for example, with 3 shards and a hashed sharding rule, each document is routed to exactly one of the 3 shards. Shards are critical: if all nodes of one shard go down, the whole cluster becomes unavailable. To keep the cluster highly available, each shard should therefore run 3 nodes: two data-bearing replicas and one arbiter. The arbiter works much like Redis's sentinel: when it detects that the primary is down, it has another replica take over data storage.
Mongos: the entry point of the whole cluster, similar to a broker in Kafka; it is the client-facing side, and we query the cluster by connecting through it.
Below is MongoDB's official cluster architecture diagram. Mongos is a router, and the cluster's metadata is stored on the ConfigServer; writes go in through mongos, which routes each document to one of the shard replica sets according to the sharding rule.
First we set up two config servers.
Create the configuration files for the two config servers.
Then set the port in each configuration file.
Then start the containers, as sketched below.
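A minimal sketch of starting one config-server container; the container name, host path, image tag, and port are assumptions, not the article's exact values:

docker run -d --name config-server1 --net host \
  -v /data/mongo/config-server1:/data/db \
  mongo:4.0 mongod --configsvr --replSet configsvr --port 27018 --bind_ip_all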
Then enter one of the containers and initialize the replica set.
If ok is 1, the initialization succeeded.
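A sketch of that initialization, assuming both config servers run on one host at ports 27018 and 27019 (placeholder addresses):

docker exec -it config-server1 mongo --port 27018
> rs.initiate({
    _id: "configsvr",                        // must match the --replSet name
    configsvr: true,
    members: [
      { _id: 0, host: "172.17.0.1:27018" },  // placeholder IPs
      { _id: 1, host: "172.17.0.1:27019" }
    ]
  })
> // a successful reply contains "ok" : 1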
Next we create 2 shards on each server.
Create the mount directories.
Create the configuration files.
Then start the containers.
Enter the first shard.
Enter the second shard.
Create the mount directories.
Then start Mongo.
Add the shard replica sets through mongo.
Create a new database and enable sharding on it; a sketch of these last two steps follows.
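This sketch runs in a mongos shell; the shard name, member addresses, and database/collection names are placeholders:

mongo --port 27017      # connect to a mongos instance
> sh.addShard("shard1/172.17.0.1:27101,172.17.0.1:27102,172.17.0.1:27103")
> sh.enableSharding("testdb")
> sh.shardCollection("testdb.users", { _id: "hashed" })   // hashed rule, as described above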
Let's first add up how many Mongo instances a highly available cluster needs:
mongos: 3
configserver: 3
shard: 3 shards
each shard deployed as two data-bearing replicas plus one arbiter: 3 nodes
That is 3 + 3 + 3 * 3 = 15 instances. For this walkthrough I use 3 servers:
114.67.80.169, 4 cores / 16 GB: one configserver, one mongos, 2 shard replica sets
182.61.2.16, 2 cores / 4 GB: one configserver, one mongos, 1 shard replica set
106.12.113.62, 1 core / 2 GB: one configserver, one mongos, no shard replica set
This layout is unbalanced because of the servers I have available; adjust it to your own environment.
Let's build the ConfigServer first. It has to be highly available and has to enforce authentication, so the mongo instances authenticate to one another with a key file, which we generate first.
Create the mount directory.
Write the configuration file.
Then generate the keyFile, as shown below.
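The key is generated with openssl; the chmod line here is an added safeguard, since mongod refuses a key file that is readable by group or others:

openssl rand -base64 756 > mongo.key
chmod 400 mongo.key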
The file looks like the following; we will use this same key for every node from here on (please use a key you generated yourself).
Write the key file.
Then start the config-server1 container, as sketched below.
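A sketch of the start command, reusing the key file from above; the container name, host paths, image tag, and port are assumptions. The key must be readable by the mongod user inside the container (uid 999 in the official image):

docker run -d --name config-server1 --net host \
  -v /data/mongo/config-server1/db:/data/db \
  -v /data/mongo/config-server1/mongo.key:/etc/mongo.key \
  mongo:4.0 mongod --configsvr --replSet configsvr \
  --port 27018 --bind_ip_all --auth --keyFile /etc/mongo.key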
Create the mount directory.
Write the configuration file.
The file is the same as before; we use the same key on every node (please use your own generated key).
Write the key file.
Then start the config-server2 container.
Create the mount directory.
Write the configuration file.
The file is the same as before; we use the same key on every node (please use your own generated key).
Write the key file.
Then start the config-server3 container.
Enter the first container.
Enter the following.
If it returns ok, the replica set is initialized; a sketch of the command follows.
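This sketch initializes the config replica set across the three machines from the plan above; the replica-set name and port are assumptions:

mongo --port 27018 --host 114.67.80.169
> rs.initiate({
    _id: "configsvr",
    configsvr: true,
    members: [
      { _id: 0, host: "114.67.80.169:27018" },
      { _id: 1, host: "182.61.2.16:27018" },
      { _id: 2, host: "106.12.113.62:27018" }
    ]
  })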
Then we create the admin user, for example as sketched below.
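The user name and password here are placeholders; with keyFile auth enabled, the localhost exception still allows creating this first user:

> use admin
> db.createUser({
    user: "root",
    pwd: "your-strong-password",               // placeholder
    roles: [ { role: "root", db: "admin" } ]
  })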
Since mongos is the client side, we finish the config servers and the shards first and build mongos last.
Initialize a shard replica set, with all members on one server.
Create the mount files.
Write the configuration file.
Create the keyfile.
Run the shard1 replica set,
and designate the third member as the arbiter (see the sketch below).
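A sketch of initializing the replica set with an arbiter; the ports are placeholders, following the plan that puts this group on the first server:

mongo --port 27101 --host 114.67.80.169
> rs.initiate({
    _id: "shard1",
    members: [
      { _id: 0, host: "114.67.80.169:27101" },
      { _id: 1, host: "114.67.80.169:27102" },
      { _id: 2, host: "114.67.80.169:27103", arbiterOnly: true }   // votes only, stores no data
    ]
  })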
After it returns ok, create the user.
Then exit; shard replica set 1 is complete.
Initialize another shard replica set, with all members on one server.
Create the mount files.
Write the configuration file.
Create the keyfile.
Run the shard2 replica set,
and designate the third member as the arbiter.
After it returns ok, create the user.
Then exit; shard replica set 2 is complete.
Initialize another shard replica set, with all members on one server.
Create the mount files.
Write the configuration file.
Create the keyfile.
Run the shard3 replica set,
and designate the third member as the arbiter.
After it returns ok, create the user.
Then exit; shard replica set 3 is complete.
Create the configuration file.
Fill in the configuration file. Note that the authentication settings are removed here, because mongos cannot be configured with its own authentication; it simply forwards the credentials created earlier, such as the config server's password. A sketch of the file follows.
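This sketch assumes YAML configuration; the path and port are placeholders, and the configDB string points at the config replica set from the plan above:

# /data/mongo/mongos1/mongos.conf -- hypothetical path
sharding:
  configDB: configsvr/114.67.80.169:27018,182.61.2.16:27018,106.12.113.62:27018
security:
  keyFile: /etc/mongo.key       # the same key file as every other node
net:
  port: 27017
  bindIpAll: true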
Create the keyfile.
Run mongos1.
Create the configuration file.
Fill in the configuration file; as above, the authentication settings are left out because mongos cannot carry its own auth configuration.
Create the keyfile.
Run mongos2.
Create the configuration file.
Fill in the configuration file; as above, the authentication settings are left out because mongos cannot carry its own auth configuration.
Create the keyfile.
Run mongos3.
Enter the first mongos.
Log in first (with the root user and password created earlier).
Configure the shard information.
If everything returns ok, it succeeded; a sketch of the commands follows.
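This sketch registers the three shard replica sets using the hosts from the plan above; the ports are placeholders:

> sh.addShard("shard1/114.67.80.169:27101,114.67.80.169:27102,114.67.80.169:27103")
> sh.addShard("shard2/114.67.80.169:27201,114.67.80.169:27202,114.67.80.169:27203")
> sh.addShard("shard3/182.61.2.16:27101,182.61.2.16:27102,182.61.2.16:27103")
> sh.status()   // each sh.addShard call should have returned "ok" : 1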
Run the same on the other two mongos instances:
mongos2
mongos3
Create the user.
Insert data; a quick end-to-end check is sketched below.
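The database, collection, and document shape in this sketch are placeholders:

> sh.enableSharding("testdb")
> sh.shardCollection("testdb.users", { _id: "hashed" })
> use testdb
> for (var i = 0; i < 10000; i++) { db.users.insert({ idx: i, name: "user" + i }) }
> db.users.getShardDistribution()   // should report documents on more than one shard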
Clear the data of the two shards on server1.
Clear the data of the two shards on server2.
Deploying a MongoDB Replica Set with Sharding on CentOS 7 (Detailed)
MongoDB replica sets combined with sharding
Key points: overview, principles, and an implementation case
I. Overview:
Sharding is the process of splitting a database and spreading it across different machines. A sharded cluster scales database performance horizontally: the data set is stored distributed across shards, each shard holding only part of it. MongoDB guarantees that shards hold no duplicate data, and the union of what all shards store is the complete data set. Because each shard reads and writes only its own portion, the load is spread across the shards, each shard's system resources are put to full use, and the overall throughput of the database goes up.
Note: since MongoDB 3.2, sharding must be combined with replica sets.
Use cases:
1. A single machine's disk is running out of space; sharding solves the disk-space problem.
2. A single mongod can no longer keep up with the write load; sharding spreads writes across the shards, using each shard server's own resources.
3. You want to keep a large amount of data in memory for performance; as above, sharding pools the shard servers' own resources.
II. Principles:
Storage model: the data set is split into chunks; each chunk contains multiple documents, and the chunks are stored distributed across the sharded cluster.
Roles:
Config server: MongoDB tracks how the chunks are distributed across the shards, i.e. which shard stores which chunks. This shard metadata is kept in the config database on the config servers. Normally 3 config servers are used, and the config database must be identical on all of them (deploy the config servers on different machines for reliability).
Shard server: stores the data, split into chunks; the chunk is the real unit of data placement.
Mongos server: the entry point for all cluster requests. Every request is coordinated by mongos, which consults the shard metadata to find where each chunk lives; mongos is in effect a request router. Production clusters usually run multiple mongos instances as entry points, so that losing one of them does not cut off all MongoDB requests.
Summary:
Applications send their MongoDB create/read/update/delete requests to mongos; the config servers store the database metadata and keep it in sync with mongos; the data itself ends up on the shards. To guard against data loss, each shard keeps a synchronized copy within its replica set, and the arbiter in each replica set votes on which node becomes primary when the current one fails.
III. Implementation:
Environment:
192.168.100.101          192.168.100.102          192.168.100.103
config.benet.com         shard1.benet.com         shard2.benet.com
mongos:27025             mongos:27025             mongos:27025
config(configs):27017    shard(shard1):27017      shard(shard2):27017
config(configs):27018    shard(shard1):27018      shard(shard2):27018
config(configs):27019    shard(shard1):27019      shard(shard2):27019
Steps:
● Install the mongodb service;
● Configure the config node instances;
● Configure the shard1 instances;
● Configure the shard2 instances;
● Configure sharding and verify.
● Install the mongodb service:
On 192.168.100.101, 192.168.100.102, and 192.168.100.103:
[root@localhost ~]# tar zxvf mongodb-linux-x86_64-rhel70-3.6.3.tgz
[root@localhost ~]# mv mongodb-linux-x86_64-rhel70-3.6.3 /usr/local/mongodb
[root@localhost ~]# echo "export PATH=/usr/local/mongodb/bin:$PATH" >>/etc/profile
[root@localhost ~]# source /etc/profile
[root@localhost ~]# ulimit -n 25000
[root@localhost ~]# ulimit -u 25000
[root@localhost ~]# echo 0 >/proc/sys/vm/zone_reclaim_mode
[root@localhost ~]# sysctl -w vm.zone_reclaim_mode=0
[root@localhost ~]# echo never >/sys/kernel/mm/transparent_hugepage/enabled
[root@localhost ~]# echo never >/sys/kernel/mm/transparent_hugepage/defrag
[root@localhost ~]# cd /usr/local/mongodb/bin/
[root@localhost bin]# mkdir {../mongodb1,../mongodb2,../mongodb3}
[root@localhost bin]# mkdir ../logs
[root@localhost bin]# touch ../logs/mongodb{1..3}.log
[root@localhost bin]# chmod 777 ../logs/mongodb*
● Configure the config node instances:
192.168.100.101:
[root@config bin]# cat <<END >>/usr/local/mongodb/bin/mongodb1.conf
bind_ip=192.168.100.101
port=27017
dbpath=/usr/local/mongodb/mongodb1/
logpath=/usr/local/mongodb/logs/mongodb1.log
logappend=true
fork=true
maxConns=5000
replSet=configs
#replication name
configsvr=true
END
[root@config bin]# cat <<END >>/usr/local/mongodb/bin/mongodb2.conf
bind_ip=192.168.100.101
port=27018
dbpath=/usr/local/mongodb/mongodb2/
logpath=/usr/local/mongodb/logs/mongodb2.log
logappend=true
fork=true
maxConns=5000
replSet=configs
configsvr=true
END
[root@config bin]# cat <<END >>/usr/local/mongodb/bin/mongodb3.conf
bind_ip=192.168.100.101
port=27019
dbpath=/usr/local/mongodb/mongodb3/
logpath=/usr/local/mongodb/logs/mongodb3.log
logappend=true
fork=true
maxConns=5000
replSet=configs
configsvr=true
END
[root@config bin]# cd
[root@config ~]# mongod -f /usr/local/mongodb/bin/mongodb1.conf
[root@config ~]# mongod -f /usr/local/mongodb/bin/mongodb2.conf
[root@config ~]# mongod -f /usr/local/mongodb/bin/mongodb3.conf
[root@config ~]# netstat -utpln |grep mongod
tcp 0 0 192.168.100.101:27019 0.0.0.0:* LISTEN 2271/mongod
tcp 0 0 192.168.100.101:27017 0.0.0.0:* LISTEN 2440/mongod
tcp 0 0 192.168.100.101:27018 0.0.0.0:* LISTEN 1412/mongod
[root@config ~]# echo -e "/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/mongodb1.conf
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/mongodb2.conf
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/mongodb3.conf">>/etc/rc.local
[root@config ~]# chmod +x /etc/rc.local
[root@config ~]# cat <<'END' >>/etc/init.d/mongodb
#!/bin/bash
INSTANCE=$1
ACTION=$2
case "$ACTION" in
'start')
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"$INSTANCE".conf;;
'stop')
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"$INSTANCE".conf --shutdown;;
'restart')
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"$INSTANCE".conf --shutdown
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"$INSTANCE".conf;;
esac
END
[root@config ~]# chmod +x /etc/init.d/mongodb
[root@config ~]# mongo --port 27017 --host 192.168.100.101
cfg={"_id":"configs","members":[{"_id":0,"host":"192.168.100.101:27017"},{"_id":1,"host":"192.168.100.101:27018"},{"_id":2,"host":"192.168.100.101:27019"}]}
rs.initiate(cfg)
configs:PRIMARY> rs.status()
{
"set" : "configs",
"date" : ISODate("2018-04-24T18:53:44.375Z"),
"myState" : 1,
"term" : NumberLong(1),
"configsvr" : true,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1524596020, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1524596020, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1524596020, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1524596020, 1),
"t" : NumberLong(1)
}
},
"members" : [
{
"_id" : 0,
"name" : "192.168.100.101:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 6698,
"optime" : {
"ts" : Timestamp(1524596020, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-04-24T18:53:40Z"),
"electionTime" : Timestamp(1524590293, 1),
"electionDate" : ISODate("2018-04-24T17:18:13Z"),
"configVersion" : 1,
"self" : true
},
{
"_id" : 1,
"name" : "192.168.100.101:27018",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 5741,
"optime" : {
"ts" : Timestamp(1524596020, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1524596020, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-04-24T18:53:40Z"),
"optimeDurableDate" : ISODate("2018-04-24T18:53:40Z"),
"lastHeartbeat" : ISODate("2018-04-24T18:53:42.992Z"),
"lastHeartbeatRecv" : ISODate("2018-04-24T18:53:43.742Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "192.168.100.101:27017",
"configVersion" : 1
},
{
"_id" : 2,
"name" : "192.168.100.101:27019",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 5741,
"optime" : {
"ts" : Timestamp(1524596020, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1524596020, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-04-24T18:53:40Z"),
"optimeDurableDate" : ISODate("2018-04-24T18:53:40Z"),
"lastHeartbeat" : ISODate("2018-04-24T18:53:42.992Z"),
"lastHeartbeatRecv" : ISODate("2018-04-24T18:53:43.710Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "192.168.100.101:27017",
"configVersion" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1524596020, 1),
"$gleStats" : {
"lastOpTime" : Timestamp(0, 0),
"electionId" : ObjectId("7fffffff0000000000000001")
},
"$clusterTime" : {
"clusterTime" : Timestamp(1524596020, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
configs:PRIMARY> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
configs:PRIMARY> exit
[root@config bin]# cat <<END >>/usr/local/mongodb/bin/mongos.conf
bind_ip=192.168.100.101
port=27025
logpath=/usr/local/mongodb/logs/mongodbs.log
fork=true
maxConns=5000
configdb=configs/192.168.100.101:27017,192.168.100.101:27018,192.168.100.101:27019
END
Note: the configdb parameter of mongos must name the config replica set and can list either one member (the primary) or all members of the replica set.
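Both accepted forms, using this cluster's addresses (the second is what the file above uses):

configdb=configs/192.168.100.101:27017
configdb=configs/192.168.100.101:27017,192.168.100.101:27018,192.168.100.101:27019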
[root@config bin]# touch ../logs/mongos.log
[root@config bin]# chmod 777 ../logs/mongos.log
[root@config bin]# mongos -f /usr/local/mongodb/bin/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 1562
child process started successfully, parent exiting
[root@config ~]# netstat -utpln |grep mongo
tcp 0 0 192.168.100.101:27019 0.0.0.0:* LISTEN 1601/mongod
tcp 0 0 192.168.100.101:27020 0.0.0.0:* LISTEN 1345/mongod
tcp 0 0 192.168.100.101:27025 0.0.0.0:* LISTEN 1822/mongos
tcp 0 0 192.168.100.101:27017 0.0.0.0:* LISTEN 1437/mongod
tcp 0 0 192.168.100.101:27018 0.0.0.0:* LISTEN 1541/mongod
● Configure the shard1 instances:
192.168.100.102:
[root@shard1 bin]# cat <<END >>/usr/local/mongodb/bin/mongodb1.conf
bind_ip=192.168.100.102
port=27017
dbpath=/usr/local/mongodb/mongodb1/
logpath=/usr/local/mongodb/logs/mongodb1.log
logappend=true
fork=true
maxConns=5000
replSet=shard1
#replication name
shardsvr=true
END
[root@shard1 bin]# cat <<END >>/usr/local/mongodb/bin/mongodb2.conf
bind_ip=192.168.100.102
port=27018
dbpath=/usr/local/mongodb/mongodb2/
logpath=/usr/local/mongodb/logs/mongodb2.log
logappend=true
fork=true
maxConns=5000
replSet=shard1
shardsvr=true
END
[root@shard1 bin]# cat <<END >>/usr/local/mongodb/bin/mongodb3.conf
bind_ip=192.168.100.102
port=27019
dbpath=/usr/local/mongodb/mongodb3/
logpath=/usr/local/mongodb/logs/mongodb3.log
logappend=true
fork=true
maxConns=5000
replSet=shard1
shardsvr=true
END
[root@shard1 bin]# cd
[root@shard1 ~]# mongod -f /usr/local/mongodb/bin/mongodb1.conf
[root@shard1 ~]# mongod -f /usr/local/mongodb/bin/mongodb2.conf
[root@shard1 ~]# mongod -f /usr/local/mongodb/bin/mongodb3.conf
[root@shard1 ~]# netstat -utpln |grep mongod
tcp 0 0 192.168.100.102:27019 0.0.0.0:* LISTEN 2271/mongod
tcp 0 0 192.168.100.102:27017 0.0.0.0:* LISTEN 2440/mongod
tcp 0 0 192.168.100.102:27018 0.0.0.0:* LISTEN 1412/mongod
[root@shard1 ~]# echo -e "/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/mongodb1.conf
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/mongodb2.conf
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/mongodb3.conf">>/etc/rc.local
[root@shard1 ~]# chmod +x /etc/rc.local
[root@shard1 ~]# cat <<'END' >>/etc/init.d/mongodb
#!/bin/bash
INSTANCE=$1
ACTION=$2
case "$ACTION" in
'start')
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"$INSTANCE".conf;;
'stop')
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"$INSTANCE".conf --shutdown;;
'restart')
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"$INSTANCE".conf --shutdown
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"$INSTANCE".conf;;
esac
END
[root@shard1 ~]# chmod +x /etc/init.d/mongodb
[root@shard1 ~]# mongo --port 27017 --host 192.168.100.102
cfg={"_id":"shard1","members":[{"_id":0,"host":"192.168.100.102:27017"},{"_id":1,"host":"192.168.100.102:27018"},{"_id":2,"host":"192.168.100.102:27019"}]}
rs.initiate(cfg)
{ "ok" : 1 }
shard1:PRIMARY> rs.status()
{
"set" : "shard1",
"date" : ISODate("2018-04-24T19:06:53.160Z"),
"myState" : 1,
"term" : NumberLong(1),
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1524596810, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1524596810, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1524596810, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1524596810, 1),
"t" : NumberLong(1)
}
},
"members" : [
{
"_id" : 0,
"name" : "192.168.100.102:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 6648,
"optime" : {
"ts" : Timestamp(1524596810, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-04-24T19:06:50Z"),
"electionTime" : Timestamp(1524590628, 1),
"electionDate" : ISODate("2018-04-24T17:23:48Z"),
"configVersion" : 1,
"self" : true
},
{
"_id" : 1,
"name" : "192.168.100.102:27018",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 6195,
"optime" : {
"ts" : Timestamp(1524596810, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1524596810, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-04-24T19:06:50Z"),
"optimeDurableDate" : ISODate("2018-04-24T19:06:50Z"),
"lastHeartbeat" : ISODate("2018-04-24T19:06:52.176Z"),
"lastHeartbeatRecv" : ISODate("2018-04-24T19:06:52.626Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "192.168.100.102:27017",
"configVersion" : 1
},
{
"_id" : 2,
"name" : "192.168.100.102:27019",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 6195,
"optime" : {
"ts" : Timestamp(1524596810, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1524596810, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-04-24T19:06:50Z"),
"optimeDurableDate" : ISODate("2018-04-24T19:06:50Z"),
"lastHeartbeat" : ISODate("2018-04-24T19:06:52.177Z"),
"lastHeartbeatRecv" : ISODate("2018-04-24T19:06:52.626Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "192.168.100.102:27017",
"configVersion" : 1
}
],
"ok" : 1
}
shard1:PRIMARY> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
shard1:PRIMARY> exit
[root@shard1 bin]# cat <<END >>/usr/local/mongodb/bin/mongos.conf
bind_ip=192.168.100.102
port=27025
logpath=/usr/local/mongodb/logs/mongodbs.log
fork=true
maxConns=5000
configdb=configs/192.168.100.101:27017,192.168.100.101:27018,192.168.100.101:27019
END
[root@shard1 bin]# touch ../logs/mongos.log
[root@shard1 bin]# chmod 777 ../logs/mongos.log
[root@shard1 bin]# mongos -f /usr/local/mongodb/bin/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 1562
child process started successfully, parent exiting
[root@shard1 ~]# netstat -utpln| grep mongo
tcp 0 0 192.168.100.102:27019 0.0.0.0:* LISTEN 1098/mongod
tcp 0 0 192.168.100.102:27020 0.0.0.0:* LISTEN 1125/mongod
tcp 0 0 192.168.100.102:27025 0.0.0.0:* LISTEN 1562/mongos
tcp 0 0 192.168.100.102:27017 0.0.0.0:* LISTEN 1044/mongod
tcp 0 0 192.168.100.102:27018 0.0.0.0:* LISTEN 1071/mongod
● Configure the shard2 instances:
192.168.100.103:
[root@shard2 bin]# cat <<END >>/usr/local/mongodb/bin/mongodb1.conf
bind_ip=192.168.100.103
port=27017
dbpath=/usr/local/mongodb/mongodb1/
logpath=/usr/local/mongodb/logs/mongodb1.log
logappend=true
fork=true
maxConns=5000
replSet=shard2
#replication name
shardsvr=true
END
[root@shard2 bin]# cat <<END >>/usr/local/mongodb/bin/mongodb2.conf
bind_ip=192.168.100.103
port=27018
dbpath=/usr/local/mongodb/mongodb2/
logpath=/usr/local/mongodb/logs/mongodb2.log
logappend=true
fork=true
maxConns=5000
replSet=shard2
shardsvr=true
END
[root@shard2 bin]# cat <<END >>/usr/local/mongodb/bin/mongodb3.conf
bind_ip=192.168.100.103
port=27019
dbpath=/usr/local/mongodb/mongodb3/
logpath=/usr/local/mongodb/logs/mongodb3.log
logappend=true
fork=true
maxConns=5000
replSet=shard2
shardsvr=true
END
[root@shard2 bin]# cd
[root@shard2 ~]# mongod -f /usr/local/mongodb/bin/mongodb1.conf
[root@shard2 ~]# mongod -f /usr/local/mongodb/bin/mongodb2.conf
[root@shard2 ~]# mongod -f /usr/local/mongodb/bin/mongodb3.conf
[root@shard2 ~]# netstat -utpln |grep mongod
tcp 0 0 192.168.100.103:27019 0.0.0.0:* LISTEN 2271/mongod
tcp 0 0 192.168.100.103:27017 0.0.0.0:* LISTEN 2440/mongod
tcp 0 0 192.168.100.103:27018 0.0.0.0:* LISTEN 1412/mongod
[root@shard2 ~]# echo -e "/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/mongodb1.conf
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/mongodb2.conf
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/mongodb3.conf">>/etc/rc.local
[root@shard2 ~]# chmod +x /etc/rc.local
[root@shard2 ~]# cat <<'END' >>/etc/init.d/mongodb
#!/bin/bash
INSTANCE=$1
ACTION=$2
case "$ACTION" in
'start')
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"$INSTANCE".conf;;
'stop')
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"$INSTANCE".conf --shutdown;;
'restart')
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"$INSTANCE".conf --shutdown
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"$INSTANCE".conf;;
esac
END
[root@shard2 ~]# chmod +x /etc/init.d/mongodb
[root@shard2 ~]# mongo --port 27017 --host 192.168.100.103
cfg={"_id":"shard2","members":[{"_id":0,"host":"192.168.100.103:27017"},{"_id":1,"host":"192.168.100.103:27018"},{"_id":2,"host":"192.168.100.103:27019"}]}
rs.initiate(cfg)
{ "ok" : 1 }
shard2:PRIMARY> rs.status()
{
"set" : "shard2",
"date" : ISODate("2018-04-24T19:06:53.160Z"),
"myState" : 1,
"term" : NumberLong(1),
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1524596810, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1524596810, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1524596810, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1524596810, 1),
"t" : NumberLong(1)
}
},
"members" : [
{
"_id" : 0,
"name" : "192.168.100.103:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 6648,
"optime" : {
"ts" : Timestamp(1524596810, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-04-24T19:06:50Z"),
"electionTime" : Timestamp(1524590628, 1),
"electionDate" : ISODate("2018-04-24T17:23:48Z"),
"configVersion" : 1,
"self" : true
},
{
"_id" : 1,
"name" : "192.168.100.103:27018",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 6195,
"optime" : {
"ts" : Timestamp(1524596810, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1524596810, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-04-24T19:06:50Z"),
"optimeDurableDate" : ISODate("2018-04-24T19:06:50Z"),
"lastHeartbeat" : ISODate("2018-04-24T19:06:52.176Z"),
"lastHeartbeatRecv" : ISODate("2018-04-24T19:06:52.626Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "192.168.100.103:27017",
"configVersion" : 1
},
{
"_id" : 2,
"name" : "192.168.100.103:27019",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 6195,
"optime" : {
"ts" : Timestamp(1524596810, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1524596810, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-04-24T19:06:50Z"),
"optimeDurableDate" : ISODate("2018-04-24T19:06:50Z"),
"lastHeartbeat" : ISODate("2018-04-24T19:06:52.177Z"),
"lastHeartbeatRecv" : ISODate("2018-04-24T19:06:52.626Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "192.168.100.103:27017",
"configVersion" : 1
}
],
"ok" : 1
}
shard2:PRIMARY> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
shard2:PRIMARY> exit
[root@shard2 bin]# cat <<END >>/usr/local/mongodb/bin/mongos.conf
bind_ip=192.168.100.103
port=27025
logpath=/usr/local/mongodb/logs/mongodbs.log
fork=true
maxConns=5000
configdb=configs/192.168.100.101:27017,192.168.100.101:27018,192.168.100.101:27019
END
[root@shard2 bin]# touch ../logs/mongos.log
[root@shard2 bin]# chmod 777 ../logs/mongos.log
[root@shard2 bin]# mongos -f /usr/local/mongodb/bin/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 1562
child process started successfully, parent exiting
[root@shard2 ~]# netstat -utpln |grep mongo
tcp 0 0 192.168.100.103:27019 0.0.0.0:* LISTEN 1095/mongod
tcp 0 0 192.168.100.103:27020 0.0.0.0:* LISTEN 1122/mongod
tcp 0 0 192.168.100.103:27025 0.0.0.0:* LISTEN 12122/mongos
tcp 0 0 192.168.100.103:27017 0.0.0.0:* LISTEN 1041/mongod
tcp 0 0 192.168.100.103:27018 0.0.0.0:* LISTEN 1068/mongod
● Configure sharding and verify:
192.168.100.101 (pick any of the mongos instances to configure sharding; all three mongos see the same changes):
[root@config ~]# mongo --port 27025 --host 192.168.100.101
mongos> use admin;
switched to db admin
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5adf66d7518b3e5b3aad4e77")
}
shards:
active mongoses:
"3.6.3" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
mongos>
sh.addShard("shard1/192.168.100.102:27017,192.168.100.102:27018,192.168.100.102:27019")
{
"shardAdded" : "shard1",
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1524598580, 9),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1524598580, 9)
}
mongos> sh.addShard("shard2/192.168.100.103:27017,192.168.100.103:27018,192.168.100.103:27019")
{
"shardAdded" : "shard2",
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1524598657, 7),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1524598657, 7)
}
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5adf66d7518b3e5b3aad4e77")
}
shards:
{ "_id" : "shard1", "host" : "shard1/192.168.100.102:27017,192.168.100.102:27018,192.168.100.102:27019", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/192.168.100.103:27017,192.168.100.103:27018,192.168.100.103:27019", "state" : 1 }
active mongoses:
"3.6.3" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
Note: at this point the config service, routing service, shard service, and replica sets are all wired together, but what we actually want is for inserted data to be sharded automatically. Connect to mongos and enable sharding for the target database and collection.
[root@config ~]# mongo --port 27025 --host 192.168.100.101
mongos> use admin
mongos> sh.enableSharding("testdb") ## enable sharding for the database
{
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1524599672, 13),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1524599672, 13)
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5adf66d7518b3e5b3aad4e77")
}
shards:
{ "_id" : "shard1", "host" : "shard1/192.168.100.102:27017,192.168.100.102:27018,192.168.100.102:27019", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/192.168.100.103:27017,192.168.100.103:27018,192.168.100.103:27019", "state" : 1 }
active mongoses:
"3.6.3" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
shard1 1
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
{ "_id" : "testdb", "primary" : "shard2", "partitioned" : true }
mongos> db.runCommand({shardcollection:"testdb.table1", key:{_id:1}}); ## enable sharding for a collection in the database
{
"collectionsharded" : "testdb.table1",
"collectionUUID" : UUID("883bb1e2-b218-41ab-8122-6a5cf4df5e7b"),
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1524601471, 14),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1524601471, 14)
}
mongos> use testdb;
mongos> for(i=1;i<=10000;i++){db.table1.insert({"id":i,"name":"huge"})};
WriteResult({ "nInserted" : 1 })
mongos> show collections
table1
mongos> db.table1.count()
10000
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5adf66d7518b3e5b3aad4e77")
}
shards:
{ "_id" : "shard1", "host" : "shard1/192.168.100.102:27017,192.168.100.102:27018,192.168.100.102:27019", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/192.168.100.103:27017,192.168.100.103:27018,192.168.100.103:27019", "state" : 1 }
active mongoses:
"3.6.3" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
shard1 1
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
{ "_id" : "testdb", "primary" : "shard2", "partitioned" : true }
testdb.table1
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
shard2 1
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0)
mongos> use admin
switched to db admin
mongos> sh.enableSharding("testdb2")
{
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1524602371, 7),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1524602371, 7)
}
mongos> db.runCommand({shardcollection:"testdb2.table1", key:{_id:1}});
mongos> use testdb2
switched to db testdb2
mongos> for(i=1;i<=10000;i++){db.table1.insert({"id":i,"name":"huge"})};
WriteResult({ "nInserted" : 1 })
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5adf66d7518b3e5b3aad4e77")
}
shards:
{ "_id" : "shard1", "host" : "shard1/192.168.100.102:27017,192.168.100.102:27018,192.168.100.102:27019", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/192.168.100.103:27017,192.168.100.103:27018,192.168.100.103:27019", "state" : 1 }
active mongoses:
"3.6.3" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
shard1 1
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
{ "_id" : "testdb", "primary" : "shard2", "partitioned" : true }
testdb.table1
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
shard2 1
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0)
{ "_id" : "testdb2", "primary" : "shard1", "partitioned" : true }
testdb2.table1
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
shard1 1
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
mongos> db.table1.stats() ## check how the collection is distributed across the shards
{
"sharded" : true,
"capped" : false,
"ns" : "testdb2.table1",
"count" : 10000,
"size" : 490000,
"storageSize" : 167936,
"totalIndexSize" : 102400,
"indexSizes" : {
"id" : 102400
},
"avgObjSize" : 49,
"nindexes" : 1,
"nchunks" : 1,
"shards" : {
"shard1" : {
"ns" : "testdb2.table1",
"size" : 490000,
"count" : 10000,
"avgObjSize" : 49,
"storageSize" : 167936,
"capped" : false,
"wiredTiger" : {
"metadata" : {
"formatVersion" : 1
},
"creationString" :
...
Log in to the mongos nodes on 192.168.100.102 and 192.168.100.103 and check the configuration above; it has already been synchronized.