A Nearly 30,000-Word Deep Dive: Building a Redis Cluster with Docker (Master-Slave Failover, Scale-Out, Scale-In)
Posted by 学Java的小熊
1. Scenario
Solutions:
1. Hash-Modulo Partitioning
Pros:
Simple, direct, and effective. Estimate the data volume and plan the node count up front (say 3, 8, or 10 machines), and the layout will carry the load for a while. A hash function pins a fixed portion of the requests to the same server, so each server handles (and keeps the state for) a fixed share of the traffic: load balancing plus divide-and-conquer.
Cons:
Expanding or shrinking the planned node set is painful. Whenever the number of nodes changes, every mapping must be recomputed. With a fixed server count there is no problem, but under elastic scaling or a machine failure the formula changes from hash(key) % 3 to hash(key) % N: the remainders shift wholesale, and which server a key lands on becomes unpredictable.
If a single Redis machine goes down, the changed node count forces a full reshuffle of all data under hash-modulo.
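A toy illustration (not from the original post; cksum stands in for the hash function, and the key name is made up) shows how fragile the mapping is:

# Route a key to one of 3 nodes by hashing and taking the remainder.
key="user:1001"
hash=$(printf '%s' "$key" | cksum | cut -d' ' -f1)
echo "with 3 nodes: node $(( hash % 3 ))"
# Add one node and the same key usually lands on a different one:
echo "with 4 nodes: node $(( hash % 4 ))"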
2. Consistent-Hashing Partitioning
Purpose: keep the key-to-node mapping as stable as possible when nodes are added or removed, so that only a small fraction of keys has to move.
Steps:
- Build the consistent hash ring: treat the whole hash space (0 to 2^32 - 1) as a circle, with the end wrapping around to the start.
- Map the server IP nodes onto the ring: hash each server (for example by its IP) to a fixed position on the circle.
- Key placement rule: hash the key to a point on the ring, then walk clockwise; the first server encountered stores the key.
Pros:
1. Fault tolerance: if a node fails, only the keys between it and its predecessor on the ring are affected; they shift to the next node clockwise, and the rest of the cluster is untouched.
2. Scalability: a newly added node takes over only the keys between itself and its predecessor; no global reshuffle is needed.
Cons: with only a few nodes, their positions on the ring can be badly skewed, so data piles up unevenly on some servers.
Summary: consistent hashing minimizes data movement on membership changes, but small clusters suffer from uneven key distribution (commonly mitigated with virtual nodes).
3. Hash-Slot Partitioning
To solve the uniformity problem, hash slots introduce a layer between the data and the nodes, called hash slots, which manages the relationship between them. A node now holds slots, and the slots hold the data.
Slots solve the granularity problem: they coarsen the unit of movement, which makes rebalancing data easy. Hashing solves the mapping problem: the key's hash value determines its slot, which makes data placement easy.
Computing the hash slot: Redis Cluster defines 16384 slots and assigns each key to slot CRC16(key) mod 16384.
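You can ask any running node which slot a key maps to with CLUSTER KEYSLOT. For example, once the cluster built below is up, the results agree with the MOVED errors seen later:

redis-cli -p 6381 cluster keyslot k1    # 12706, served by the 6383 master
redis-cli -p 6381 cluster keyslot k2    # a slot in 0-5460, served by 6381 itself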
2. Configuring a 3-Master, 3-Slave Cluster
1. Start Docker
systemctl start docker
2. Create six Docker containers running Redis
docker run -d --name redis-node-1 --net host --privileged=true -v /data/redis/share/redis-node-1:/data redis --cluster-enabled yes --appendonly yes --port 6381
docker run -d --name redis-node-2 --net host --privileged=true -v /data/redis/share/redis-node-2:/data redis --cluster-enabled yes --appendonly yes --port 6382
docker run -d --name redis-node-3 --net host --privileged=true -v /data/redis/share/redis-node-3:/data redis --cluster-enabled yes --appendonly yes --port 6383
docker run -d --name redis-node-4 --net host --privileged=true -v /data/redis/share/redis-node-4:/data redis --cluster-enabled yes --appendonly yes --port 6384
docker run -d --name redis-node-5 --net host --privileged=true -v /data/redis/share/redis-node-5:/data redis --cluster-enabled yes --appendonly yes --port 6385
docker run -d --name redis-node-6 --net host --privileged=true -v /data/redis/share/redis-node-6:/data redis --cluster-enabled yes --appendonly yes --port 6386
[root@docker ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d6fc3ef2855b redis "docker-entrypoint.s…" 3 seconds ago Up 2 seconds redis-node-6
9c8868d69a50 redis "docker-entrypoint.s…" 3 seconds ago Up 3 seconds redis-node-5
7fbb5345951a redis "docker-entrypoint.s…" 4 seconds ago Up 3 seconds redis-node-4
d53b9d5af1ac redis "docker-entrypoint.s…" 4 seconds ago Up 4 seconds redis-node-3
fe0e430cb940 redis "docker-entrypoint.s…" 6 seconds ago Up 4 seconds redis-node-2
ee03a7ec212e redis "docker-entrypoint.s…" 8 seconds ago Up 6 seconds redis-node-1
Breakdown of the command:
- docker run: create and start a container instance
- --name redis-node-6: the container name
- --net host: use the host's network stack directly (the host's IP and ports)
- --privileged=true: run with the host's root privileges
- -v /data/redis/share/redis-node-6:/data: volume mount, host path : path inside the container
- redis: the Redis image (optionally with a version tag)
- --cluster-enabled yes: enable Redis Cluster mode
- --appendonly yes: enable AOF persistence
- --port 6386: the Redis port
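Since the six commands differ only in the node index and port, an equivalent loop (a sketch, not from the original post) produces the same containers:

for i in 1 2 3 4 5 6; do
  docker run -d --name redis-node-$i --net host --privileged=true \
    -v /data/redis/share/redis-node-$i:/data redis \
    --cluster-enabled yes --appendonly yes --port $((6380 + i))
done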
3. Enter container redis-node-1 and build the cluster across all six machines
1. Enter the container
docker exec -it redis-node-1 /bin/bash
2. Build the master/slave relationships
PS: substitute your own machine's real IP address.
redis-cli --cluster create 192.168.130.132:6381 192.168.130.132:6382 192.168.130.132:6383 192.168.130.132:6384 192.168.130.132:6385 192.168.130.132:6386 --cluster-replicas 1
--cluster-replicas 1: create one slave for every master.
[root@docker ~]# docker exec -it redis-node-1 /bin/bash
root@docker:/data# redis-cli --cluster create 192.168.130.132:6381 192.168.130.132:6382 192.168.130.132:6383 192.168.130.132:6384 192.168.130.132:6385 192.168.130.132:6386 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.130.132:6385 to 192.168.130.132:6381
Adding replica 192.168.130.132:6386 to 192.168.130.132:6382
Adding replica 192.168.130.132:6384 to 192.168.130.132:6383
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381
slots:[0-5460] (5461 slots) master
M: 60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382
slots:[5461-10922] (5462 slots) master
M: 8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383
slots:[10923-16383] (5461 slots) master
S: c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384
replicates 60fa7e084483feca3af41f269de5a57b526c0ad7
S: b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385
replicates 8dbe8b347410cf87d62933382b73693405535ba1
S: 4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386
replicates 8335b5349d781c11745ee129f5dbae370dbd3394
Can I set the above configuration? (type yes to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 192.168.130.132:6381)
M: 8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385
slots: (0 slots) slave
replicates 8dbe8b347410cf87d62933382b73693405535ba1
M: 8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384
slots: (0 slots) slave
replicates 60fa7e084483feca3af41f269de5a57b526c0ad7
M: 60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: 4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386
slots: (0 slots) slave
replicates 8335b5349d781c11745ee129f5dbae370dbd3394
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@docker:/data#
At this point, the 3-master, 3-slave cluster is built.
4. Connect to 6381 as an entry point and inspect the cluster state
root@docker:/data# redis-cli -p 6381
127.0.0.1:6381> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:663
cluster_stats_messages_pong_sent:671
cluster_stats_messages_sent:1334
cluster_stats_messages_ping_received:666
cluster_stats_messages_pong_received:663
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:1334
127.0.0.1:6381> cluster nodes
b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385@16385 slave 8dbe8b347410cf87d62933382b73693405535ba1 0 1651152474000 3 connected
8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381@16381 myself,master - 0 1651152472000 1 connected 0-5460
8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383@16383 master - 0 1651152474000 3 connected 10923-16383
c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384@16384 slave 60fa7e084483feca3af41f269de5a57b526c0ad7 0 1651152476585 2 connected
60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382@16382 master - 0 1651152475573 2 connected 5461-10922
4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386@16386 slave 8335b5349d781c11745ee129f5dbae370dbd3394 0 1651152474566 1 connected
127.0.0.1:6381>
1. cluster info: show the state of the cluster.
127.0.0.1:6381> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:663
cluster_stats_messages_pong_sent:671
cluster_stats_messages_sent:1334
cluster_stats_messages_ping_received:666
cluster_stats_messages_pong_received:663
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:1334
- cluster_state: ok means the cluster can accept queries normally. fail means at least one hash slot is unbound (not assigned to any node), or is in an error state (its node is serving but carries the FAIL flag), or this node cannot reach a majority of masters.
- cluster_slots_assigned: the number of slots assigned to cluster nodes (not the unbound count). All 16384 slots being assigned is a precondition for normal cluster operation.
- cluster_slots_ok: the number of slots in neither FAIL nor PFAIL state.
- cluster_slots_pfail: the number of slots in PFAIL state. As long as they are not escalated to FAIL, these slots are still served normally; PFAIL means we currently cannot talk to the node, but it is treated as a temporary error.
- cluster_slots_fail: the number of slots in FAIL state. If this is nonzero, the node stops serving queries, unless cluster-require-full-coverage is set to no.
- cluster_known_nodes: the number of nodes known to the cluster, including nodes still in the handshake state that are not yet full members.
- cluster_size: the number of masters serving at least one hash slot.
- cluster_current_epoch: the local Current Epoch value. It matters during failover and is always incrementing and unique.
- cluster_my_epoch: the Config Epoch of the node we are talking to, i.e. this node's configuration version.
- cluster_stats_messages_sent: the number of messages sent over the node-to-node binary bus.
- cluster_stats_messages_received: the number of messages received over the node-to-node binary bus.
2. cluster nodes: shows the configuration of the cluster as seen by the connected node, in exactly the serialization format Redis Cluster uses on disk (the on-disk file just appends some extra information at the end).
127.0.0.1:6381> cluster nodes
b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385@16385 slave 8dbe8b347410cf87d62933382b73693405535ba1 0 1651152474000 3 connected
8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381@16381 myself,master - 0 1651152472000 1 connected 0-5460
8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383@16383 master - 0 1651152474000 3 connected 10923-16383
c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384@16384 slave 60fa7e084483feca3af41f269de5a57b526c0ad7 0 1651152476585 2 connected
60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382@16382 master - 0 1651152475573 2 connected 5461-10922
4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386@16386 slave 8335b5349d781c11745ee129f5dbae370dbd3394 0 1651152474566 1 connected
127.0.0.1:6381>
Master/slave topology (reconstructed from the output above): 6381 → 6386, 6382 → 6384, 6383 → 6385 (master → slave).
Each line consists of the following fields:
- id: the node ID, a 40-character random string created when the node first starts and never changed afterwards (unless CLUSTER RESET HARD is used).
- ip:port: the address clients use to talk to the node.
- flags: comma-separated flags: myself, master, slave, fail?, fail, handshake, noaddr, noflags, explained below.
  - myself: the node we are currently connected to.
  - master: the node is a master.
  - slave: the node is a slave.
  - fail?: the node is in PFAIL state: this node cannot reach it, but logically it may still be reachable (not yet FAIL).
  - fail: the node is in FAIL state. It is promoted from PFAIL to FAIL when a majority of nodes cannot reach it.
  - handshake: a not-yet-trusted node we are currently handshaking with.
  - noaddr: no address is known for this node.
  - noflags: no flags at all.
- master: if the node is a slave and its master is known, the master's node ID; otherwise "-".
- ping-sent: unix time in milliseconds of the most recent ping sent, 0 if there is no pending ping.
- pong-recv: unix time of the last pong received.
- config-epoch: the node's config epoch (or that of its current master, if the node is a slave). A new, unique, increasing epoch is created on every failover; when multiple nodes claim the same hash slot, the one with the higher epoch wins.
- link-state: the state of the node-to-node cluster bus link used to communicate with this node: connected or disconnected.
- slot: a hash slot number or range. Starting from the 9th field there may be up to 16384 entries (a limit never reached in practice), listing every slot this node serves. A single value means one slot; a range written start-end means the node serves all slots from start through end inclusive.
3. Master/Slave Failover Case
1. Reading and writing data
1. Start the 6-node cluster and connect with exec
root@docker:/data# redis-cli -p 6381
127.0.0.1:6381> set k1 v1
(error) MOVED 12706 192.168.130.132:6383
127.0.0.1:6381> set k2 v2
OK
127.0.0.1:6381> set k3 v3
OK
127.0.0.1:6381> set k4 v4
(error) MOVED 8455 192.168.130.132:6382
127.0.0.1:6381>
k1 and k4 were not stored. The error
(error) MOVED 12706 192.168.130.132:6383
means: this key belongs to slot 12706, go to the Redis at 6383 to store it.
2. Add the -c flag so the client follows redirects, then set the two keys again
root@docker:/data# redis-cli -p 6381 -c
127.0.0.1:6381> set k1 v1
-> Redirected to slot [12706] located at 192.168.130.132:6383
OK
192.168.130.132:6383> set k4 v4
-> Redirected to slot [8455] located at 192.168.130.132:6382
OK
192.168.130.132:6382> get k4
"v4"
192.168.130.132:6382>
Redirected to slot [8455] located at 192.168.130.132:6382: the client was redirected to 6382.
3. Check the cluster state
redis-cli --cluster check 192.168.130.132:6381
root@docker:/data# redis-cli --cluster check 192.168.130.132:6381
192.168.130.132:6381 (8335b534...) -> 2 keys | 5461 slots | 1 slaves.
192.168.130.132:6383 (8dbe8b34...) -> 1 keys | 5461 slots | 1 slaves.
192.168.130.132:6382 (60fa7e08...) -> 1 keys | 5462 slots | 1 slaves.
[OK] 4 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.130.132:6381)
M: 8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385
slots: (0 slots) slave
replicates 8dbe8b347410cf87d62933382b73693405535ba1
M: 8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384
slots: (0 slots) slave
replicates 60fa7e084483feca3af41f269de5a57b526c0ad7
M: 60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: 4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386
slots: (0 slots) slave
replicates 8335b5349d781c11745ee129f5dbae370dbd3394
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@docker:/data#
2. Failover migration
1. Swap master 6381 with its slave: first stop master 6381
[root@docker ~]# docker stop redis-node-1
redis-node-1
[root@docker ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d6fc3ef2855b redis "docker-entrypoint.s…" 47 hours ago Up 47 hours redis-node-6
9c8868d69a50 redis "docker-entrypoint.s…" 47 hours ago Up 47 hours redis-node-5
7fbb5345951a redis "docker-entrypoint.s…" 47 hours ago Up 47 hours redis-node-4
d53b9d5af1ac redis "docker-entrypoint.s…" 47 hours ago Up 47 hours redis-node-3
fe0e430cb940 redis "docker-entrypoint.s…" 47 hours ago Up 47 hours redis-node-2
[root@docker ~]#
With master 6381 stopped, its real slave takes over: node 6 becomes a master.
2. Check the cluster
127.0.0.1:6382> cluster nodes
4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386@16386 master - 0 1651158299456 7 connected 0-5460
c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384@16384 slave 60fa7e084483feca3af41f269de5a57b526c0ad7 0 1651158300472 2 connected
60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382@16382 myself,master - 0 1651158298000 2 connected 5461-10922
8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383@16383 master - 0 1651158298444 3 connected 10923-16383
8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381@16381 master,fail - 1651154146064 1651154140969 1 disconnected
b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385@16385 slave 8dbe8b347410cf87d62933382b73693405535ba1 0 1651158297000 3 connected
127.0.0.1:6382>
3. Restore 3 masters and 3 slaves
After node 1 is restarted, node 6 remains the master, and node 1, formerly a master, comes back as a slave.
docker start redis-node-1
Stop node 6, then start it again, so that node 1 regains the master role:
docker stop redis-node-6
docker start redis-node-6
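An alternative worth knowing (not used in the original walkthrough): instead of bouncing node 6, you could promote node 1 directly with a manual, coordinated failover by running CLUSTER FAILOVER on the replica you want promoted:

docker exec -it redis-node-1 redis-cli -p 6381 cluster failover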
Check the cluster state:
root@docker:/data# redis-cli --cluster check 192.168.130.132:6381
192.168.130.132:6381 (8335b534...) -> 2 keys | 5461 slots | 1 slaves.
192.168.130.132:6382 (60fa7e08...) -> 1 keys | 5462 slots | 1 slaves.
192.168.130.132:6383 (8dbe8b34...) -> 1 keys | 5461 slots | 1 slaves.
[OK] 4 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.130.132:6381)
M: 8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384
slots: (0 slots) slave
replicates 60fa7e084483feca3af41f269de5a57b526c0ad7
S: 4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386
slots: (0 slots) slave
replicates 8335b5349d781c11745ee129f5dbae370dbd3394
M: 60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
M: 8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385
slots: (0 slots) slave
replicates 8dbe8b347410cf87d62933382b73693405535ba1
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@docker:/data#
4. Master/Slave Scale-Out Case
1. Create two new nodes, 6387 and 6388, start them, and check that there are now 8 containers
docker run -d --name redis-node-7 --net host --privileged=true -v /data/redis/share/redis-node-7:/data redis --cluster-enabled yes --appendonly yes --port 6387
docker run -d --name redis-node-8 --net host --privileged=true -v /data/redis/share/redis-node-8:/data redis --cluster-enabled yes --appendonly yes --port 6388
2. Enter the 6387 container
docker exec -it redis-node-7 /bin/bash
3. Add the new 6387 node (holding no slots yet) to the existing cluster as a master. In the command below, the first address is the node to add and the second is any existing member that introduces it:
redis-cli --cluster add-node 192.168.130.132:6387 192.168.130.132:6381
root@docker:/data# redis-cli --cluster add-node 192.168.130.132:6387 192.168.130.132:6381
>>> Adding node 192.168.130.132:6387 to cluster 192.168.130.132:6381
>>> Performing Cluster Check (using node 192.168.130.132:6381)
M: 8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384
slots: (0 slots) slave
replicates 60fa7e084483feca3af41f269de5a57b526c0ad7
S: 4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386
slots: (0 slots) slave
replicates 8335b5349d781c11745ee129f5dbae370dbd3394
M: 60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
M: 8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385
slots: (0 slots) slave
replicates 8dbe8b347410cf87d62933382b73693405535ba1
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.130.132:6387 to make it join the cluster.
[OK] New node added correctly.
root@docker:/data#
4. Check the cluster
redis-cli --cluster check 192.168.130.132:6381
root@docker:/data# redis-cli --cluster check 192.168.130.132:6381
192.168.130.132:6381 (8335b534...) -> 2 keys | 5461 slots | 1 slaves.
192.168.130.132:6382 (60fa7e08...) -> 1 keys | 5462 slots | 1 slaves.
192.168.130.132:6383 (8dbe8b34...) -> 1 keys | 5461 slots | 1 slaves.
192.168.130.132:6387 (34b689b7...) -> 0 keys | 0 slots | 0 slaves.
Clearly, 6387 holds no slots yet.
5. Reassign slot numbers
redis-cli --cluster reshard 192.168.130.132:6381
Enter the number of slots to move; here we enter 4096 (16384 slots split evenly across 4 masters).
Then enter the target node ID. Only one can be given; since we are migrating to 6387, enter 6387's ID.
Next enter the source node IDs; redis will take roughly equal numbers of slots from those sources and migrate them to 6387. Enter done to finish the list (or type all to draw from every master).
Finally type yes to confirm the plan.
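For scripting, the same reshard can be run non-interactively. The following sketch uses the node IDs from this cluster's check output and should be equivalent to the interactive answers above:

redis-cli --cluster reshard 192.168.130.132:6381 \
  --cluster-from 8335b5349d781c11745ee129f5dbae370dbd3394,60fa7e084483feca3af41f269de5a57b526c0ad7,8dbe8b347410cf87d62933382b73693405535ba1 \
  --cluster-to 34b689b791d9945a0b761349f1bc7b64f0be876f \
  --cluster-slots 4096 --cluster-yes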
6. Check the cluster
redis-cli --cluster check 192.168.130.132:6381
root@docker:/data# redis-cli --cluster check 192.168.130.132:6381
192.168.130.132:6381 (8335b534...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6382 (60fa7e08...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6383 (8dbe8b34...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6387 (34b689b7...) -> 1 keys | 4096 slots | 0 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.130.132:6381)
M: 8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381
slots:[1365-5460] (4096 slots) master
1 additional replica(s)
S: c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384
slots: (0 slots) slave
replicates 60fa7e084483feca3af41f269de5a57b526c0ad7
S: 4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386
slots: (0 slots) slave
replicates 8335b5349d781c11745ee129f5dbae370dbd3394
M: 60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382
slots:[6827-10922] (4096 slots) master
1 additional replica(s)
M: 8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
M: 34b689b791d9945a0b761349f1bc7b64f0be876f 192.168.130.132:6387
slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385
slots: (0 slots) slave
replicates 8dbe8b347410cf87d62933382b73693405535ba1
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@docker:/data#
M: 34b689b791d9945a0b761349f1bc7b64f0be876f 192.168.130.132:6387
slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
Why does 6387 hold three new ranges while the old masters keep contiguous ones? A full redistribution would cost too much, so each of the three existing masters simply donates a chunk: 6381, 6382, and 6383 each hand roughly 1365 slots over to the new node 6387.
7. Attach slave 6388 to master 6387. The --cluster-slave flag adds the new node as a replica, and --cluster-master-id names its master:
redis-cli --cluster add-node 192.168.130.132:6388 192.168.130.132:6387 --cluster-slave --cluster-master-id 34b689b791d9945a0b761349f1bc7b64f0be876f
8. Check the cluster
redis-cli --cluster check 192.168.130.132:6381
root@docker:/data# redis-cli --cluster check 192.168.130.132:6381
192.168.130.132:6381 (8335b534...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6382 (60fa7e08...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6383 (8dbe8b34...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6387 (34b689b7...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.130.132:6381)
M: 8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381
slots:[1365-5460] (4096 slots) master
1 additional replica(s)
S: c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384
slots: (0 slots) slave
replicates 60fa7e084483feca3af41f269de5a57b526c0ad7
S: 4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386
slots: (0 slots) slave
replicates 8335b5349d781c11745ee129f5dbae370dbd3394
M: 60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382
slots:[6827-10922] (4096 slots) master
1 additional replica(s)
M: 8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
S: 4b4b4a8a4d50548e954b46e921ff8085ed555c39 192.168.130.132:6388
slots: (0 slots) slave
replicates 34b689b791d9945a0b761349f1bc7b64f0be876f
M: 34b689b791d9945a0b761349f1bc7b64f0be876f 192.168.130.132:6387
slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
1 additional replica(s)
S: b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385
slots: (0 slots) slave
replicates 8dbe8b347410cf87d62933382b73693405535ba1
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@docker:/data#
6387 now has one slave.
5. Master/Slave Scale-In Case
1. Check the cluster to obtain 6388's node ID
redis-cli --cluster check 192.168.130.132:6382
S: 4b4b4a8a4d50548e954b46e921ff8085ed555c39 192.168.130.132:6388
slots: (0 slots) slave
replicates 34b689b791d9945a0b761349f1bc7b64f0be876f
M: 34b689b791d9945a0b761349f1bc7b64f0be876f 192.168.130.132:6387
slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
1 additional replica(s)
S: b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385
slots: (0 slots) slave
replicates 8dbe8b347410cf87d62933382b73693405535ba1
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@docker:/data#
The node ID is: 4b4b4a8a4d50548e954b46e921ff8085ed555c39
2. Remove node 6388 from the cluster
redis-cli --cluster del-node 192.168.130.132:6388 4b4b4a8a4d50548e954b46e921ff8085ed555c39
root@docker:/data# redis-cli --cluster del-node 192.168.130.132:6388 4b4b4a8a4d50548e954b46e921ff8085ed555c39
>>> Removing node 4b4b4a8a4d50548e954b46e921ff8085ed555c39 from cluster 192.168.130.132:6388
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
root@docker:/data#
Check the cluster:
redis-cli --cluster check 192.168.130.132:6381
root@docker:/data# redis-cli --cluster check 192.168.130.132:6381
192.168.130.132:6381 (8335b534...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6382 (60fa7e08...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6383 (8dbe8b34...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6387 (34b689b7...) -> 1 keys | 4096 slots | 0 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.130.132:6381)
M: 8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381
slots:[1365-5460] (4096 slots) master
1 additional replica(s)
S: c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384
slots: (0 slots) slave
replicates 60fa7e084483feca3af41f269de5a57b526c0ad7
S: 4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386
slots: (0 slots) slave
replicates 8335b5349d781c11745ee129f5dbae370dbd3394
M: 60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382
slots:[6827-10922] (4096 slots) master
1 additional replica(s)
M: 8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
M: 34b689b791d9945a0b761349f1bc7b64f0be876f 192.168.130.132:6387
slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385
slots: (0 slots) slave
replicates 8dbe8b347410cf87d62933382b73693405535ba1
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@docker:/data#
Clearly, 6387's slave is gone: 6388 has been removed, and only 7 containers remain.
3. Empty 6387's slots and reassign them; in this example all reclaimed slots go to 6381
redis-cli --cluster reshard 192.168.130.132:6381
I didn't capture screenshots of this step (the flow follows 阳哥's screenshots): when prompted, answer 4096 for the number of slots, give 6381's ID as the target, 6387's ID as the single source, then done and yes.
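The equivalent non-interactive form (a sketch using this cluster's IDs) would be:

redis-cli --cluster reshard 192.168.130.132:6381 \
  --cluster-from 34b689b791d9945a0b761349f1bc7b64f0be876f \
  --cluster-to 8335b5349d781c11745ee129f5dbae370dbd3394 \
  --cluster-slots 4096 --cluster-yes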
4. Check the cluster state
redis-cli --cluster check 192.168.130.132:6381
root@docker:/data# redis-cli --cluster check 192.168.130.132:6381
192.168.130.132:6381 (8335b534...) -> 2 keys | 8192 slots | 1 slaves.
192.168.130.132:6382 (60fa7e08...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6383 (8dbe8b34...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6387 (34b689b7...) -> 0 keys | 0 slots | 0 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.130.132:6381)
M: 8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381
slots:[0-6826],[10923-12287] (8192 slots) master
1 additional replica(s)
S: c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384
slots: (0 slots) slave
replicates 60fa7e084483feca3af41f269de5a57b526c0ad7
S: 4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386
slots: (0 slots) slave
replicates 8335b5349d781c11745ee129f5dbae370dbd3394
M: 60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382
slots:[6827-10922] (4096 slots) master
1 additional replica(s)
M: 8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
M: 34b689b791d9945a0b761349f1bc7b64f0be876f 192.168.130.132:6387
slots: (0 slots) master
S: b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385
slots: (0 slots) slave
replicates 8dbe8b347410cf87d62933382b73693405535ba1
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
6381 now holds 8192 slots.
5. Delete 6387
redis-cli --cluster del-node 192.168.130.132:6387 34b689b791d9945a0b761349f1bc7b64f0be876f
Check the cluster once more:
redis-cli --cluster check 192.168.130.132:6381
root@docker:/data# redis-cli --cluster check 192.168.130.132:6381
192.168.130.132:6381 (8335b534...) -> 2 keys | 8192 slots | 1 slaves.
192.168.130.132:6382 (60fa7e08...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6383 (8dbe8b34...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 4 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.130.132:6381)
M: 8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381
slots:[0-6826],[10923-12287] (8192 slots) master
1 additional replica(s)
S: c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384
slots: (0 slots) slave
replicates 60fa7e084483feca3af41f269de5a57b526c0ad7
S: 4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386
slots: (0 slots) slave
replicates 8335b5349d781c11745ee129f5dbae370dbd3394
M: 60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382
slots:[6827-10922] (4096 slots) master
1 additional replica(s)
M: 8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
S: b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385
slots: (0 slots) slave
replicates 8dbe8b347410cf87d62933382b73693405535ba1
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@docker:/data#
6387 has indeed been deleted; the cluster is back to 3 masters and 3 slaves.