Deploying a Redis 4.0 Cluster
Posted by Mr. Pan
I. Deployment Environment
1. Stop iptables (firewalld) or add the required allow rules
2. Disable SELinux (example commands for these two steps are sketched right after this list)
3. Deploy the Redis instances; see: https://www.cnblogs.com/panwenbin-logs/p/10242027.html
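For reference, on CentOS 7 the first two steps usually come down to the following commands (a minimal sketch; adapt it to your own security policy rather than disabling protection blindly):
systemctl stop firewalld && systemctl disable firewalld    # or add explicit allow rules for ports 6379 and 16379 (the cluster bus port)
setenforce 0                                               # turn SELinux off for the running system
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # keep it off after a reboot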
II. Deploying the Cluster
1. Modify the Redis configuration file and enable cluster mode (on all six machines)
[root@redis-master ~]# grep "^[a-Z]" /etc/redis/redis_6379.conf    # this output can be copied and used directly
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile /var/log/redis/redis_6379.log
databases 16
always-show-logo yes
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/redis
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
slave-lazy-flush no
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble no
lua-time-limit 5000
cluster-enabled yes    # enable cluster mode
cluster-config-file /etc/redis-cluster/node-6379.conf    # where the node stores its cluster state
cluster-node-timeout 15000    # node timeout in milliseconds
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
[root@redis-master ~]# mkdir -p /etc/redis-cluster    # create the directories for cluster state and log files
[root@redis-master ~]# mkdir -p /var/log/redis
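Assuming the remaining five nodes are reachable over SSH as root at the addresses used later in this article (an assumption about your environment), the configuration file and directories can be pushed out in one loop, for example:
for ip in 192.168.1.133 192.168.1.134 192.168.1.135 192.168.1.136 192.168.1.137; do
  scp /etc/redis/redis_6379.conf root@$ip:/etc/redis/redis_6379.conf      # same config on every node
  ssh root@$ip "mkdir -p /etc/redis-cluster /var/log/redis /var/redis"    # cluster state, log and data directories
done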
2. Install the Ruby dependencies (see: https://www.cnblogs.com/ding2016/p/7903147.html)
# This only needs to be installed on one server
[root@redis-master ~]#gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
[root@redis-master ~]#curl -sSL https://get.rvm.io | bash -s stable
[root@redis-master ~]#source /etc/profile.d/rvm.sh
[root@redis-master ~]#rvm install 2.2.10
[root@redis-master ~]#gem install redis
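Before moving on it is worth confirming the Ruby toolchain is usable (a quick sketch; your exact version output may differ):
[root@redis-master ~]# ruby -v          # should report the 2.2.x version installed via rvm
[root@redis-master ~]# gem list redis   # the redis gem must appear in the list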
3. Create the cluster (start with six nodes)
[root@redis-master ~]#cd /usr/local/redis-4.0.12/src/
[root@redis-master ~]# systemctl start redis    # start the Redis server, then connect and clear any existing data
[root@redis-master ~]# redis-cli
127.0.0.1:6379> flushall
OK
127.0.0.1:6379> cluster reset
OK
127.0.0.1:6379> exit
[root@redis-master ~]#./redis-trib.rb create --replicas 1 192.168.1.132:6379 192.168.1.133:6379 192.168.1.134:6379 192.168.1.135:6379 192.168.1.136:6379 192.168.1.137:6379    # create the cluster
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:    # the masters selected and their addresses
192.168.1.132:6379
192.168.1.133:6379
192.168.1.134:6379
Adding replica 192.168.1.136:6379 to 192.168.1.132:6379    # the slave assigned to each master
Adding replica 192.168.1.137:6379 to 192.168.1.133:6379
Adding replica 192.168.1.135:6379 to 192.168.1.134:6379
M: a0fcce870bed5b4d8bc81467e39d55e5ff4be7e8 192.168.1.132:6379
slots:0-5460 (5461 slots) master
M: 7e07bd4d8656672b0da7add910bfdba49106def3 192.168.1.133:6379
slots:5461-10922 (5462 slots) master
M: 0b5efedf68451a48fa40270ff67e84c03faee56e 192.168.1.134:6379
slots:10923-16383 (5461 slots) master
S: 5b82091b9078ebfb265eedd1b65bf7222a143cd7 192.168.1.135:6379
replicates 0b5efedf68451a48fa40270ff67e84c03faee56e
S: e19150bf1bbc621dde7bd94d9efde3170f211591 192.168.1.136:6379
replicates a0fcce870bed5b4d8bc81467e39d55e5ff4be7e8
S: 7e9e756bcebe8225089dc8652a60c26522b653ff 192.168.1.137:6379
replicates 7e07bd4d8656672b0da7add910bfdba49106def3
Can I set the above configuration? (type 'yes' to accept): yes    # type yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 192.168.1.132:6379)
M: a0fcce870bed5b4d8bc81467e39d55e5ff4be7e8 192.168.1.132:6379
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 7e9e756bcebe8225089dc8652a60c26522b653ff 192.168.1.137:6379
slots: (0 slots) slave
replicates 7e07bd4d8656672b0da7add910bfdba49106def3
M: 7e07bd4d8656672b0da7add910bfdba49106def3 192.168.1.133:6379
slots:5461-10922 (5462 slots) master
1 additional replica(s)
M: 0b5efedf68451a48fa40270ff67e84c03faee56e 192.168.1.134:6379
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: 5b82091b9078ebfb265eedd1b65bf7222a143cd7 192.168.1.135:6379
slots: (0 slots) slave
replicates 0b5efedf68451a48fa40270ff67e84c03faee56e
S: e19150bf1bbc621dde7bd94d9efde3170f211591 192.168.1.136:6379
slots: (0 slots) slave
replicates a0fcce870bed5b4d8bc81467e39d55e5ff4be7e8
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
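The same state can also be confirmed from any node with the CLUSTER INFO and CLUSTER NODES commands, for example:
[root@redis-master ~]# redis-cli -h 192.168.1.132 -p 6379 cluster info     # cluster_state should be ok and cluster_known_nodes should be 6
[root@redis-master ~]# redis-cli -h 192.168.1.132 -p 6379 cluster nodes    # lists every node with its role and slot ranges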
III. Testing Cluster Features
1. Test writes across multiple masters
[root@redis-master ~]# redis-cli -h 192.168.1.132 -p 6379    # connect to one of the masters
192.168.1.132:6379> set k1 v1
(error) MOVED 12706 192.168.1.134:6379
192.168.1.132:6379> set k2 v2
OK
192.168.1.132:6379> set k3 v3
OK
192.168.1.132:6379> set k4 v4
(error) MOVED 8455 192.168.1.133:6379
192.168.1.132:6379> exit
[root@redis-master ~]# redis-cli -h 192.168.1.133 -p 6379    # connect to 133 to verify the error message reported by 132
192.168.1.133:6379> set k1 v1
(error) MOVED 12706 192.168.1.134:6379
192.168.1.133:6379> set k4 v4
OK
192.168.1.133:6379> get k2
(error) MOVED 449 192.168.1.132:6379
192.168.1.133:6379> exit
Why the errors occur:
# A write request can be sent to any master in a Redis cluster.
Every master computes the CRC16 of the key and takes the result modulo 16384 to find the hash slot the key belongs to, and therefore the master that owns that slot.
If the slot is local (for example, set mykey1 v1 where the slot for mykey1 lives on the node you are connected to), the node simply executes the command itself.
If the computed slot belongs to another master, the node returns a MOVED error telling the client which master must execute the write.
This is what multi-master writing means: every key lives on exactly one master, and different masters hold different portions of the data, i.e. distributed data storage.
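You can inspect this slot calculation directly with the CLUSTER KEYSLOT command; the returned slot, matched against the ranges assigned above, tells you which master owns the key:
[root@redis-master ~]# redis-cli -h 192.168.1.132 -p 6379 cluster keyslot k1    # 12706, owned by 192.168.1.134 (slots 10923-16383)
[root@redis-master ~]# redis-cli -h 192.168.1.132 -p 6379 cluster keyslot k2    # 449, owned by 192.168.1.132 (slots 0-5460)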
2. Test reads from each master's own slave -> read/write splitting
[root@redis-master ~]# redis-cli -h 192.168.1.132 -p 6379
192.168.1.132:6379> get k2
"v2"
192.168.1.132:6379> info replication    # show this master's slave address
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.1.136,port=6379,state=online,offset=7179,lag=0    # slave address
192.168.1.132:6379> exit
[root@redis-master ~]# redis-cli -h 192.168.1.136 -p 6379    # connect to the corresponding slave
192.168.1.136:6379> get k2
(error) MOVED 449 192.168.1.132:6379
192.168.1.136:6379> readonly    # a slave can only serve reads after READONLY has been issued on the connection
OK
192.168.1.136:6379> get k2
"v2"
[root@redis-master ~]# redis-cli -c -h 192.168.1.132 -p 6379    # the -c flag makes redis-cli follow cluster redirections automatically
192.168.1.132:6379> set k1 va
-> Redirected to slot [12706] located at 192.168.1.134:6379
OK
# To read from a slave in a Redis cluster, the connection must first issue the READONLY command; only then does get k2 succeed.
Starting redis-cli with -c makes the client handle all of the underlying redirections by itself.
When experimenting with read/write splitting on a Redis cluster you will notice it is quite limited. By default the cluster uses slaves purely for high availability: each master carries one or two slaves that act as hot standbys of its data and take over when the master fails.
A Redis cluster does not serve reads or writes from slave nodes by default, unlike a master/slave setup built by hand on top of replication.
Only after issuing READONLY on a slave node can you read from it; otherwise both reads and writes go to the masters.
The core idea is that with Redis cluster there is effectively no read/write splitting any more.
Read/write splitting exists because a one-master/many-slaves architecture can only grow read throughput by scaling out slave nodes.
Under the Redis cluster architecture the masters themselves can be scaled out: to support more read throughput, write throughput, or data volume, you simply add more masters.
Scaling out masters delivers the higher read throughput as well.
3. Test automatic failover -> high availability
[root@redis-master src]# redis-trib.rb check 192.168.1.132:6379    # check the cluster state
[root@redis-master ~]# redis-cli -h 192.168.1.132 -p 6379
192.168.1.132:6379> info replication    # look up the corresponding slave's IP
192.168.1.132:6379> exit
[root@redis-master src]# redis-trib.rb check 192.168.1.132:6379
>>> Performing Cluster Check (using node 192.168.1.132:6379)
M: a0fcce870bed5b4d8bc81467e39d55e5ff4be7e8 192.168.1.132:6379
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 7e9e756bcebe8225089dc8652a60c26522b653ff 192.168.1.137:6379
slots: (0 slots) slave
replicates 7e07bd4d8656672b0da7add910bfdba49106def3
M: 7e07bd4d8656672b0da7add910bfdba49106def3 192.168.1.133:6379
slots:5461-10922 (5462 slots) master
1 additional replica(s)
M: 0b5efedf68451a48fa40270ff67e84c03faee56e 192.168.1.134:6379
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: 5b82091b9078ebfb265eedd1b65bf7222a143cd7 192.168.1.135:6379
slots: (0 slots) slave
replicates 0b5efedf68451a48fa40270ff67e84c03faee56e
S: e19150bf1bbc621dde7bd94d9efde3170f211591 192.168.1.136:6379
slots: (0 slots) slave
replicates a0fcce870bed5b4d8bc81467e39d55e5ff4be7e8
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@redis-master src]# ps aux|grep redis-server    # find the redis-server process
[root@redis-master src]# kill -9 24778    # forcibly kill the redis process
[root@redis-master src]# redis-trib.rb check 192.168.1.133:6379    # check again: the cluster now shows only five nodes, 3 masters and 2 slaves, and 132 is gone
>>> Performing Cluster Check (using node 192.168.1.133:6379)
M: 7e07bd4d8656672b0da7add910bfdba49106def3 192.168.1.133:6379
slots:5461-10922 (5462 slots) master
1 additional replica(s)
M: e19150bf1bbc621dde7bd94d9efde3170f211591 192.168.1.136:6379
slots:0-5460 (5461 slots) master
0 additional replica(s)
S: 7e9e756bcebe8225089dc8652a60c26522b653ff 192.168.1.137:6379
slots: (0 slots) slave
replicates 7e07bd4d8656672b0da7add910bfdba49106def3
M: 0b5efedf68451a48fa40270ff67e84c03faee56e 192.168.1.134:6379
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: 5b82091b9078ebfb265eedd1b65bf7222a143cd7 192.168.1.135:6379
slots: (0 slots) slave
replicates 0b5efedf68451a48fa40270ff67e84c03faee56e
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@redis-master src]# systemctl start redis    # start the Redis service on 132 again
[root@redis-master src]# redis-trib.rb check 192.168.1.133:6379    # check the cluster again: it is back to six nodes, and 132 has rejoined as a slave
>>> Performing Cluster Check (using node 192.168.1.133:6379)
M: 7e07bd4d8656672b0da7add910bfdba49106def3 192.168.1.133:6379
slots:5461-10922 (5462 slots) master
1 additional replica(s)
M: e19150bf1bbc621dde7bd94d9efde3170f211591 192.168.1.136:6379
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: a0fcce870bed5b4d8bc81467e39d55e5ff4be7e8 192.168.1.132:6379
slots: (0 slots) slave
replicates e19150bf1bbc621dde7bd94d9efde3170f211591
S: 7e9e756bcebe8225089dc8652a60c26522b653ff 192.168.1.137:6379
slots: (0 slots) slave
replicates 7e07bd4d8656672b0da7add910bfdba49106def3
M: 0b5efedf68451a48fa40270ff67e84c03faee56e 192.168.1.134:6379
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: 5b82091b9078ebfb265eedd1b65bf7222a143cd7 192.168.1.135:6379
slots: (0 slots) slave
replicates 0b5efedf68451a48fa40270ff67e84c03faee56e
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@redis-master src]# redis-cli -c -h 192.168.1.136 -p 6379    # fetch data directly from 136 to confirm it was promoted to master, with 132 as its slave
192.168.1.136:6379> get k2
"v2"
192.168.1.136:6379> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.1.132,port=6379,state=online,offset=9237,lag=0
master_replid:273f9539af4494673bfbaf71bb71e94e952c6865
master_replid2:a4fec1a5828ac30ba00db05d81a469eee725032b
master_repl_offset:9237
second_repl_offset:8818
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:9237
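Killing the process is the brutal way to test failover. For completeness, a failover can also be triggered gracefully by running CLUSTER FAILOVER on a slave, which promotes it without losing writes (a sketch, not part of the original experiment; at this point 132 is the slave of 136):
[root@redis-master src]# redis-cli -h 192.168.1.132 -p 6379 cluster failover    # ask the slave on 132 to take over as master again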
IV. Scaling the Redis Cluster Horizontally by Adding Masters
1. Add 138 to the cluster
[root@redis-master src]# redis-trib.rb add-node 192.168.1.138:6379 192.168.1.132:6379    # add 138 to the cluster, using 132 to fetch the cluster metadata
>>> Adding node 192.168.1.138:6379 to cluster 192.168.1.132:6379
>>> Performing Cluster Check (using node 192.168.1.132:6379)
S: a0fcce870bed5b4d8bc81467e39d55e5ff4be7e8 192.168.1.132:6379
slots: (0 slots) slave
replicates e19150bf1bbc621dde7bd94d9efde3170f211591
S: 5b82091b9078ebfb265eedd1b65bf7222a143cd7 192.168.1.135:6379
slots: (0 slots) slave
replicates 0b5efedf68451a48fa40270ff67e84c03faee56e
M: e19150bf1bbc621dde7bd94d9efde3170f211591 192.168.1.136:6379
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 7e9e756bcebe8225089dc8652a60c26522b653ff 192.168.1.137:6379
slots: (0 slots) slave
replicates 7e07bd4d8656672b0da7add910bfdba49106def3
M: 7e07bd4d8656672b0da7add910bfdba49106def3 192.168.1.133:6379
slots:5461-10922 (5462 slots) master
1 additional replica(s)
M: 0b5efedf68451a48fa40270ff67e84c03faee56e 192.168.1.134:6379
slots:10923-16383 (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.1.138:6379 to make it join the cluster.
[OK] New node added correctly.
[root@redis-master src]# redis-trib.rb check 192.168.1.132:6379    # check the cluster state
>>> Performing Cluster Check (using node 192.168.1.132:6379)
S: a0fcce870bed5b4d8bc81467e39d55e5ff4be7e8 192.168.1.132:6379
slots: (0 slots) slave
replicates e19150bf1bbc621dde7bd94d9efde3170f211591
M: 103de424c55593724ef254221a66bc68ba48ef62 192.168.1.138:6379    # the join succeeded, but 138 owns no hash slots yet, so it cannot store any data
slots: (0 slots) master
0 additional replica(s)
S: 5b82091b9078ebfb265eedd1b65bf7222a143cd7 192.168.1.135:6379
slots: (0 slots) slave
replicates 0b5efedf68451a48fa40270ff67e84c03faee56e
M: e19150bf1bbc621dde7bd94d9efde3170f211591 192.168.1.136:6379
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 7e9e756bcebe8225089dc8652a60c26522b653ff 192.168.1.137:6379
slots: (0 slots) slave
replicates 7e07bd4d8656672b0da7add910bfdba49106def3
M: 7e07bd4d8656672b0da7add910bfdba49106def3 192.168.1.133:6379
slots:5461-10922 (5462 slots) master
1 additional replica(s)
M: 0b5efedf68451a48fa40270ff67e84c03faee56e 192.168.1.134:6379
slots:10923-16383 (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
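As a side note, redis-trib.rb can also attach a new node as a slave of an existing master instead of as an empty master, using the --slave and --master-id options. A sketch only (192.168.1.139 is a hypothetical extra node; the master ID is the one shown above for 192.168.1.136):
[root@redis-master src]# redis-trib.rb add-node --slave --master-id e19150bf1bbc621dde7bd94d9efde3170f211591 192.168.1.139:6379 192.168.1.132:6379    # hypothetical new node joining as a slave of 136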
2. Reshard some slots onto 138 (this step threw an error during my test that I have not been able to resolve)
[root@redis-master src]# redis-trib.rb reshard 192.168.1.132:6379    # run the reshard command
>>> Performing Cluster Check (using node 192.168.1.132:6379)
M: 5c4ab42da4058339aa5ae10596f555702545d91a 192.168.1.132:6379
slots:0-5460 (5461 slots) master
1 additional replica(s)
M: 9eed259aaee23f53177efc854541c3df4d7d08a8 192.168.1.138:6379
slots: (0 slots) master
0 additional replica(s)
S: c84b6383c8970a8bf4dd04893b8bad89104b1656 192.168.1.137:6379
slots: (0 slots) slave
replicates f0cab6c760f6e4c9e5dc5ea227b165c7efc33163
S: bed6c13d89f5c6f6baecf45a70ff15c6c12b80c2 192.168.1.136:6379
slots: (0 slots) slave
replicates 5c4ab42da4058339aa5ae10596f555702545d91a
M: f0cab6c760f6e4c9e5dc5ea227b165c7efc33163 192.168.1.133:6379
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: 0c0739812a07c83e3d6d8bda4c9ccfd9f1f3ae14 192.168.1.135:6379
slots: (0 slots) slave
replicates 4000b2e02bc64d8ae327393373c9358f802a1dac
M: 4000b2e02bc64d8ae327393373c9358f802a1dac 192.168.1.134:6379
slots:10923-16383 (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096    # number of slots to move: 16384/4 = 4096 (total slots / number of masters)
What is the receiving node ID? 9eed259aaee23f53177efc854541c3df4d7d08a8    # the master node that will receive the slots
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1:5c4ab42da4058339aa5ae10596f555702545d91a    # a source node to take slots from
Source node #2:f0cab6c760f6e4c9e5dc5ea227b165c7efc33163
Source node #3:4000b2e02bc64d8ae327393373c9358f802a1dac
Source node #4:done    # done ends the list of source nodes
Ready to move 4096 slots.
Source nodes:
.......
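If the interactive prompts error out, redis-trib.rb also accepts the whole reshard plan on the command line; a minimal sketch using the node IDs from the check above (verify them with redis-trib.rb check before running):
[root@redis-master src]# redis-trib.rb reshard --from 5c4ab42da4058339aa5ae10596f555702545d91a,f0cab6c760f6e4c9e5dc5ea227b165c7efc33163,4000b2e02bc64d8ae327393373c9358f802a1dac --to 9eed259aaee23f53177efc854541c3df4d7d08a8 --slots 4096 --yes 192.168.1.132:6379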
[root@redis-master src]# redis-trib.rb check 192.168.1.132:6379
>>> Performing Cluster Check (using node 192.168.1.132:6379)
M: 5c4ab42da4058339aa5ae10596f555702545d91a 192.168.1.132:6379    # each master now holds 4096 slots
slots:1365-5460 (4096 slots) master
1 additional replica(s)
M: 9eed259aaee23f53177efc854541c3df4d7d08a8 192.168.1.138:6379
slots:0-1364,5461-6826,10923-12287 (4096 slots) master
0 additional replica(s)
S: c84b6383c8970a8bf4dd04893b8bad89104b1656 192.168.