Consul Distributed Cluster Setup, Basic Functionality Testing, and Failure Recovery
Environment Preparation
Five machines:
Operating System               IP
Ubuntu 16.04.3 LTS x86_64      192.168.1.185
Ubuntu 16.10 x86_64            192.168.3.152
Ubuntu 12.04.2 LTS x86_64      192.168.1.235
Windows 10 Professional        192.168.3.187
Ubuntu 16.04.2 LTS x86_64      192.168.3.150
Download the consul executable for your operating system from the Consul website (https://www.consul.io/downloads.html) and place it in a directory on the system PATH.
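As a minimal sketch, installation on one of the Ubuntu machines might look like the following; the 0.9.2 Linux amd64 archive name is an assumption, chosen only to match the Build column shown later by consul members, so adjust the version and architecture to whatever you actually download:

wget https://releases.hashicorp.com/consul/0.9.2/consul_0.9.2_linux_amd64.zip   # download the release archive
unzip consul_0.9.2_linux_amd64.zip                                              # extract the single consul binary
sudo mv consul /usr/local/bin/                                                  # put the binary on the PATH
consul version                                                                  # verify the installation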
Starting the Cluster
Start consul on 192.168.1.185:
consul agent -server -bootstrap-expect 3 -data-dir /tmp/consul -node 192.168.1.185 -datacenter huanan -ui
Start consul on 192.168.3.152:
consul agent -server -bootstrap-expect 3 -data-dir /tmp/consul -node 192.168.3.152 -datacenter huanan -ui
Start consul on 192.168.1.235:
consul agent -server -bootstrap-expect 3 -data-dir /tmp/consul -node 192.168.1.235 -datacenter huanan -ui
At this point all three machines will print:
2017/09/07 14:54:26 [WARN] raft: no known peers, aborting election
2017/09/07 14:54:26 [ERR] agent: failed to sync remote state: No cluster leader
The three machines have not yet joined each other, so they do not form a cluster; consul cannot work properly on any of them because no leader has been elected.
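As a quick sanity check, the leader status can also be queried over the HTTP API; this is a sketch assuming each agent listens on the default local HTTP port 8500. With no leader elected the endpoint returns an empty string:

curl http://127.0.0.1:8500/v1/status/leader    # prints "" until a leader exists, then the leader's IP:8300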
Forming a Consul Cluster from the Three Machines
Join 192.168.3.152 to 192.168.1.185:
chenchong@ubuntu-rebuild:$ consul join 192.168.1.185
Successfully joined cluster by contacting 1 nodes.
Join 192.168.1.235 to 192.168.1.185:
chenchong@user-SMBios: $ consul join 192.168.1.185
Successfully joined cluster by contacting 1 nodes.
Shortly afterwards, all three machines will print:
consul: New leader elected: 192.168.1.235
This shows that a leader has been elected and the cluster can work normally.
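As a side note (a sketch, not part of the original walkthrough), the manual consul join step can be avoided by passing -retry-join on the non-bootstrap nodes so that they keep trying to contact 192.168.1.185 until it is reachable, for example:

consul agent -server -bootstrap-expect 3 -data-dir /tmp/consul -node 192.168.3.152 -datacenter huanan -ui -retry-join 192.168.1.185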
Checking the Cluster Status
chenchong@ubuntu-rebuild:$ consul operator raft list-peers
Node           ID                  Address             State     Voter  RaftProtocol
192.168.1.235  192.168.1.235:8300  192.168.1.235:8300  leader    true   2
192.168.1.185  192.168.1.185:8300  192.168.1.185:8300  follower  true   2
192.168.3.152  192.168.3.152:8300  192.168.3.152:8300  follower  true   2
user@ubuntu:~$ consul operator raft list-peers
Node           ID                  Address             State     Voter  RaftProtocol
192.168.1.235  192.168.1.235:8300  192.168.1.235:8300  leader    true   2
192.168.1.185  192.168.1.185:8300  192.168.1.185:8300  follower  true   2
192.168.3.152  192.168.3.152:8300  192.168.3.152:8300  follower  true   2
chenchong@user-SMBIOS:~$ consul operator raft list-peers
Node           ID                  Address             State     Voter  RaftProtocol
192.168.1.235  192.168.1.235:8300  192.168.1.235:8300  leader    true   2
192.168.1.185  192.168.1.185:8300  192.168.1.185:8300  follower  true   2
192.168.3.152  192.168.3.152:8300  192.168.3.152:8300  follower  true   2
As can be seen, 192.168.1.235 is the leader and 192.168.1.185 and 192.168.3.152 are followers.
Cluster Key/Value get/set Test
set/get on 192.168.1.185:
chenchong@ubuntu-rebuild:~$ consul kv put key value
Success! Data written to: key
chenchong@ubuntu-rebuild:~$ consul kv get key
value
On 192.168.1.185 the key can be set to value and read back normally.
Get the value of key on 192.168.1.235:
chenchong@user-SMBIOS:~$ consul kv get key
value
Get the value of key on 192.168.3.152:
user@ubuntu:~$ consul kv get key
value
All three machines return value for key, so the value has been synchronized across the cluster.
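The same key/value operations are also available over the HTTP API; a minimal sketch, again assuming the default local HTTP port 8500:

curl -X PUT -d 'value' http://127.0.0.1:8500/v1/kv/key    # equivalent to: consul kv put key value
curl http://127.0.0.1:8500/v1/kv/key?raw                  # equivalent to: consul kv get key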
Handling a Single-Machine Failure
A consul process fails on one machine
Kill the consul process on any one of the three machines; here we kill the leader, 192.168.1.235.
Run on 192.168.1.235:
chenchong@user-SMBIOS:~$ killall consul
chenchong@user-SMBIOS:~$
Now look at the consul log on that machine:
2017/09/07 15:30:48 [INFO] agent: Endpoints down
2017/09/07 15:30:48 [INFO] Exit code: 1
The consul process has exited on this machine.
Both 192.168.1.185 and 192.168.3.152 will now print:
[ERR] agent: failed to sync remote state: No cluster leader
[WARN] raft: Heartbeat timeout from "192.168.1.235:8300" reached, starting election
The leader has been lost, and the remaining two machines start a new leader election.
The two machines then print:
[INFO] consul: New leader elected: 192.168.3.152
A new leader, 192.168.3.152, has been elected.
Now perform key get/set operations on 192.168.1.185 and 192.168.3.152:
chenchong@ubuntu-rebuild:~$ consul kv get key
value
user@ubuntu:~$ consul kv get key
value
Both machines can still get the value of key as value.
Change the value of key to value1, then get it back:
chenchong@ubuntu-rebuild:~$ consul kv put key value1
Success! Data written to: key
chenchong@ubuntu-rebuild:~$ consul kv get key
value1
user@ubuntu:~$ consul kv get key
value1
The value of key comes back as value1 in both cases, which shows that the cluster keeps working as long as the number of live instances is at least N/2+1 = 3/2+1 = 2.
Now check the cluster status:
chenchong@ubuntu-rebuild:~$ consul operator raft list-peers
Node           ID                  Address             State     Voter  RaftProtocol
192.168.1.235  192.168.1.235:8300  192.168.1.235:8300  follower  true   2
192.168.1.185  192.168.1.185:8300  192.168.1.185:8300  follower  true   2
192.168.3.152  192.168.3.152:8300  192.168.3.152:8300  leader    true   2
user@ubuntu:~$ consul operator raft list-peers
Node           ID                  Address             State     Voter  RaftProtocol
192.168.1.235  192.168.1.235:8300  192.168.1.235:8300  follower  true   2
192.168.1.185  192.168.1.185:8300  192.168.1.185:8300  follower  true   2
192.168.3.152  192.168.3.152:8300  192.168.3.152:8300  leader    true   2
This shows that 192.168.3.152 is now the leader. Although consul on 192.168.1.235 has been killed, the cluster still lists 192.168.1.235 (now as a follower) and keeps waiting for it to rejoin; unless 192.168.1.235 is removed manually, it will not disappear from the cluster.
Check the members status:
user@ubuntu:~$ consul members
Node           Address             Status  Type    Build  Protocol  DC
192.168.1.185  192.168.1.185:8301  alive   server  0.9.2  2         huanan
192.168.1.235  192.168.1.235:8301  failed  server  0.9.2  2         huanan
192.168.3.152  192.168.3.152:8301  alive   server  0.9.2  2         huanan
chenchong@ubuntu-rebuild:~$ consul members
Node           Address             Status  Type    Build  Protocol  DC
192.168.1.185  192.168.1.185:8301  alive   server  0.9.2  2         huanan
192.168.1.235  192.168.1.235:8301  failed  server  0.9.2  2         huanan
192.168.3.152  192.168.3.152:8301  alive   server  0.9.2  2         huanan
Both members outputs show that the Status of 192.168.1.235 is failed.
Because the consul process on 192.168.1.235 has been killed, both 192.168.1.185 and 192.168.3.152 keep printing:
[ERR] raft: Failed to AppendEntries to {Voter 192.168.1.235:8300 192.168.1.235:8300}: dial tcp <nil>->192.168.1.235:8300: getsockopt: connection refused
192.168.1.185 and 192.168.3.152 are both waiting for 192.168.1.235 to come back.
The consul force-leave command can be used to stop these reconnection attempts.
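A sketch of that command, run from any live member and using the node name that 192.168.1.235 was registered with (-node 192.168.1.235 above):

consul force-leave 192.168.1.235    # transition the failed member to the "left" state so the peers stop retrying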
Restarting the consul process on the original machine
Restart the consul process on 192.168.1.235:
chenchong@user-SMBIOS:~/consul$ consul agent -server -bootstrap-expect 3 -data-dir /tmp/consul -node 192.168.1.235 -datacenter huanan -ui
The following log lines appear:
[INFO] raft: Node at 192.168.1.235:8300 [Follower] entering Follower state (Leader: "")
[INFO] serf: Re-joined to previously known node: 192.168.3.152: 192.168.3.152:8301
192.168.1.235 rejoins the cluster as a follower.
Now get the value of key on 192.168.1.235:
chenchong@user-SMBIOS:~$ consul kv get key
value1
This shows that after the restart, 192.168.1.235 has the synchronized value of key, and 192.168.1.185 and 192.168.3.152 no longer print errors about failing to reach 192.168.1.235.
Starting consul on a new machine and joining it to the cluster
When a machine-level failure occurs (or a machine is migrated) and consul can no longer be restarted on the old machine, a consul process must be started on a new machine and joined to the cluster to keep the consul cluster highly available.
Keep consul stopped on 192.168.1.235 and start consul on 192.168.3.187:
C:\Users\Think>consul agent -server -bootstrap-expect 3 -data-dir /tmp/consul -node 192.168.3.187 -datacenter huanan -ui
Remove 192.168.1.235 from the cluster:
user@ubuntu:~$ consul operator raft remove-peer -id=192.168.1.235:8300
Removed peer with id "192.168.1.235:8300"
Now check the cluster peers:
user@ubuntu:~$ consul operator raft list-peers
Node           ID                  Address             State     Voter  RaftProtocol
192.168.1.185  192.168.1.185:8300  192.168.1.185:8300  follower  true   2
192.168.3.152  192.168.3.152:8300  192.168.3.152:8300  leader    true   2
Join consul on 192.168.3.187 to the cluster:
C:\Users\Think>consul join 192.168.3.152
Successfully joined cluster by contacting 1 nodes.
At this point the consul process on this machine prints:
[ERR] agent: Coordinate update error: No cluster leader
[INFO] raft: Node at 192.168.3.187:8300 [Follower] entering Follower state (Leader: "")
This shows that 192.168.3.187 has joined the cluster as a follower.
Now check the cluster peers:
C:\Users\Think>consul operator raft list-peers
Node           ID                  Address             State     Voter  RaftProtocol
192.168.1.185  192.168.1.185:8300  192.168.1.185:8300  follower  true   2
192.168.3.152  192.168.3.152:8300  192.168.3.152:8300  leader    true   2
192.168.3.187  192.168.3.187:8300  192.168.3.187:8300  follower  true   2
chenchong@ubuntu-rebuild:~$ consul operator raft list-peers
Node           ID                  Address             State     Voter  RaftProtocol
192.168.1.185  192.168.1.185:8300  192.168.1.185:8300  follower  true   2
192.168.3.152  192.168.3.152:8300  192.168.3.152:8300  leader    true   2
192.168.3.187  192.168.3.187:8300  192.168.3.187:8300  follower  true   2
user@ubuntu:~$ consul operator raft list-peers
Node           ID                  Address             State     Voter  RaftProtocol
192.168.1.185  192.168.1.185:8300  192.168.1.185:8300  follower  true   2
192.168.3.152  192.168.3.152:8300  192.168.3.152:8300  leader    true   2
192.168.3.187  192.168.3.187:8300  192.168.3.187:8300  follower  true   2
Now get the value of key on 192.168.3.187:
C:\Users\Think>consul kv get key
value1
This shows that the consul process on 192.168.3.187 has joined the cluster and the data has been synchronized.
Handling Multi-Machine Failures
When fewer than N/2+1 of the machines in the cluster are available, the Raft algorithm renders the cluster unavailable. In a three-machine consul cluster, once two machines are down the cluster is completely unavailable, and recovering it requires a series of manual steps.
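To make the arithmetic concrete: with 3 servers the quorum is 3/2+1 = 2, so one failure is tolerated; with 5 servers the quorum is 3, so two failures are tolerated. In the scenario below, two of the three servers are down, leaving a single server, which is below the quorum of 2, so the cluster cannot elect a leader.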
Simulating the failure
Start the three-machine cluster as in "Starting the Cluster", then simulate 192.168.3.152 and 192.168.1.235 going down by killing their consul processes, leaving only the consul process on 192.168.1.185 running. Any get/set on 192.168.1.185 now fails with:
Unexpected response code: 500 (No cluster leader)
Recovering the cluster
Reference: https://www.consul.io/docs/guides/outage.html
Assume 192.168.3.187 and 192.168.3.150 will replace 192.168.3.152 and 192.168.1.235 in the cluster.
(1) Add a raft/peers.json file on 192.168.1.185 and restart consul
In the raft subdirectory of the -data-dir used by consul on 192.168.1.185, create a peers.json file with the following content:
[
"192.168.1.185:8300",
"192.168.3.187:8300",
"192.168.3.150:8300"
]
Besides the original 192.168.1.185, the new 192.168.3.187 and 192.168.3.150 will make up the cluster.
Then restart the consul process.
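A minimal sketch of this step on 192.168.1.185, assuming the -data-dir /tmp/consul used throughout this article:

cat > /tmp/consul/raft/peers.json <<'EOF'
[
  "192.168.1.185:8300",
  "192.168.3.187:8300",
  "192.168.3.150:8300"
]
EOF
consul agent -server -bootstrap-expect 3 -data-dir /tmp/consul -node 192.168.1.185 -datacenter huanan -ui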
(2) Start consul on 192.168.3.187 and 192.168.3.150
consul agent -server -bootstrap-expect 3 -data-dir /tmp/consul -node 192.168.3.187 -datacenter huanan -ui
consul agent -server -bootstrap-expect 3 -data-dir /tmp/consul -node 192.168.3.150 -datacenter huanan -ui
(3) Join 192.168.3.187 and 192.168.3.150 to 192.168.1.185
Run on both 192.168.3.187 and 192.168.3.150:
consul join 192.168.1.185
Now check the cluster status on each of the three machines in the new cluster:
C:\Users\Think>consul operator raft list-peers
Node           ID                  Address             State     Voter  RaftProtocol
192.168.1.185  192.168.1.185:8300  192.168.1.185:8300  leader    true   2
192.168.3.187  192.168.3.187:8300  192.168.3.187:8300  follower  true   2
192.168.3.150  192.168.3.150:8300  192.168.3.150:8300  follower  true   2
chenchong@ubuntu-rebuild:~$ consul operator raft list-peers
Node           ID                  Address             State     Voter  RaftProtocol
192.168.1.185  192.168.1.185:8300  192.168.1.185:8300  leader    true   2
192.168.3.187  192.168.3.187:8300  192.168.3.187:8300  follower  true   2
192.168.3.150  192.168.3.150:8300  192.168.3.150:8300  follower  true   2
user@ubuntu:~$ consul operator raft list-peers
Node           ID                  Address             State     Voter  RaftProtocol
192.168.1.185  192.168.1.185:8300  192.168.1.185:8300  leader    true   2
192.168.3.187  192.168.3.187:8300  192.168.3.187:8300  follower  true   2
192.168.3.150  192.168.3.150:8300  192.168.3.150:8300  follower  true   2
This shows the cluster is available again.
Query the value of key:
user@ubuntu:~$ consul kv get key
value1
chenchong@ubuntu-rebuild:~$ consul kv get key
value1
C:\Users\Think>consul kv get key
value1
This shows the data has been synchronized and the cluster has fully recovered.
————————————————
Copyright notice: this is an original article by the CSDN blogger 「大副」, released under the CC 4.0 BY-SA license; please include the original link and this notice when reposting.
Original link: https://blog.csdn.net/chenchong08/article/details/77885989