Consul Cluster Configuration
Building a Consul cluster:
1. Software Installation
On Linux, download the zip package, unzip it, and move the extracted binary onto the PATH: mv consul /bin
Verify the installation by checking the version:
[root@node1 ~]# consul -v
Consul v1.1.0
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)
Consul download page: https://www.consul.io/downloads.html
Mirror repository: https://yumrepo.b0.upaiyun.com/
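The manual steps above can be scripted. A sketch, assuming version 1.1.0 and the linux_amd64 build (both are assumptions; adjust for your environment):

```shell
#!/bin/sh
# Compose the official HashiCorp release URL for an assumed version/arch.
CONSUL_VERSION="1.1.0"
CONSUL_ZIP="consul_${CONSUL_VERSION}_linux_amd64.zip"
CONSUL_URL="https://releases.hashicorp.com/consul/${CONSUL_VERSION}/${CONSUL_ZIP}"
echo "$CONSUL_URL"
# Then download, unzip, and install onto the PATH:
# wget "$CONSUL_URL" && unzip "$CONSUL_ZIP" && mv consul /bin/
```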
2. Cluster Configuration
Points to consider when planning the cluster:
Before building the cluster, understand how Consul elects its server leader. There are two approaches:
1. Set "bootstrap_expect" in the configuration file.
2. Manually designate the leader first, then have the other two nodes join it.
Option 1: start from the command line
Startup command: /usr/bin/consul agent -config-file=/etc/consul/consul.json
First node:
./consul agent -server -bootstrap-expect 2 -data-dir /opt/consul-data -node=node1 -bind=ip1 -ui -client=0.0.0.0 &
Other nodes:
./consul agent -server -bootstrap-expect 2 -data-dir /opt/consul-data -node=node2 -bind=ip2 -join=ip1 -ui -client=0.0.0.0 &
./consul agent -server -bootstrap-expect 2 -data-dir /opt/consul-data -node=node3 -bind=ip3 -join=ip1 -ui -client=0.0.0.0 &
Option 2: via a configuration file:
[root@node1 ~]# cat /etc/consul/consul.json
{
"data_dir": "/opt/consul-data",
"log_level": "INFO",
"server": true,
"bootstrap_expect": 1,
"retry_join": ["172.16.36.67","172.16.36.50"],
"retry_interval": "3s",
"rejoin_after_leave": true,
"domain": "ycgwl.com",
"client_addr":"0.0.0.0",
"ui": true,
"datacenter": "dc1"
}
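For comparison, a client (non-server) agent could use a config along these lines; the retry_join address and datacenter name below are assumptions carried over from the server example above:

```json
{
  "data_dir": "/opt/consul-data",
  "log_level": "INFO",
  "server": false,
  "retry_join": ["172.16.36.56"],
  "client_addr": "0.0.0.0",
  "datacenter": "dc1"
}
```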
Note: in this setup the leader uses "bootstrap_expect": 1, while the followers use "bootstrap_expect": 2. (Officially, every server should specify the same bootstrap_expect value, the total expected number of servers.)
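bootstrap_expect interacts with Raft's majority rule: a cluster of n servers needs floor(n/2)+1 live servers to elect a leader. A quick check of that arithmetic (standard Raft math, not specific to this setup):

```shell
#!/bin/sh
# Raft quorum: the smallest majority of n servers.
quorum() { echo $(( $1 / 2 + 1 )); }
quorum 3   # a 3-server cluster tolerates 1 failure
quorum 5   # a 5-server cluster tolerates 2 failures
```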
Check whether the cluster nodes started correctly:
[root@node1 ~]# consul members
Node Address Status Type Build Protocol DC Segment
node1 172.16.36.56:8301 alive server 1.1.0 2 dc1 <all>
node2 172.16.36.67:8301 alive server 1.1.0 2 dc1 <all>
node3 172.16.36.50:8301 alive server 1.1.0 2 dc1 <all>
Starting the leader from the command line:
./consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul -node node1 -bind=10.10.49.193 -ui -client 0.0.0.0 -dc dc1 &
i. server: start in server mode.
ii. bootstrap-expect: the number of servers the cluster expects; if the live count drops below it, the cluster loses quorum. (In the author's testing, dropping below it did not actually block access.)
iii. data-dir: directory for Consul's data; see Consul's data synchronization mechanism for details.
iv. node: the node name; must be unique within a cluster.
v. bind: the IP address to listen on.
vi. client: the client interface address.
vii. &: run in the background (shell syntax).
viii. ui: enable the web UI, served on port 8500.
ix. config-dir: directory of service configuration files (every .json file in that directory is read as configuration).
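Since -config-dir reads every .json file in the directory, one malformed file will keep the agent from starting. A pre-flight parse check, assuming python3 is available and using a throwaway directory for the sketch:

```shell
#!/bin/sh
# Hypothetical config directory for illustration.
CONFIG_DIR=/tmp/consul.d
mkdir -p "$CONFIG_DIR"
echo '{"server": true}' > "$CONFIG_DIR/10-server.json"
# Validate each .json file before handing the directory to: consul agent -config-dir ...
for f in "$CONFIG_DIR"/*.json; do
  python3 -m json.tool "$f" > /dev/null && echo "OK: $f"
done
```

Recent Consul releases also ship a built-in `consul validate <dir>` subcommand for the same purpose.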
Visit ip:8500/ui; if the Consul agent UI page loads, startup succeeded, e.g.:
http://172.16.37.39:8500/ui/
Starting via command:
nohup /usr/bin/consul agent -config-file=/etc/consul/consul.json &
Custom systemd unit for the Consul server:
[root@node1 ~]# cat /usr/lib/systemd/system/consul.service
[Unit]
Description=consul
After=network.target
[Service]
Type=simple
PIDFile=/usr/local/consul/consul.pid
ExecStart=/usr/bin/consul agent -config-file=/etc/consul/consul.json
ExecReload=/usr/bin/consul reload
[Install]
WantedBy=multi-user.target
Custom systemd unit for the Consul client:
systemd service references:
https://www.linuxidc.com/Linux/2014-11/109232p2.htm
https://www.linuxidc.com/Linux/2014-11/109232.htm
[root@node1 consul]# cat consul.service
[Unit]
Description=consul
After=network.target
[Service]
Type=simple
PIDFile=/opt/consul-data/consul.pid
ExecStart=/opt/consul-agent/consul agent -join 172.16.37.39 -data-dir=/opt/consul-data -datacenter=zongbu
ExecReload=/opt/consul-agent/consul reload
[Install]
WantedBy=multi-user.target
Service management:
systemctl enable consul.service
systemctl status consul.service
service consul start
service consul stop
Server agent processes:
[root@node2 opt]# netstat -tulnp | grep consul
tcp 0 0 172.16.36.67:8300 0.0.0.0:* LISTEN 23384/consul
tcp 0 0 172.16.36.67:8301 0.0.0.0:* LISTEN 23384/consul
tcp 0 0 172.16.36.67:8302 0.0.0.0:* LISTEN 23384/consul
tcp6 0 0 :::8500 :::* LISTEN 23384/consul
tcp6 0 0 :::8600 :::* LISTEN 23384/consul
udp 0 0 172.16.36.67:8301 0.0.0.0:* 23384/consul
udp 0 0 172.16.36.67:8302 0.0.0.0:* 23384/consul
udp6 0 0 :::8600 :::* 23384/consul
Client agent processes:
[root@etcd ~]# netstat -tulnp | grep consul
tcp 0 0 127.0.0.1:8500 0.0.0.0:* LISTEN 29980/./consul
tcp 0 0 127.0.0.1:8600 0.0.0.0:* LISTEN 29980/./consul
tcp6 0 0 :::8301 :::* LISTEN 29980/./consul
udp 0 0 127.0.0.1:8600 0.0.0.0:* 29980/./consul
udp6 0 0 :::8301 :::* 29980/./consul
[root@etcd ~]#
What each port is for:
• 8500: client HTTP API and web UI
• 8600: client DNS interface
• 8400: legacy CLI RPC port
• 8300: server RPC, used between servers within the cluster
• 8301: intra-datacenter (LAN gossip) communication
• 8302: cross-datacenter (WAN gossip) communication
Check the leader role:
[root@node1 raft]# consul operator raft list-peers
Node ID Address State Voter RaftProtocol
node1 016a92e7-b9ff-1dd7-f758-38fb3b2a9088 172.16.37.39:8300 leader true 3
List all datacenters:
[root@node1 ~]# consul catalog datacenters -http-addr=IP:8500
zongbu
[root@node1 ~]#
List all nodes:
[root@node2 system]# consul catalog nodes
Node ID Address DC
node1 0ffe6841 172.16.36.56 dc1
node2 a6b9b6d7 172.16.36.67 dc1
node3 23eb28a2 172.16.36.50 dc1
List all services:
[root@node1 opt]# ./consul catalog services -http-addr=IP:8500
consul
List datacenters:
[root@node2 system]# consul catalog datacenters
dc1
zongbu
Query Consul nodes from the command line:
curl 172.16.37.35:8500/v1/catalog/nodes
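The /v1/catalog/nodes endpoint returns JSON. A sketch of extracting the node names from a captured response, with no jq dependency; the payload below is abbreviated and illustrative, not the real cluster output:

```shell
#!/bin/sh
# A trimmed sample of what curl .../v1/catalog/nodes returns.
NODES_JSON='[{"Node":"node1","Address":"172.16.36.56"},{"Node":"node2","Address":"172.16.36.67"}]'
# Pull out the Node fields with grep/cut.
echo "$NODES_JSON" | grep -o '"Node":"[^"]*"' | cut -d'"' -f4
```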
Joining a client node to the servers:
Join node 172.16.37.10 into the Consul cluster:
./consul join -wan clientIP -http-addr=http://leaderIP:8500
Problem 1: multiple network interfaces, causing an address conflict
Error to resolve: Multiple private IPv4 addresses found. Please configure one with 'bind' and/or 'advertise'.
[root@node1 ~]# /usr/bin/consul agent -config-file=/etc/consul/consul.json
==> Multiple private IPv4 addresses found. Please configure one with 'bind' and/or 'advertise'.
[root@node1 ~]# ps -ef | grep flanne
root 664 1 0 09:57 ? 00:00:00 /usr/bin/flanneld -etcd-endpoints=http://172.16.36.63:2379 -etcd-prefix=/k8s/network --iface=eth0
root 4721 710 0 11:08 pts/0 00:00:00 grep --color=auto flanne
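With the flannel interface present alongside the physical NIC, Consul sees multiple private IPv4 addresses and cannot choose one. Pinning the address in consul.json resolves it; the IP below is this article's node1 address, so substitute your own:

```json
{
  "bind_addr": "172.16.37.39"
}
```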
https://blog.csdn.net/yelllowcong/article/details/79602151
Problem 2: the systemd service would not start, because the consul binary path was written incorrectly
[root@node1]# systemctl enable consul.service
Failed to execute operation: Bad message
[root@node1]# sudo systemctl enable consul.service
Failed to execute operation: Bad message
[root@node1]# systemctl -f enable consul.service
Failed to execute operation: Bad message
[root@node1]# ls /usr/lib/systemd/system/consul.service
/usr/lib/systemd/system/consul.service
On inspection, the cause was found: the ExecStart path should be /opt/consul-agent/consul, but the consul-agent directory level had been left out.
[root@etcd ~]# ls /usr/lib/systemd/system/consul.service
/usr/lib/systemd/system/consul.service
[root@etcd ~]# cat /usr/lib/systemd/system/consul.service
[Unit]
Description=consul
After=network.target
[Service]
Type=simple
PIDFile=/opt/consul-data/consul.pid
ExecStart=/opt/consul-agent/consul agent -join 172.16.36.50 -data-dir=/opt/consul-data
ExecReload=/opt/consul-agent/consul reload
[Install]
WantedBy=multi-user.target
Follow-ups:
• Consul exposes service information via DNS or HTTP; there is no push notification, so consumers must poll for changes.
• How do Consul agents health-check services?
• https://www.cnblogs.com/wangzhisdu/p/7762715.html
• dig resolution: on Linux, the nslookup and dig commands are provided by the bind-utils package.
• Consul ACL control:
• http://www.xiaomastack.com/2016/06/11/cousnl-acl/
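On the health-check question above: a service can be registered with a check definition that the local agent runs periodically. A minimal sketch of such a service file (the service name, port, and URL are illustrative, not from this article):

```json
{
  "service": {
    "name": "web",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s",
      "timeout": "1s"
    }
  }
}
```

Dropped into the agent's -config-dir, this has the agent poll the HTTP endpoint every 10 seconds and mark the service unhealthy on failure.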
RPC protocol:
https://blog.csdn.net/wangyunpeng0319/article/details/78651998
Consul resources:
• Binary downloads: https://www.consul.io/downloa...
• Official documentation: https://www.consul.io/docs/in...
• API documentation: https://godoc.org/github.com/...
• Go API client source: github.com/hashicorp/consul/api
• Consul install docs: https://www.consul.io/docs/install/index.html