Setting up a highly available cluster with haproxy + keepalived on CentOS 7

Posted liyupi


1.1 Local operating system environment

CentOS 7, 64-bit

[root@lb03 ~]# cat /etc/centos-release

CentOS Linux release 7.5.1804 (Core)

       

[root@lb03 ~]# uname -r

3.10.0-862.el7.x86_64

[root@lb03 ~]# rpm -qa haproxy

haproxy-1.5.18-7.el7.x86_64

[root@lb03 ~]# nginx -V

nginx version: nginx/1.12.2

built by gcc 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC)

built with OpenSSL 1.0.2k-fips  26 Jan 2017

TLS SNI support enabled

 

  Backend RabbitMQ nodes: 192.168.25.73, 192.168.25.74 and 192.168.25.75 (the rabbitmq service runs on these, per the HAProxy backends below)

        HAProxy is installed on 192.168.25.71 and 192.168.25.72 to load-balance RabbitMQ for external clients

        Keepalived makes the two HAProxy nodes an active/standby pair (avoiding a single point of failure): 192.168.25.71 (master), 192.168.25.72 (backup), virtual IP (VIP) 192.168.25.229

 For RabbitMQ cluster configuration, see:

http://blog.csdn.net/sj349781478/article/details/78841382

http://blog.csdn.net/sj349781478/article/details/78845852

Chapter 2: Installing HAProxy

2.1 HAProxy overview

1) HAProxy provides high availability, load balancing and proxying for TCP- and HTTP-based applications, with virtual-host support. It is a free, fast and reliable solution.

2) HAProxy is particularly well suited to heavily loaded web sites, which typically need session persistence or layer-7 processing.

3) HAProxy runs comfortably on current hardware and can support tens of thousands of concurrent connections. Its operating mode makes it simple and safe to integrate into an existing architecture, while keeping your web servers off the public network.

4) HAProxy implements an event-driven, single-process model that supports very large numbers of concurrent connections. Multi-process and multi-threaded models are constrained by memory, by the system scheduler and by ubiquitous locking, and rarely handle thousands of concurrent connections well. The event-driven model avoids these problems by doing all of this work in user space, where resources and time can be managed more precisely. Its drawback is that such programs usually scale poorly across multiple cores, which is why they must be optimized to get more work out of every CPU cycle.



2.2 HAProxy installation and configuration

2.2.1 HAProxy configuration for proxying RabbitMQ

[root@lb01 haproxy]# cat haproxy.cfg

########### Global configuration ###########

global

#    log /dev/log    local0

#    log /dev/log    local1 notice

    log 127.0.0.1 local0 info

    chroot /var/lib/haproxy     # change the working directory

    stats socket /run/haproxy/admin.sock mode 660 level admin   # socket used for the stats/admin interface

    pidfile  /var/run/haproxy.pid   # where the haproxy pid is written; the user starting the process must be able to access it

    maxconn  4000                   # maximum number of connections (default 4000)

    user   haproxy                  # run as this user

    group   haproxy                 # run as this group

    daemon                          # fork one process and run in daemon mode

 

    # Default SSL material locations

    ca-base /etc/ssl/certs

    crt-base /etc/ssl/private

 

    # Default ciphers to use on SSL-enabled listening sockets.

    # For more information, see ciphers(1SSL). This list is from:

    #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/

    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS

    ssl-default-bind-options no-sslv3

 

########### Defaults ###########

defaults

    log global

    mode    http                                # default mode: tcp|http|health (tcp = layer 4, http = layer 7, health just returns OK)

    option  httplog                             # use the HTTP log format

    option  dontlognull                         # don't log "null" connections: probes from an upstream load
                                                # balancer or monitoring system that periodically connect,
                                                # fetch a fixed page, or scan a port just to see whether the
                                                # service is alive. The documentation advises against this
                                                # option when there is no upstream load balancer, since
                                                # malicious scans from the internet would then go unlogged

    timeout connect 5000                    # connection timeout

    timeout client  50000                   # client-side timeout

    timeout server  50000                   # server-side timeout

    option  httpclose       # actively close the HTTP channel after each request

    option  httplog         # HTTP log format

    #option  forwardfor      # set this if the backends need the real client IP; it can then be read from the HTTP headers

    option  redispatch      # if the server bound to a serverId goes down, force requests to another healthy server

    timeout connect 10000   # default 10 second timeout if a backend is not found

    maxconn     60000       # maximum number of connections

    retries     3           # consider the service unavailable after 3 failed connections (can be tuned per server below)

#    errorfile 400 /etc/haproxy/errors/400.http

#    errorfile 403 /etc/haproxy/errors/403.http

#    errorfile 408 /etc/haproxy/errors/408.http

#    errorfile 500 /etc/haproxy/errors/500.http

#    errorfile 502 /etc/haproxy/errors/502.http

#    errorfile 503 /etc/haproxy/errors/503.http

#    errorfile 504 /etc/haproxy/errors/504.http

####################################################################

listen http_front

        bind 0.0.0.0:1080 

        stats refresh 30s

        stats uri /haproxy?stats  

        stats realm Haproxy\ Manager

        stats auth admin:admin   

        #stats hide-version      

 

##################### The RabbitMQ management UI is also placed behind HAProxy #####################

listen rabbitmq_admin

    bind 0.0.0.0:8004

    server node1 192.168.25.73:15672

    server node2 192.168.25.74:15672

    server node3 192.168.25.75:15672

####################################################################

listen rabbitmq_cluster

    bind 0.0.0.0:80

    option tcplog

    mode tcp

    timeout client  3h

    timeout server  3h

    option          clitcpka

    balance roundrobin     

    #balance url_param userid

    #balance url_param session_id check_post 64

    #balance hdr(User-Agent)

    #balance hdr(host)

    #balance hdr(Host) use_domain_only

    #balance rdp-cookie

    #balance leastconn

    #balance source        # by client IP

    server   node1 192.168.25.73:5672 check inter 5s rise 2 fall 3   # AMQP port 5672 (15672 is the management UI, already proxied above)

    server   node2 192.168.25.74:5672 check inter 5s rise 2 fall 3

    server   node3 192.168.25.75:5672 check inter 5s rise 2 fall 3
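
With `balance roundrobin`, HAProxy hands successive connections to the `server` lines in turn. A minimal shell sketch of the unweighted rotation (an illustration only, not part of the setup; HAProxy does this internally):

```shell
#!/bin/bash
# Illustrate unweighted round-robin over the three backend nodes.
servers=(node1 node2 node3)
for i in 0 1 2 3 4 5; do
    echo "request $i -> ${servers[$((i % 3))]}"
done
```

With per-server `weight` values (as in the nginx backend later on), the rotation becomes proportional to the weights instead of strictly alternating.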

2.2.2 HAProxy proxying nginx

1. Install haproxy
# yum install haproxy -y

2. Edit the configuration file

 

[root@lb02 ~]# grep -Ev '^$|^#' /etc/haproxy/haproxy.cfg

 

global

    # to have these messages end up in /var/log/haproxy.log you will

    # need to:

    #

    # 1) configure syslog to accept network log events.  This is done

    #    by adding the '-r' option to the SYSLOGD_OPTIONS in

    #    /etc/sysconfig/syslog

    #

    # 2) configure local2 events to go to the /var/log/haproxy.log

    #   file. A line like the following can be added to

    #   /etc/sysconfig/syslog

    #

    #    local2.*                       /var/log/haproxy.log

    #

    log 127.0.0.1 local0 info

 

    #chroot      /var/lib/haproxy

    #pidfile     /var/run/haproxy.pid

    maxconn     4000

    user        haproxy

    group       haproxy

    daemon

 

    # turn on stats unix socket

    #stats socket /var/lib/haproxy/stats

 

defaults

    mode                    http

    log                     global

    option                  httplog

    option                  dontlognull

    option http-server-close

    option forwardfor       except 127.0.0.0/8

    option                  redispatch

    retries                 3

    timeout http-request    10s

    timeout queue           1m

    timeout connect         10s

    timeout client          1m

    timeout server          1m

    timeout http-keep-alive 10s

    timeout check           10s

    maxconn                 3000

 

frontend main

    bind *:80

    acl url_static       path_beg       -i /static /images /javascript /stylesheets

    acl url_static       path_end       -i .jpg .gif .png .css .js

 

    use_backend static          if url_static

    default_backend             nginx

 

backend static

    balance     roundrobin

    server      static 127.0.0.1:80 check

 

backend nginx

    balance     roundrobin

    server  nginx1 192.168.25.73:80 check inter 2000 fall 3 weight 30

    server  nginx2 192.168.25.74:80 check inter 2000 fall 3 weight 30

    server  nginx3 192.168.25.75:80 check inter 2000 fall 3 weight 30

3. Start
# haproxy -f /etc/haproxy/haproxy.cfg
4. Restart
# service haproxy restart

5. Check that haproxy is running
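
A quick hand check (a sketch; it assumes the process is named haproxy, as it is when installed from the CentOS package):

```shell
#!/bin/bash
# Report whether an haproxy process is currently running.
if pgrep -x haproxy >/dev/null 2>&1; then
    echo "haproxy is running"
else
    echo "haproxy is NOT running"
fi
```

`systemctl status haproxy` or `ps -ef | grep haproxy` give the same answer with more detail.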



6. Enable the haproxy stats page

Edit haproxy.cfg and add the following:

listen admin_stats

        stats   enable

        bind    *:9090          # listening IP/port

        mode    http            # layer-7 mode

        option  httplog

        log     global

        maxconn 10

        stats   refresh 30s     # auto-refresh interval of the stats page

        stats   uri /admin      # stats URI, i.e. ip:9090/admin

        stats   realm haproxy

        stats   auth admin:Redhat   # stats username and password

        stats   hide-version        # hide the HAProxy version

        stats   admin if TRUE       # after authenticating, nodes can be managed from the web UI

Save and exit, then restart: service haproxy restart

Then browse to http://192.168.25.72:9090/admin (username: admin, password: Redhat)

 

 



---------------------------------------------------------------------------------------------

Annotated example configuration [/usr/local/haproxy/haproxy.cfg]:

########### Global configuration ###########

global

  log 127.0.0.1 local0 # log everything to the local host via facility local0

  log 127.0.0.1 local1 notice # haproxy log level [error warning info debug]

  daemon # run haproxy in the background

  nbproc 1 # number of processes

  maxconn 4096 # default maximum number of connections; mind the ulimit -n limit

  #user haproxy # user to run haproxy as

  #group haproxy # group to run haproxy as

  #pidfile /var/run/haproxy.pid # haproxy PID file

  #ulimit-n 819200 # file-descriptor limit

  #chroot /usr/share/haproxy # chroot path

  #debug # haproxy debug level; only recommended when running a single process

  #quiet

 

######## Defaults ############

defaults

  log global

  mode http # default mode: tcp|http|health (tcp = layer 4, http = layer 7, health just returns OK)

  option httplog # use the HTTP log format

  option dontlognull # don't log health-check probes

  retries 2 # consider the server unavailable after two failed connections (can be tuned per server below)

  #option forwardfor # set this if the backends need the real client IP; it can then be read from the HTTP headers

  option httpclose # actively close the HTTP channel after each request (older haproxy versions did not support keep-alive and could only emulate it)

  #option redispatch # if the server bound to a serverId goes down, force requests to another healthy server (deprecated)

  option abortonclose # under high load, abort requests that have been queued for a long time

  maxconn 4096 # default maximum number of connections

  timeout connect 5000ms # connection timeout

  timeout client 30000ms # client-side timeout

  timeout server 30000ms # server-side timeout

  #timeout check 2000 # health-check timeout

  #timeout http-keep-alive 10s # default keep-alive timeout

  #timeout http-request 10s # default HTTP request timeout

  #timeout queue 1m # default queue timeout

  balance roundrobin # default load-balancing method: round robin

  #balance source # default load-balancing method: by source IP, similar to nginx ip_hash

  #balance leastconn # default load-balancing method: least connections

 

######## Stats page ########

listen stats

  bind 0.0.0.0:1080 # a listen section combines a frontend and a backend; name the section as you like

  mode http # layer-7 HTTP mode

  option httplog # use the HTTP log format

  #log 127.0.0.1 local0 err # log errors only

  maxconn 10 # maximum number of connections

  stats refresh 30s # auto-refresh interval of the stats page

  stats uri /stats # stats page URL

  stats realm XingCloud\ Haproxy # prompt text shown in the stats login dialog

  stats auth admin:admin # stats page user and password (admin); several users may be listed

  stats auth Frank:Frank # stats page user and password (Frank)

  stats hide-version # hide the HAProxy version on the stats page

  stats admin if TRUE # allow enabling/disabling backend servers from the page (haproxy 1.4.9 and later)

 

######## HAProxy error pages ########

#errorfile 403 /home/haproxy/haproxy/errorfiles/403.http

#errorfile 500 /home/haproxy/haproxy/errorfiles/500.http

#errorfile 502 /home/haproxy/haproxy/errorfiles/502.http

#errorfile 503 /home/haproxy/haproxy/errorfiles/503.http

#errorfile 504 /home/haproxy/haproxy/errorfiles/504.http

 

######## Frontend ##############

frontend main

  bind *:80 # bind *:80 is recommended here; binding a fixed address breaks high availability, because the service becomes unreachable once the VIP moves to the other machine

  acl web hdr(host) -i www.abc.com  # "web" is the ACL name, -i matches case-insensitively; requests whose Host is www.abc.com trigger the web rule

  acl img hdr(host) -i img.abc.com  # requests whose Host is img.abc.com trigger the img rule

  use_backend webserver if web   # if the web rule matched (www.abc.com), send the request to the webserver backend

  use_backend imgserver if img   # if the img rule matched (img.abc.com), send the request to the imgserver backend

  default_backend dynamic # otherwise fall through to the dynamic backend

 

########backend后端配置##############

backend webserver #webserver作用域

  mode http

  balance roundrobin #balance roundrobin 负载轮询,balance source 保存session值,支持static-rr,leastconn,first,uri等参数

  option httpchk /index.html HTTP/1.0 #健康检查, 检测文件,如果分发到后台index.html访问不到就不再分发给它

  server web1 10.16.0.9:8085 cookie 1 weight 5 check inter 2000 rise 2 fall 3

  server web2 10.16.0.10:8085 cookie 2 weight 3 check inter 2000 rise 2 fall 3

  #cookie 1表示serverid为1,check inter 1500 是检测心跳频率

  #rise 2是2次正确认为服务器可用,fall 3是3次失败认为服务器不可用,weight代表权重

 

backend imgserver

  mode http

  option httpchk /index.php

  balance roundrobin

  server img01 192.168.137.101:80 check inter 2000 fall 3

  server img02 192.168.137.102:80 check inter 2000 fall 3

 

backend dynamic

  balance roundrobin

  server test1 192.168.1.23:80 check maxconn 2000

  server test2 192.168.1.24:80 check maxconn 2000

 

 

listen tcptest

  bind 0.0.0.0:5222

  mode tcp

  option tcplog # use the TCP log format

  balance source

  #log 127.0.0.1 local0 debug

  server s1 192.168.100.204:7222 weight 1

  server s2 192.168.100.208:7222 weight 1

---------------------------------------------------------------------------------------------

Chapter 3: Installing and configuring Keepalived

3.1 About Keepalived

Keepalived is free, open-source software written in C that provides switching-like functionality at layers 3, 4 and 7. Its two main features are load balancing and high availability: load balancing relies on the Linux virtual server kernel module (ipvs), while high availability is implemented with the VRRP protocol, which fails services over between machines.

 

3.2 Firewall configuration

3.2.1 Letting the keepalived VIP work with firewalld on CentOS 7

Allow VRRP traffic (multicast group 224.0.0.18) so the keepalived nodes can see each other's advertisements:

firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 --in-interface eth0 --destination 224.0.0.18 --protocol vrrp -j ACCEPT

 

3.2.2 Common firewalld operations

Syntax for enabling a port/protocol combination in a zone:

firewall-cmd [--zone=<zone>] --add-port=<port>[-<port>]/<protocol> [--timeout=<seconds>]

This enables the given port/protocol combination.

The port may be a single port <port> or a range <port>-<port>.

The protocol may be tcp or udp.

 

Check firewalld status

systemctl status firewalld

Start firewalld

systemctl start firewalld

Open ports

# --permanent makes the rule persistent; without it, the rule is lost after a restart

firewall-cmd --zone=public --add-port=80/tcp --permanent

firewall-cmd --zone=public --add-port=9090/tcp --permanent

firewall-cmd --zone=public --add-port=1000-2000/tcp --permanent

 

Allow VRRP for the keepalived VIP

firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 --in-interface eth0 --destination 224.0.0.18 --protocol vrrp -j ACCEPT

 

Reload the firewall

firewall-cmd --reload

 

Query a port

firewall-cmd --zone=public --query-port=80/tcp

 

Remove a port

firewall-cmd --zone=public --remove-port=80/tcp --permanent

 

iptables firewall

Alternatively, revert to traditional management with iptables:

systemctl stop firewalld

systemctl mask firewalld

 

Install iptables-services

yum install iptables-services

 

Enable at boot

systemctl enable iptables

 

Service commands

systemctl stop iptables

systemctl start iptables

systemctl restart iptables

systemctl reload iptables

 

Save the rules

service iptables save

 

To open a port, add a rule to /etc/sysconfig/iptables:

-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT

3.3 Installing keepalived

yum install keepalived -y

systemctl enable keepalived

 

3.4 Configuration files

3.4.1 Master node

3.4.1.1  keepalived.conf

[root@lb01 ~]# cat /etc/keepalived/keepalived.conf

global_defs {
   router_id LB01
}

vrrp_script chk_haproxy {
    script "/etc/keepalived/scripts/haproxy_check.sh"
    interval 2
    timeout 2
    fall 3
}

vrrp_instance haproxy {
    state MASTER
    interface eth0
    virtual_router_id 1
    priority 100
    authentication {
        auth_type PASS
        auth_pass password
    }
    virtual_ipaddress {
        192.168.25.229
    }
    track_script {
        chk_haproxy
    }
    notify_master "/etc/keepalived/scripts/haproxy_master.sh"
}

3.4.1.2  haproxy_check.sh

[root@lb01 ~]# cat /etc/keepalived/scripts/haproxy_check.sh

#!/bin/bash
LOGFILE="/var/log/keepalived-haproxy-state.log"
date >> $LOGFILE
if [ `ps -C haproxy --no-header | wc -l` -eq 0 ]; then
    echo "fail: check_haproxy status" >> $LOGFILE
    exit 1
else
    echo "success: check_haproxy status" >> $LOGFILE
    exit 0
fi
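
A slightly more defensive variant of the same idea, written as a function so the process name and log path can be swapped out (a sketch; `pgrep` replaces the `ps -C ... | wc -l` pipeline, everything else follows the script above):

```shell
#!/bin/bash
# check_proc NAME LOGFILE: log the result and return 0 if NAME is running, 1 if not.
check_proc() {
    local proc="${1:-haproxy}"
    local logfile="${2:-/var/log/keepalived-haproxy-state.log}"
    date >> "$logfile"
    if pgrep -x "$proc" >/dev/null 2>&1; then
        echo "success: check_${proc} status" >> "$logfile"
        return 0
    else
        echo "fail: check_${proc} status" >> "$logfile"
        return 1
    fi
}
```

In the real keepalived script you would end with `check_proc haproxy; exit $?` so the exit code reaches `vrrp_script`.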

3.4.1.3  haproxy_master.sh

[root@lb01 ~]# cat /etc/keepalived/scripts/haproxy_master.sh

#!/bin/bash
LOGFILE="/var/log/keepalived-haproxy-state.log"
echo "Being Master ..." >> $LOGFILE

 

3.4.2 Backup node

3.4.2.1  keepalived.conf

[root@lb02 ~]# cat /etc/keepalived/keepalived.conf

global_defs {
   router_id LB02
}

vrrp_script chk_haproxy {
    script "/etc/keepalived/scripts/haproxy_check.sh"
    interval 2
    timeout 2
    fall 3
}

vrrp_instance haproxy {
    state BACKUP
    interface eth0
    virtual_router_id 1
    priority 50
    authentication {
        auth_type PASS
        auth_pass password
    }
    virtual_ipaddress {
        192.168.25.229
    }
    track_script {
        chk_haproxy
    }
    notify_master "/etc/keepalived/scripts/haproxy_master.sh"
}

 

[root@lb02 ~]#

3.4.2.2  haproxy_check.sh

[root@lb02 ~]# cat /etc/keepalived/scripts/haproxy_check.sh

#!/bin/bash
LOGFILE="/var/log/keepalived-haproxy-state.log"
date >> $LOGFILE
if [ `ps -C haproxy --no-header | wc -l` -eq 0 ]; then
    echo "fail: check_haproxy status" >> $LOGFILE
    exit 1
else
    echo "success: check_haproxy status" >> $LOGFILE
    exit 0
fi

[root@lb02 ~]#

3.4.2.3  haproxy_master.sh

[root@lb02 ~]# cat /etc/keepalived/scripts/haproxy_master.sh

#!/bin/bash
LOGFILE="/var/log/keepalived-haproxy-state.log"
echo "Being Master ..." >> $LOGFILE

[root@lb02 ~]#

3.5 Starting keepalived

3.5.1 Start the service

# keepalived -D

systemctl start keepalived

3.5.2 Check that keepalived is running

 

[root@lb01 ~]# ps -ef|grep kee

root     24290     1  0 10:59 ?        00:00:00 /usr/sbin/keepalived -D

root     24291 24290  0 10:59 ?        00:00:00 /usr/sbin/keepalived -D

root     24292 24290  0 10:59 ?        00:00:00 /usr/sbin/keepalived -D

root     28622 13717  0 11:17 pts/1    00:00:00 grep --color=auto kee

[root@lb01 ~]#

3.5.3 VIP check
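
The VIP should appear among the master's local addresses. A hedged check, using the VIP assumed throughout (192.168.25.229):

```shell
#!/bin/bash
# Report whether the keepalived VIP is bound on this node.
VIP="192.168.25.229"
if ip -4 addr 2>/dev/null | grep -q "inet ${VIP}/"; then
    echo "VIP ${VIP} is present"
else
    echo "VIP ${VIP} is absent"
fi
```

On a healthy pair the VIP is present on exactly one node at a time.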




3.6 Failover testing

3.6.1 Stop keepalived on the master

Stop the keepalived service on the master, check that the VIP moves to the backup and that the service stays up, then restore the original state (this verifies keepalived high availability).

Master

Keepalived log

 

[root@lb01 ~]# systemctl stop keepalived

Nov 22 11:19:49 lb01 systemd: Stopping LVS and VRRP High Availability Monitor...

Nov 22 11:19:49 lb01 Keepalived[24290]: Stopping

Nov 22 11:19:49 lb01 Keepalived_vrrp[24292]: VRRP_Instance(haproxy) sent 0 priority

Nov 22 11:19:49 lb01 Keepalived_vrrp[24292]: VRRP_Instance(haproxy) removing protocol VIPs.

Nov 22 11:19:49 lb01 Keepalived_healthcheckers[24291]: Stopped

Nov 22 11:19:50 lb01 Keepalived_vrrp[24292]: Stopped

Nov 22 11:19:50 lb01 Keepalived[24290]: Stopped Keepalived v1.3.5 (03/19,2017), git commit v1.3.5-6-g6fa32f2

Nov 22 11:19:50 lb01 systemd: Stopped LVS and VRRP High Availability Monitor.

 

 

Backup

[root@lb02 ~]# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

       valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

    link/ether 52:54:00:0b:29:c6 brd ff:ff:ff:ff:ff:ff

    inet 192.168.25.72/16 brd 192.168.255.255 scope global eth0

       valid_lft forever preferred_lft forever

    inet 192.168.25.229/32 scope global eth0

       valid_lft forever preferred_lft forever

    inet6 fe80::5054:ff:fe0b:29c6/64 scope link

       valid_lft forever preferred_lft forever

3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

    link/ether 52:54:00:1a:83:4d brd ff:ff:ff:ff:ff:ff

    inet6 fe80::5054:ff:fe1a:834d/64 scope link

       valid_lft forever preferred_lft forever

Keepalived log

Nov 22 11:19:51 lb02 Keepalived_vrrp[26670]: VRRP_Instance(haproxy) Entering MASTER STATE

Nov 22 11:19:51 lb02 Keepalived_vrrp[26670]: VRRP_Instance(haproxy) setting protocol VIPs.

Nov 22 11:19:51 lb02 Keepalived_vrrp[26670]: Sending gratuitous ARP on eth0 for 192.168.25.229

Nov 22 11:19:51 lb02 Keepalived_vrrp[26670]: VRRP_Instance(haproxy) Sending/queueing gratuitous ARPs on eth0 for 192.168.25.229

Nov 22 11:19:51 lb02 Keepalived_vrrp[26670]: Sending gratuitous ARP on eth0 for 192.168.25.229

Nov 22 11:19:51 lb02 Keepalived_vrrp[26670]: Sending gratuitous ARP on eth0 for 192.168.25.229

Nov 22 11:19:51 lb02 Keepalived_vrrp[26670]: Sending gratuitous ARP on eth0 for 192.168.25.229

Nov 22 11:19:51 lb02 Keepalived_vrrp[26670]: Sending gratuitous ARP on eth0 for 192.168.25.229

Nov 22 11:19:56 lb02 Keepalived_vrrp[26670]: Sending gratuitous ARP on eth0 for 192.168.25.229

Nov 22 11:19:56 lb02 Keepalived_vrrp[26670]: VRRP_Instance(haproxy) Sending/queueing gratuitous ARPs on eth0 for 192.168.25.229

Nov 22 11:19:56 lb02 Keepalived_vrrp[26670]: Sending gratuitous ARP on eth0 for 192.168.25.229

Nov 22 11:19:56 lb02 Keepalived_vrrp[26670]: Sending gratuitous ARP on eth0 for 192.168.25.229

Nov 22 11:19:56 lb02 Keepalived_vrrp[26670]: Sending gratuitous ARP on eth0 for 192.168.25.229

Nov 22 11:19:56 lb02 Keepalived_vrrp[26670]: Sending gratuitous ARP on eth0 for 192.168.25.229

3.6.2 Stop HAProxy on the master

Stop HAProxy on the master, check that the VIP moves to the backup and that the service stays up, then restore the original state (this verifies HAProxy high availability).

Master

System log

Nov 22 11:29:43 lb01 systemd: Stopping HAProxy Load Balancer...

Nov 22 11:29:43 lb01 systemd: haproxy.service: main process exited, code=exited, status=143/n/a

Nov 22 11:29:43 lb01 systemd: Stopped HAProxy Load Balancer.

Nov 22 11:29:43 lb01 systemd: Unit haproxy.service entered failed state.

Nov 22 11:29:43 lb01 systemd: haproxy.service failed.

Nov 22 11:29:44 lb01 Keepalived_vrrp[29295]: /etc/keepalived/scripts/haproxy_check.sh exited with status 1

Nov 22 11:29:46 lb01 Keepalived_vrrp[29295]: /etc/keepalived/scripts/haproxy_check.sh exited with status 1

Nov 22 11:29:48 lb01 Keepalived_vrrp[29295]: /etc/keepalived/scripts/haproxy_check.sh exited with status 1

Nov 22 11:29:48 lb01 Keepalived_vrrp[29295]: VRRP_Script(chk_haproxy) failed

Nov 22 11:29:48 lb01 Keepalived_vrrp[29295]: VRRP_Instance(haproxy) Entering FAULT STATE

Nov 22 11:29:48 lb01 Keepalived_vrrp[29295]: VRRP_Instance(haproxy) removing protocol VIPs.

Nov 22 11:29:48 lb01 Keepalived_vrrp[29295]: VRRP_Instance(haproxy) Now in FAULT state

Nov 22 11:29:50 lb01 Keepalived_vrrp[29295]: /etc/keepalived/scripts/haproxy_check.sh exited with status 1

Nov 22 11:29:52 lb01 Keepalived_vrrp[29295]: /etc/keepalived/scripts/haproxy_check.sh exited with status 1

Nov 22 11:29:54 lb01 Keepalived_vrrp[29295]: /etc/keepalived/scripts/haproxy_check.sh exited with status 1

 

Backup

System log

Nov 22 11:29:49 lb02 Keepalived_vrrp[26670]: VRRP_Instance(haproxy) Transition to MASTER STATE

Nov 22 11:29:50 lb02 Keepalived_vrrp[26670]: VRRP_Instance(haproxy) Entering MASTER STATE

Nov 22 11:29:50 lb02 Keepalived_vrrp[26670]: VRRP_Instance(haproxy) setting protocol VIPs.

Nov 22 11:29:50 lb02 Keepalived_vrrp[26670]: Sending gratuitous ARP on eth0 for 192.168.25.229

Nov 22 11:29:50 lb02 Keepalived_vrrp[26670]: VRRP_Instance(haproxy) Sending/queueing gratuitous ARPs on eth0 for 192.168.25.229

Nov 22 11:29:50 lb02 Keepalived_vrrp[26670]: Sending gratuitous ARP on eth0 for 192.168.25.229

Nov 22 11:29:50 lb02 Keepalived_vrrp[26670]: Sending gratuitous ARP on eth0 for 192.168.25.229

Nov 22 11:29:50 lb02 Keepalived_vrrp[26670]: Sending gratuitous ARP on eth0 for 192.168.25.229

Nov 22 11:29:50 lb02 Keepalived_vrrp[26670]: Sending gratuitous ARP on eth0 for 192.168.25.229

Nov 22 11:29:55 lb02 Keepalived_vrrp[26670]: Sending gratuitous ARP on eth0 for 192.168.25.229

Nov 22 11:29:55 lb02 Keepalived_vrrp[26670]: VRRP_Instance(haproxy) Sending/queueing gratuitous ARPs on eth0 for 192.168.25.229

Nov 22 11:29:55 lb02 Keepalived_vrrp[26670]: Sending gratuitous ARP on eth0 for 192.168.25.229

Nov 22 11:29:55 lb02 Keepalived_vrrp[26670]: Sending gratuitous ARP on eth0 for 192.168.25.229

Nov 22 11:29:55 lb02 Keepalived_vrrp[26670]: Sending gratuitous ARP on eth0 for 192.168.25.229

Nov 22 11:29:55 lb02 Keepalived_vrrp[26670]: Sending gratuitous ARP on eth0 for 192.168.25.229

VIP check

Master

[root@lb01 ~]# ip a|grep 192.168.25.229

[root@lb01 ~]#

Backup

[root@lb02 ~]#  ip a|grep 192.168.25.229

    inet 192.168.25.229/32 scope global eth0

3.6.3 Stop backend server nginx1 and check that the service stays up (this verifies HAProxy health checking)

[root@lb03 ~]# systemctl stop nginx


Nov 22 11:34:50 localhost haproxy[31563]: Server nginx/nginx1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.

[root@lb03 ~]# systemctl start nginx

Nov 22 11:36:10 localhost haproxy[31563]: Server nginx/nginx1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
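
During any of these failover tests, traffic through the VIP should keep answering while a node goes down. A simple probe loop (a sketch; the VIP is the one assumed above, and curl prints 000 when nothing answers at all):

```shell
#!/bin/bash
# Probe the VIP a few times and print the HTTP status code each time.
VIP="192.168.25.229"
for i in 1 2 3; do
    code=$(curl -s -m 2 -o /dev/null -w '%{http_code}' "http://${VIP}/" || true)
    echo "probe $i: HTTP ${code}"
done
```

Run this in a second terminal while stopping keepalived, haproxy or a backend nginx; a healthy failover shows at most a brief run of non-200 codes before responses resume.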

Keepalived reference: http://blog.51cto.com/lanlian/1303195

HAProxy stats page reference: http://blog.csdn.net/dylan_csdn/article/details/51261421

 
