Lightweight High-Availability Software: keepalived

    keepalived is a piece of software written in C whose goal is to provide simple and robust high availability and load balancing for Linux systems and Linux-based infrastructure. It implements layer-4 load balancing on top of the Linux kernel's IPVS module, and it provides service high availability based on the VRRP protocol.


I. The VRRP Protocol


   VRRP (Virtual Router Redundancy Protocol) is a fault-tolerance protocol. Typically, every host on a network is configured with a default route, so packets destined for addresses outside the local segment are sent via that default route to a router, RouterA, which is how the hosts reach external networks. If RouterA fails, every host on the segment that uses RouterA as its default next hop loses outside connectivity; this is a single point of failure. VRRP was designed to solve exactly this problem.

   VRRP organizes a group of routers on a LAN into one virtual router, also called a backup group. This virtual router has its own IP address (the VIP, which is the default gateway of the other machines on the LAN) and MAC address (the VMAC). The physical router currently holding this IP acts as the master and actually answers ARP requests and forwards packets, while the other routers in the group act as backups and stand by. The master sends multicast advertisements; when a backup receives no VRRP packets within the timeout, it assumes the master is down, and a new master is elected among the backups according to VRRP priority so that routing service for the hosts on the network continues. In this way the hosts keep communicating with external networks without interruption.

   Every router has a priority between 1 and 255; the one with the highest priority becomes the master, and if priorities are equal, the router with the larger IP address wins. By lowering the master's priority you can let a router currently in the backup state preempt the master role and take over the virtual IP.
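
   To watch this election mechanism in action, you can capture the advertisements the current master multicasts (to 224.0.0.18, IP protocol 112). A minimal sketch, assuming the VRRP traffic is on eth0 and tcpdump is installed:

# VRRP advertisements from the current master, normally sent about once per second
tcpdump -i eth0 -nn 'ip proto 112'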


II. Components of keepalived


   keepalived consists of three main modules:

     core: the core of keepalived, responsible for starting and maintaining the main process, loading and parsing the global configuration file, and so on;
     check: performs health checking and implements the common health-check methods;
     vrrp: implements the VRRP protocol;
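
     Once keepalived is running, this module split is visible as one parent process plus two child processes, which is also why the log excerpts later in this article are tagged Keepalived_healthcheckers and Keepalived_vrrp. A quick way to confirm (a sketch):

ps -ef | grep '[k]eepalived'   #typically shows three processes: the parent (core) and the two children for the check and vrrp modules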


III. Use Cases for keepalived

     In theory keepalived can provide high availability for services such as mysqld or httpd, but making those services highly available requires shared storage, which keepalived can only manage through extra commands or scripts; in that situation its capabilities clearly fall short of heartbeat or corosync. keepalived is best suited to lightweight high-availability scenarios, such as protecting an nginx or haproxy reverse proxy or ipvs, none of which needs shared storage. When it provides HA for ipvs, it can also use several kinds of health checks to learn the state of the back-end real servers and maintain and manage the server pool dynamically and adaptively.


IV. Using keepalived to Provide HA for an nginx Reverse Proxy

  1. Lab topology

[Figure: lab topology diagram]


  2. Install httpd on node3; install nginx on node1 and node2 and configure them as reverse proxies to node3. Also make sure the clocks of the two HA nodes are synchronized (a time-synchronization sketch follows the commands below).

[[email protected] ~]# yum -y install httpd
...
[[email protected] ~]# vim /var/www/html/test.html  #create a test page

hello,keepalived
[[email protected] ~]# service httpd start
...
[[email protected] ~]# yum -y install nginx;ssh [email protected] 'yum -y install nginx'
...
[[email protected] ~]# vim /etc/nginx/conf.d/default.conf
...
    location ~ \.html$ {
        proxy_pass http://192.168.30.13;
    }
...
[[email protected] ~]# scp /etc/nginx/conf.d/default.conf [email protected]:/etc/nginx/conf.d/
...
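
   The time synchronization mentioned in step 2 can be handled with whatever your environment already provides. As a minimal sketch, run a one-shot ntpdate against an NTP server reachable from both nodes (the address below is only an example) on node1 and node2, and repeat it from cron if you want the clocks to stay in step:

ntpdate 192.168.30.1   #example NTP server; replace with one available on your network
echo '*/10 * * * * /usr/sbin/ntpdate 192.168.30.1 &> /dev/null' >> /var/spool/cron/root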

  3. Install and configure keepalived on node1 and node2

      yum -y install keepalived

      Main configuration file: /etc/keepalived/keepalived.conf

      Configuration file reference: man keepalived.conf

      keepalived writes its log messages to /var/log/messages

[[email protected] ~]# yum -y install keepalived;ssh [email protected] 'yum -y install keepalived'
...
[[email protected] ~]# rpm -ql keepalived
/etc/keepalived
/etc/keepalived/keepalived.conf   #configuration file
/etc/rc.d/init.d/keepalived
/etc/sysconfig/keepalived
/usr/bin/genhash
/usr/libexec/keepalived
/usr/sbin/keepalived
...
/usr/share/doc/keepalived-1.2.13/samples   #this directory contains sample files for reference
...
[[email protected] ~]# ls /usr/share/doc/keepalived-1.2.13/samples
keepalived.conf.fwmark         keepalived.conf.misc_check_arg  keepalived.conf.status_code           keepalived.conf.vrrp.localcheck        keepalived.conf.vrrp.sync
keepalived.conf.HTTP_GET.port  keepalived.conf.quorum          keepalived.conf.track_interface       keepalived.conf.vrrp.lvs_syncd         sample.misccheck.smbcheck.sh
keepalived.conf.inhibit        keepalived.conf.sample          keepalived.conf.virtualhost           keepalived.conf.vrrp.routes
keepalived.conf.IPv6           keepalived.conf.SMTP_CHECK      keepalived.conf.virtual_server_group  keepalived.conf.vrrp.scripts
keepalived.conf.misc_check     keepalived.conf.SSL_GET         keepalived.conf.vrrp                keepalived.conf.vrrp.static_ipaddress
[[email protected] ~]# cd /etc/keepalived/
[[email protected] keepalived]# mv keepalived.conf keepalived.conf.back
[[email protected] keepalived]# vim keepalived.conf

! Configuration File for keepalived   #in a keepalived configuration file, lines beginning with "!" are comments

global_defs {    #this section configures who gets notified when failures occur, plus the machine identifier
   notification_email {   #recipients keepalived emails when an event (failover, fault) occurs; one address per line for multiple recipients
      [email protected]   #recipient
      [email protected]
   }
   notification_email_from [email protected]   #sender
   smtp_connect_timeout 10   #timeout for connecting to the smtp server
   smtp_server 127.0.0.1   #smtp server address
   router_id nginx-node1   #machine identifier; usually the hostname, but it does not have to be
}

vrrp_script chk_nginx {   #health check
    script "killall -0 nginx"   #signal 0 only tests whether the process exists; a non-zero exit status means the check failed
    interval 1   #interval between checks
    weight -2   #subtract 2 from the priority once the check is considered failed
    fall 3   #number of consecutive failures before the check is considered failed
    rise 1  #number of consecutive successes before the check is considered healthy
}

# Note: the health check configured above is limited: it only tells whether the nginx process is running, and when the service misbehaves in other ways it cannot by itself trigger a proper, sensible handover of the resources. This block can be left out.

vrrp_script chk_mantaince_down {
   script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
# lets you demote the node by manually creating a file
   interval 1
   weight -2
   fall 3
   rise 1
}

vrrp_instance VI_1 {   #define a vrrp instance; several may be defined, and each instance name must be unique
    interface eth0   #interface the virtual IP is bound to
    state MASTER   #initial state of the node; it is not decisive, since once keepalived is running the role is determined by node priority
    priority 100   #priority, 1-255
    virtual_router_id 11   #virtual router id, used to tell apart the VRRP multicast traffic of different instances
    garp_master_delay 1   #how long after switching to master to refresh ARP caches (gratuitous ARP); default 5 seconds

    authentication {   #authentication section; supported types are PASS and AH (IPsec)
        auth_type PASS
        auth_pass magedu
    }
    track_interface {   #track the state of these interfaces
       eth0
    }
    virtual_ipaddress {
        192.168.30.30/24 dev eth0 label eth0:0   #virtual IP address; it gets configured on the master
    }
    track_script {   #reference the vrrp scripts defined above by the names given in vrrp_script
        chk_nginx
        chk_mantaince_down
    }

    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
    # notify_master/backup/fault: scripts run when the node enters the master/backup/fault state respectively
}

# Other commonly used parameters:
    nopreempt: disable preemption; a node with a lower priority is allowed to stay master even after a higher-priority node recovers
       To configure non-preemptive mode, add the following in the configuration file of the higher-priority node:
           state BACKUP
           nopreempt
    use_vmac: whether to use the VRRP virtual MAC address
    notify: a script that is invoked on every state transition
    smtp_alert: enable email notifications (sent using the mail settings from the global section)
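
   As the note above says, "killall -0 nginx" only proves that an nginx process exists; it says nothing about whether the proxy actually answers requests. If you want a stronger check, one possible approach (a sketch, not part of the original configuration; the path /etc/keepalived/chk_nginx.sh is just an example) is to point the script directive of vrrp_script chk_nginx at a small executable wrapper that performs an HTTP request against the local nginx:

#!/bin/bash
# /etc/keepalived/chk_nginx.sh (example path): succeed only if the local nginx answers an HTTP request within 2 seconds
curl -s -o /dev/null --max-time 2 http://127.0.0.1/ && exit 0
exit 1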

[[email protected] keepalived]# vim notify.sh   #create a notification script

#!/bin/bash
# Author: MageEdu <[email protected]>
# description: An example of notify script
# 

vip=192.168.30.30
contact='[email protected]'

notify() {
    mailsubject="`hostname` to be $1: $vip floating"
    mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1"
    echo $mailbody | mail -s "$mailsubject" $contact
}

case "$1" in
    master)
        notify master
        /etc/rc.d/init.d/nginx start
        exit 0
    ;;
    backup)
        notify backup
        /etc/rc.d/init.d/nginx stop
        exit 0
    ;;
    fault)
        notify fault
        /etc/rc.d/init.d/nginx stop
        exit 0
    ;;
    *)
        echo "Usage: `basename $0` {master|backup|fault}"
        exit 1
    ;;
esac
[[email protected] keepalived]# chmod +x notify.sh
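# Note: the script can be exercised by hand before keepalived uses it, to confirm that mail delivery and the nginx
# init script both work (a sketch; calling it with "master" really starts nginx and sends a "... to be master" mail):
#     ./notify.sh master
#     mail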
[[email protected] keepalived]# scp keepalived.conf notify.sh [email protected]:/etc/keepalived/
[[email protected] ~]# ls /etc/keepalived/
keepalived.conf  keepalived.conf.back  notify.sh
[[email protected] ~]# vim /etc/keepalived/keepalived.conf   #make the necessary changes to the keepalived configuration on node2
...
router_id nginx-node2
...
state BACKUP
priority 99   #the backup node has a lower priority
...

[[email protected] ~]# service keepalived start   #start the backup node first
Starting keepalived:                           [  OK  ]
[[email protected] ~]# tail -f /var/log/messages
...
May 26 21:33:52 node2 Keepalived_vrrp[46904]: VRRP_Instance(VI_1) Entering BACKUP STATE
#initially enters BACKUP state, as set in the configuration file
May 26 21:33:52 node2 Keepalived_vrrp[46904]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
May 26 21:33:52 node2 Keepalived_vrrp[46904]: VRRP_Script(chk_mantaince_down) succeeded
May 26 21:33:56 node2 Keepalived_vrrp[46904]: VRRP_Instance(VI_1) Transition to MASTER STATE
#because keepalived on node1 is not running yet, node2 transitions to master
May 26 21:33:57 node2 Keepalived_vrrp[46904]: VRRP_Instance(VI_1) Entering MASTER STATE
May 26 21:33:57 node2 Keepalived_vrrp[46904]: VRRP_Instance(VI_1) setting protocol VIPs.
May 26 21:33:57 node2 Keepalived_healthcheckers[46902]: Netlink reflector reports IP 192.168.30.30 added
May 26 21:33:57 node2 Keepalived_vrrp[46904]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.30.30
May 26 21:33:57 node2 Keepalived_vrrp[46904]: VRRP_Script(chk_nginx) succeeded
May 26 21:33:58 node2 Keepalived_vrrp[46904]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.30.30
...
[[email protected] keepalived]# ip addr show   #the vip has been configured
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:bd:68:23 brd ff:ff:ff:ff:ff:ff
    inet 192.168.30.20/24 brd 192.168.30.255 scope global eth0
    inet 192.168.30.30/24 scope global secondary eth0:0
    inet6 fe80::20c:29ff:febd:6823/64 scope link 
       valid_lft forever preferred_lft forever
[[email protected] ~]# service keepalived start
Starting keepalived:                                       [  OK  ]
[[email protected] ~]# tail -f /var/log/messages
...
May 26 21:34:37 node1 Keepalived_vrrp[43127]: VRRP_Instance(VI_1) Transition to MASTER STATE
May 26 21:34:37 node1 Keepalived_vrrp[43127]: VRRP_Instance(VI_1) Received lower prio advert, forcing new election
#received an advertisement with a lower priority, forcing a new election
May 26 21:34:38 node1 Keepalived_vrrp[43127]: VRRP_Instance(VI_1) Entering MASTER STATE
#entering master state
May 26 21:34:38 node1 Keepalived_vrrp[43127]: VRRP_Instance(VI_1) setting protocol VIPs.
#configuring the vip
May 26 21:34:38 node1 Keepalived_healthcheckers[43126]: Netlink reflector reports IP 192.168.30.30 added
May 26 21:34:38 node1 Keepalived_vrrp[43127]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.30.30
May 26 21:34:39 node1 Keepalived_vrrp[43127]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.30.30
[[email protected] ~]# ip addr show   #node1 has preempted and taken over the vip
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:40:35:9d brd ff:ff:ff:ff:ff:ff
    inet 192.168.30.10/24 brd 192.168.30.255 scope global eth0
    inet 192.168.30.30/24 scope global secondary eth0:0
    inet6 fe80::20c:29ff:fe40:359d/64 scope link 
       valid_lft forever preferred_lft forever
[[email protected] ~]# service nginx status   #nginx is up and running
nginx (pid  53134) is running...
[[email protected] ~]# mail   #the role-change notification mail has arrived
Heirloom Mail version 12.4 7/29/08.  Type ? for help.
"/var/spool/mail/root": 1 message 1 new
>N  1 root                  Thu May 26 21:35  20/732   "node1 to be master: 192.168.30.30 floating"
[[email protected] ~]# tail -f /var/log/messages   #the vip configured on node2 has been removed
...
May 26 21:34:37 node2 Keepalived_vrrp[46904]: VRRP_Instance(VI_1) Received higher prio advert
May 26 21:34:37 node2 Keepalived_vrrp[46904]: VRRP_Instance(VI_1) Entering BACKUP STATE
May 26 21:34:37 node2 Keepalived_vrrp[46904]: VRRP_Instance(VI_1) removing protocol VIPs.
May 26 21:34:37 node2 Keepalived_healthcheckers[46902]: Netlink reflector reports IP 192.168.30.30 removed
May 26 21:34:39 node2 Keepalived_vrrp[46904]: VRRP_Script(chk_nginx) failed

  4. Testing

[[email protected] ~]# curl 192.168.30.30/test.html
hello,keepalived

    Simulate a failover:

[[email protected] keepalived]# service keepalived stop
Stopping keepalived:                                       [  OK  ]
[[email protected] keepalived]# ip addr show
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:40:35:9d brd ff:ff:ff:ff:ff:ff
    inet 192.168.30.10/24 brd 192.168.30.255 scope global eth0
    inet6 fe80::20c:29ff:fe40:359d/64 scope link 
       valid_lft forever preferred_lft forever
[[email protected] keepalived]# service nginx status
nginx is stopped
[[email protected] keepalived]# ip addr show
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:bd:68:23 brd ff:ff:ff:ff:ff:ff
    inet 192.168.30.20/24 brd 192.168.30.255 scope global eth0
    inet 192.168.30.30/24 scope global secondary eth0:0   #the vip has moved to node2
    inet6 fe80::20c:29ff:febd:6823/64 scope link 
       valid_lft forever preferred_lft forever
You have new mail in /var/spool/mail/root
[[email protected] keepalived]# service nginx status
nginx (pid  11412) is running...
[[email protected] ~]# curl 192.168.30.30/test.html
hello,keepalived
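
    Besides stopping keepalived outright, the chk_mantaince_down script configured earlier offers a gentler way to move the VIP for maintenance: creating the flag file makes that check fail, which lowers the node's priority by 2 and lets the 99-priority peer win the election. A sketch of that workflow on node1:

touch /etc/keepalived/down   #node1's effective priority drops from 100 to 98, so node2 (priority 99) takes over the vip
# ...perform the maintenance, then remove the file so node1 preempts the vip again
rm -f /etc/keepalived/down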


V. Using keepalived to Provide HA for ipvs in a Dual-Master (Active/Active) Model

  1. Lab topology

[Figure: lab topology diagram]

  2. Configure the two back-end real servers

[[email protected] ~]# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore 
[[email protected] ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore 
[[email protected] ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce 
[[email protected] ~]# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce 
[[email protected] ~]# ifconfig lo:0 192.168.30.31 netmask 255.255.255.255 broadcast 192.168.30.31 up
[[email protected] ~]# ifconfig lo:1 192.168.30.32 netmask 255.255.255.255 broadcast 192.168.30.32 up
[[email protected] ~]# route add -host 192.168.30.31 dev lo:0
[[email protected] ~]# route add -host 192.168.30.32 dev lo:1
[[email protected] ~]# cd /var/www/html
[[email protected] html]# ls
[[email protected] html]# vim index.html

hello
[[email protected] html]# vim test.html

hello,this is node3
[[email protected] html]# service httpd start
...
# perform the same steps on node4
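
    Since node4 needs exactly the same ARP and loopback configuration, the steps above can be wrapped in a small script and run on every real server (a sketch; the VIPs are the ones used in this lab):

#!/bin/bash
# configure a DR-mode real server for both VIPs of this lab
vip1=192.168.30.31
vip2=192.168.30.32
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
ifconfig lo:0 $vip1 netmask 255.255.255.255 broadcast $vip1 up
ifconfig lo:1 $vip2 netmask 255.255.255.255 broadcast $vip2 up
route add -host $vip1 dev lo:0
route add -host $vip2 dev lo:1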

  3. Configure dual-master HA for ipvs, and use the two ipvs nodes themselves as sorry servers in case the back-end RSs fail

    To build the dual-master model, define two vrrp instances; the two nodes act as master/backup in one instance and, the other way around, as backup/master in the other.

[[email protected] ~]# yum -y install httpd;ssh [email protected] 'yum -y install httpd'
...
[[email protected] ~]# vim /var/www/html/test.html
fallback1
[[email protected] ~]# service httpd start
...
[[email protected] ~]# cd /etc/keepalived/
[[email protected] keepalived]# vim keepalived.conf

! Configuration File for keepalived

global_defs {
   notification_email {
         [email protected]
         [email protected]
   }
   notification_email_from [email protected]
   smtp_connect_timeout 10
   smtp_server 127.0.0.1
   router_id LVS_DEVEL
}

vrrp_script chk_schedown1 {
   script "[[ -f /etc/keepalived/down1 ]] && exit 1 || exit 0"
   interval 2
   weight -2
}

vrrp_script chk_schedown2 {
   script "[[ -f /etc/keepalived/down2 ]] && exit 1 || exit 0"
   interval 2
   weight -2
}

vrrp_instance VI_1 {   #first vrrp instance
    interface eth0
    state MASTER   #node1 is the master in the first instance
    priority 100
    virtual_router_id 51
    garp_master_delay 1

    authentication {
        auth_type PASS
        auth_pass magedu
    }

    track_interface {
       eth0
    }

    virtual_ipaddress {
        192.168.30.31/24 dev eth0 label eth0:0   #vip1
    }

    track_script {
        chk_schedown1
    }
}

virtual_server 192.168.30.31 80 {   #define a virtual (cluster) service
    delay_loop 6
    lb_algo rr   #lvs scheduling algorithm
    lb_kind DR   #lvs forwarding method
    persistence_timeout 30   #persistent connections
    protocol TCP

    sorry_server 127.0.0.1 80   #server used when all back-end real servers have failed

    real_server 192.168.30.13 80 {   #define a back-end RS
        weight 1
        HTTP_GET {   #health-check method for this RS
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
# There are several RS health-check methods: HTTP_GET|SSL_GET|TCP_CHECK|SMTP_CHECK|MISC_CHECK; see man keepalived.conf for details
# To use TCP_CHECK to test the health of each real server, replace the HTTP_GET block above with:
#       TCP_CHECK {
#           connect_port 80
#           connect_timeout 3
#       }
          
    real_server 192.168.30.14 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

vrrp_instance VI_2 {   #second vrrp instance
    interface eth0
    state BACKUP
    priority 99
    virtual_router_id 52   #the virtual router ID of each vrrp instance must be unique
    garp_master_delay 1

    authentication {
        auth_type PASS
        auth_pass magedu
    }

    track_interface {
       eth0
    }

    virtual_ipaddress {
        192.168.30.32/24 dev eth0 label eth0:1   #vip2
    }

    track_script {
        chk_schedown2
    }
}

virtual_server 192.168.30.32 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 30
    protocol TCP

    sorry_server 127.0.0.1 80

    real_server 192.168.30.13 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.30.14 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

[[email protected] keepalived]# scp keepalived.conf [email protected]:/etc/keepalived/
keepalived.conf                                          100% 2694     2.6KB/s   00:00
[[email protected] ~]# vim /var/www/html/test.html
fallback2
[[email protected] ~]# service httpd start
...
[[email protected] ~]# cd /etc/keepalived/
[[email protected] keepalived]# vim keepalived.conf   #make the necessary changes to the keepalived configuration on node2
...
vrrp_instance VI_1 {
...
    state BACKUP
    priority 99
...
}
...
vrrp_instance VI_2 {
...
    state MASTER
    priority 100
...
}
[[email protected] keepalived]# service keepalived start;ssh [email protected] 'service keepalived start'
Starting keepalived:                                       [  OK  ]
Starting keepalived: [  OK  ]

[[email protected] keepalived]# yum -y install ipvsadm;ssh [email protected] 'yum -y install ipvsadm'
...
#installing ipvsadm only makes it easier to inspect the ipvs rules; it is not required
[[email protected] keepalived]# ip addr show   #because node2 has the higher priority in the second vrrp instance, vip2 has been configured on node2
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:bd:68:23 brd ff:ff:ff:ff:ff:ff
    inet 192.168.30.20/24 brd 192.168.30.255 scope global eth0
    inet 192.168.30.32/24 scope global secondary eth0:1
    inet6 fe80::20c:29ff:febd:6823/64 scope link 
       valid_lft forever preferred_lft forever
[[email protected] keepalived]# ipvsadm -L -n   #the ipvs rules have been created
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.30.31:80 rr persistent 30
  -> 192.168.30.13:80             Route   1      0          0         
  -> 192.168.30.14:80             Route   1      0          0         
TCP  192.168.30.32:80 rr persistent 30
  -> 192.168.30.13:80             Route   1      0          0         
  -> 192.168.30.14:80             Route   1      0          0
[[email protected] keepalived]# ip addr show   #vip1 has been configured on node1
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:40:35:9d brd ff:ff:ff:ff:ff:ff
    inet 192.168.30.10/24 brd 192.168.30.255 scope global eth0
    inet 192.168.30.31/24 scope global secondary eth0:0
    inet6 fe80::20c:29ff:fe40:359d/64 scope link 
       valid_lft forever preferred_lft forever
[[email protected] keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.30.31:80 rr persistent 30
  -> 192.168.30.13:80             Route   1      0          0         
  -> 192.168.30.14:80             Route   1      0          0         
TCP  192.168.30.32:80 rr persistent 30
  -> 192.168.30.13:80             Route   1      0          0         
  -> 192.168.30.14:80             Route   1      0          0
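
    With both directors active, each holds one VIP and schedules traffic for it. To confirm that requests are really being forwarded through a given director, the ipvs counters can be watched on that node (a quick sketch):

ipvsadm -L -n --stats   #per-service and per-real-server connection, packet and byte counters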

  4. Testing

[[email protected] ~]# curl 192.168.30.31/test.html   #because persistence is enabled, requests may keep going to the same RS for a while
hello,this is node4
[[email protected] ~]# curl 192.168.30.31/test.html
hello,this is node4
[[email protected] ~]# curl 192.168.30.31/test.html
hello,this is node3
[[email protected] ~]# curl 192.168.30.32/test.html
hello,this is node3
[[email protected] ~]# curl 192.168.30.32/test.html
hello,this is node3
[[email protected] html]# service httpd stop   #stop the service on one RS
Stopping httpd:                                            [  OK  ]
[[email protected] keepalived]# ipvsadm -L -n   #keepalived has detected the failed RS and removed it from the pool
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.30.31:80 rr persistent 30
  -> 192.168.30.13:80             Route   1      0          0         
TCP  192.168.30.32:80 rr persistent 30
  -> 192.168.30.13:80             Route   1      0          1
[[email protected] ~]# curl 192.168.30.31/test.html
hello,this is node3
[[email protected] ~]# curl 192.168.30.31/test.html
hello,this is node3
[[email protected] ~]# curl 192.168.30.32/test.html
hello,this is node3
[[email protected] html]# service httpd stop   #stop the service on the other RS as well
Stopping httpd:                                            [  OK  ]
[[email protected] ~]# ipvsadm -L -n   #the sorry server has been added to the ipvs rules
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.30.31:80 rr persistent 30
  -> 127.0.0.1:80                 Local   1      0          0         
TCP  192.168.30.32:80 rr persistent 30
  -> 127.0.0.1:80                 Local   1      0          0
[[email protected] ~]# curl 192.168.30.31/test.html
fallback1
[[email protected] ~]# curl 192.168.30.32/test.html
fallback2
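
    The tests above cover real-server failure; the dual-master failover itself can be verified the same way as in section IV, by stopping keepalived on one director and checking that both VIPs end up on the surviving node (the ipvs rules for both virtual services are already present on both nodes, so only the VIPs have to move). A sketch:

# on node1: stop the director
service keepalived stop
# on node2: both 192.168.30.31 and 192.168.30.32 should now be configured, and both virtual services keep answering
ip addr show eth0
ipvsadm -L -n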

