Load Balancing with LVS/DR + keepalived

Posted in 运维讲堂 (Ops Lecture Hall)



1. Introduction to keepalived

keepalived is software for building highly available systems in distributed deployments; combined with LVS (Linux Virtual Server), it removes the problem of a single machine going down.
keepalived is a solution for making IPVS highly available, based on the VRRP protocol. In an LVS setup, if the front-end director fails, the back-end realservers can no longer receive or answer requests, so keeping the director highly available is critical; otherwise the back-end servers cannot serve at all. keepalived solves exactly this single point of failure (such as the failure of the LVS front-end director). Its working principle is: two servers run keepalived, one as MASTER and the other as BACKUP. Under normal conditions, all packet forwarding and ARP request handling is done by the MASTER; as soon as the MASTER fails, the BACKUP immediately takes over the MASTER's work, and this switchover is very fast.
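The MASTER/BACKUP election described above is driven by VRRP priority: the node advertising the highest priority becomes MASTER. A toy illustration of just the election rule (plain shell; node names and priority values here are hypothetical, and real VRRP also breaks ties by IP address, which this sketch ignores):

```shell
#!/bin/sh
# Toy model of VRRP election: among the advertised priorities,
# the highest one wins and becomes MASTER.
elect_master() {
    best_name=""
    best_prio=-1
    for pair in "$@"; do              # each argument is "name:priority"
        name=${pair%%:*}
        prio=${pair##*:}
        if [ "$prio" -gt "$best_prio" ]; then
            best_name=$name
            best_prio=$prio
        fi
    done
    echo "$best_name"
}

elect_master "214.85:100" "214.86:90"   # the higher-priority node wins
```

If the node with priority 100 disappears, re-running the election over the remaining advertisements yields the priority-90 node, which is exactly the failover behaviour demonstrated later in this article.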

2. Test Environment

Four virtual machines are used for the test; the environment is CentOS 6.6 x86_64. The roles and IPs are as follows:

Server role          IP
-------------------  ---------------
LVS VIP              192.168.214.89
Keepalived Master    192.168.214.85
Keepalived Backup    192.168.214.86
Realserver A         192.168.214.87
Realserver B         192.168.214.88

 

3. Software Installation

1. Install ipvsadm, the administration tool required by LVS

yum install -y ipvsadm

ln -s /usr/src/kernels/`uname -r` /usr/src/linux

lsmod |grep ip_vs

 

# Note: on CentOS 6.x, use ipvsadm version 1.26, and install its dependencies first: yum install libnl* popt* -y

 

Running ipvsadm (or modprobe ip_vs) loads the ip_vs module into the kernel:

[root@test85 ~]# ipvsadm -L -n

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

 -> RemoteAddress:Port          Forward Weight ActiveConn InActConn

 

# IP Virtual Server version 1.2.1 ---- version of the ip_vs kernel module

 

 

2. Install keepalived

yum install -y keepalived

chkconfig keepalived on

Note: on CentOS 7 systems, enable start-at-boot with systemctl enable keepalived instead.


4. keepalived Configuration

The keepalived configuration is shown below. The copy shown, with state BACKUP and priority 90, is the one on the 214.86 backup; the 214.85 master uses the same file apart from its state and a higher priority.

! Configuration File for keepalived

global_defs {
   notification_email {
     charles@test.com
   }
   notification_email_from reportlog@test.com
   smtp_server mail.test.com
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_sync_group VG1 {
   group {
      VI_1
   }
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 55
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.214.89
    }
}

virtual_server 192.168.214.89 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    #nat_mask 255.255.255.0
    persistence_timeout 10
    protocol TCP

    real_server 192.168.214.87 80 {
        weight 100
       TCP_CHECK {
            connect_timeout 3
            connect_port 80
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.214.88 80 {
        weight 100
       TCP_CHECK {
            connect_timeout 3
            connect_port 80
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
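The master-side vrrp_instance is not reproduced in this article; a sketch of it, assuming it differs from the backup's only in state and priority (the priority value 100 is a typical but hypothetical choice — anything above the backup's 90 works):

```
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 55
    priority 100          # must be higher than the backup's 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.214.89
    }
}
```

The virtual_server block is identical on both nodes, so that whichever node holds the VIP schedules to the same realservers.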


5. Realserver Setup

In DR mode, each backend realserver must run the following script, which binds the VIP to the loopback interface and suppresses ARP replies for it:

#!/bin/bash
# description: Configure the LVS VIP on the realserver loopback (lo:0)
# Written by: Charles

VIP1=192.168.214.89
. /etc/rc.d/init.d/functions

case "$1" in
start)
       ifconfig lo:0 $VIP1 netmask 255.255.255.255 broadcast $VIP1
       /sbin/route add -host $VIP1 dev lo:0
       echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
       echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
       echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
       echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
       sysctl -p >/dev/null 2>&1
       echo "RealServer Start OK"
       ;;
stop)
       ifconfig lo:0 down
       route del $VIP1 >/dev/null 2>&1
       echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
       echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
       echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
       echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
       echo "RealServer Stopped"
       ;;
*)
       echo "Usage: $0 {start|stop}"
       exit 1
esac

exit 0


# Run realserver.sh start to bind the VIP and realserver.sh stop to release it

# Make the script executable (chmod 755) and call it from /etc/rc.d/rc.local so it runs at boot


6. Start keepalived and Check the Result

Start the keepalived service on both 214.85 and 214.86.

 

On the 214.85 master, check the interface addresses:

[root@test85 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:85:7a:67 brd ff:ff:ff:ff:ff:ff
    inet 192.168.214.85/24 brd 192.168.214.255 scope global eth0
    inet 192.168.214.89/32 scope global eth0
    inet6 fe80::20c:29ff:fe85:7a67/64 scope link
       valid_lft forever preferred_lft forever


Check the log on 214.85: it has successfully entered the MASTER state.

[root@test85 ~]# tail -f /var/log/messages
May 4 14:12:34 test85 Keepalived_vrrp[7977]: VRRP_Instance(VI_1) Entering MASTER STATE
May 4 14:12:34 test85 Keepalived_vrrp[7977]: VRRP_Instance(VI_1) setting protocol VIPs.
May 4 14:12:34 test85 Keepalived_healthcheckers[7975]: Netlink reflector reports IP 192.168.214.89 added
May 4 14:12:34 test85 Keepalived_vrrp[7977]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.214.89
May 4 14:12:34 test85 Keepalived_vrrp[7977]: VRRP_Group(VG1) Syncing instances to MASTER state
May 4 14:12:36 test85 ntpd[1148]: Listen normally on 7 eth0 192.168.214.89 UDP 123
May 4 14:12:36 test85 ntpd[1148]: peers refreshed
May 4 14:12:39 test85 Keepalived_vrrp[7977]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.214.89
May 4 14:12:40 test85 root[7924] 192.168.5.80 53823 192.168.214.85 22: #1525414360
May 4 14:12:40 test85 root[7924] 192.168.5.80 53823 192.168.214.85 22: ip addr


Check the log on 214.86: it has successfully entered the BACKUP state.

May 4 14:12:37 web86 Keepalived_vrrp[31009]: Using LinkWatch kernel netlink reflector...
May 4 14:12:37 web86 Keepalived_vrrp[31009]: VRRP_Instance(VI_1) Entering BACKUP STATE
May 4 14:12:37 web86 Keepalived_vrrp[31009]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
May 4 14:12:37 web86 Keepalived_healthcheckers[31007]: Opening file '/etc/keepalived/keepalived.conf'.
May 4 14:12:37 web86 Keepalived_healthcheckers[31007]: Configuration is using : 14713 Bytes
May 4 14:12:37 web86 Keepalived_healthcheckers[31007]: Using LinkWatch kernel netlink reflector...
May 4 14:12:37 web86 Keepalived_healthcheckers[31007]: Activating healthchecker for service [192.168.214.87]:80
May 4 14:12:37 web86 Keepalived_healthcheckers[31007]: Activating healthchecker for service [192.168.214.88]:80


After the script has been started on the realservers, check the interfaces: the VIP is bound to the loopback.

[root@web87 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 192.168.214.89/32 brd 192.168.214.89 scope global lo:0
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:38:31:ad brd ff:ff:ff:ff:ff:ff
    inet 192.168.214.87/24 brd 192.168.214.255 scope global eth0
    inet6 fe80::20c:29ff:fe38:31ad/64 scope link
       valid_lft forever preferred_lft forever


Check the corresponding LVS connection information with ipvsadm -L -n:
[root@test85 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.214.89:80 rr persistent 10
  -> 192.168.214.87:80            Route   100    2          2   
  -> 192.168.214.88:80            Route   100    0          0
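In the output above, the rr scheduler rotates new connections across the two realservers, while "persistent 10" (from persistence_timeout) pins a given client to the same realserver for 10 seconds. A toy sketch of plain round-robin selection over the two realservers (shell, illustrative only; not how ipvs implements it internally):

```shell
#!/bin/sh
# Round-robin over the two realservers from the ipvsadm output above.
servers="192.168.214.87 192.168.214.88"
count=$(echo $servers | wc -w)
i=0

pick() {
    # select the next server, cycling through the list
    n=$(( i % count + 1 ))
    echo "$servers" | cut -d' ' -f"$n"
    i=$(( i + 1 ))
}

pick   # 192.168.214.87
pick   # 192.168.214.88
pick   # 192.168.214.87
```

With persistence active, repeated requests from the same client within the timeout would keep hitting the server picked first rather than rotating.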


7. Testing keepalived


After confirming that normal access through the VIP works, we simulate a failure of the LVS master node. The log on 214.86 shows the backup taking over the MASTER role:

May 4 14:35:34 web86 Keepalived_vrrp[31009]: VRRP_Instance(VI_1) Transition to MASTER STATE
May 4 14:35:34 web86 Keepalived_vrrp[31009]: VRRP_Group(VG1) Syncing instances to MASTER state
May 4 14:35:35 web86 Keepalived_vrrp[31009]: VRRP_Instance(VI_1) Entering MASTER STATE
May 4 14:35:35 web86 Keepalived_vrrp[31009]: VRRP_Instance(VI_1) setting protocol VIPs.
May 4 14:35:35 web86 Keepalived_vrrp[31009]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.214.89
May 4 14:35:35 web86 Keepalived_healthcheckers[31007]: Netlink reflector reports IP 192.168.214.89 added
May 4 14:35:36 web86 ntpd[1230]: Listen normally on 7 eth0 192.168.214.89 UDP 123
May 4 14:35:36 web86 ntpd[1230]: peers refreshed
May 4 14:35:40 web86 Keepalived_vrrp[31009]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.214.89


Then bring 214.85 back. Because it has the higher priority, it preempts and reclaims the MASTER state from 214.86, and 214.86 accordingly returns to its original BACKUP state.


Log on 214.85, showing it returning to the MASTER state:

May 4 14:41:55 test85 Keepalived_vrrp[8066]: VRRP_Instance(VI_1) Transition to MASTER STATE
May 4 14:41:55 test85 Keepalived_vrrp[8066]: VRRP_Instance(VI_1) Received lower prio advert, forcing new election
May 4 14:41:55 test85 Keepalived_vrrp[8066]: VRRP_Group(VG1) Syncing instances to MASTER state
May 4 14:41:56 test85 Keepalived_vrrp[8066]: VRRP_Instance(VI_1) Entering MASTER STATE
May 4 14:41:56 test85 Keepalived_vrrp[8066]: VRRP_Instance(VI_1) setting protocol VIPs.
May 4 14:41:56 test85 Keepalived_vrrp[8066]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.214.89
May 4 14:41:56 test85 Keepalived_healthcheckers[8064]: Netlink reflector reports IP 192.168.214.89 added
May 4 14:41:58 test85 ntpd[1148]: Listen normally on 8 eth0 192.168.214.89 UDP 123
May 4 14:41:58 test85 ntpd[1148]: peers refreshed
May 4 14:42:01 test85 Keepalived_vrrp[8066]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.214.89


Log on 214.86: having received a higher-priority advertisement, it drops from its temporary MASTER state back to BACKUP:

May 4 14:35:34 web86 Keepalived_vrrp[31009]: VRRP_Group(VG1) Syncing instances to MASTER state
May 4 14:35:35 web86 Keepalived_vrrp[31009]: VRRP_Instance(VI_1) Entering MASTER STATE
May 4 14:35:35 web86 Keepalived_vrrp[31009]: VRRP_Instance(VI_1) setting protocol VIPs.
May 4 14:35:35 web86 Keepalived_vrrp[31009]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.214.89
May 4 14:35:35 web86 Keepalived_healthcheckers[31007]: Netlink reflector reports IP 192.168.214.89 added
May 4 14:35:36 web86 ntpd[1230]: Listen normally on 7 eth0 192.168.214.89 UDP 123
May 4 14:35:36 web86 ntpd[1230]: peers refreshed
May 4 14:35:40 web86 Keepalived_vrrp[31009]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.214.89
May 4 14:36:41 web86 root[30963] 192.168.5.80 53824 192.168.214.86 22: #1525415801
May 4 14:36:41 web86 root[30963] 192.168.5.80 53824 192.168.214.86 22: ip addr
May 4 14:41:55 web86 Keepalived_vrrp[31009]: VRRP_Instance(VI_1) Received higher prio advert
May 4 14:41:55 web86 Keepalived_vrrp[31009]: VRRP_Instance(VI_1) Entering BACKUP STATE
May 4 14:41:55 web86 Keepalived_vrrp[31009]: VRRP_Instance(VI_1) removing protocol VIPs.
May 4 14:41:55 web86 Keepalived_vrrp[31009]: VRRP_Group(VG1) Syncing instances to BACKUP state
May 4 14:41:55 web86 Keepalived_healthcheckers[31007]: Netlink reflector reports IP 192.168.214.89 removed
May 4 14:41:56 web86 ntpd[1230]: Deleting interface #7 eth0, 192.168.214.89#123, interface stats: received=0, sent=0, dropped=0, active_time=380 secs


Finally, simulate the web service on realserver 214.87 going down, and verify that the VIP sends requests only to 214.88.

The log shows that the keepalived healthchecker detected port 80 on 214.87 as unreachable and removed it from the virtual server:

 

May 4 14:48:00 test85 Keepalived_healthcheckers[8064]: TCP connection to [192.168.214.87]:80 failed !!!
May 4 14:48:00 test85 Keepalived_healthcheckers[8064]: Removing service [192.168.214.87]:80 from VS [192.168.214.89]:80

 

 

When the service on 214.87 is detected as healthy again, it is added back to the virtual server:

May 4 14:52:55 test85 Keepalived_healthcheckers[8064]: TCP connection to [192.168.214.87]:80 success.
May 4 14:52:55 test85 Keepalived_healthcheckers[8064]: Adding service [192.168.214.87]:80 to VS [192.168.214.89]:80
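Conceptually, the TCP_CHECK healthchecker just attempts a TCP connect and retries a few times before declaring the realserver down. A rough stand-in for that logic (using bash's /dev/tcp trick; the host and port below are placeholders for illustration, not the lab machines):

```shell
#!/bin/bash
# Minimal imitation of TCP_CHECK: try to connect up to 3 times
# (cf. nb_get_retry), waiting between attempts (cf. delay_before_retry).
check() {
    host=$1; port=$2
    tries=3
    while [ "$tries" -gt 0 ]; do
        if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
            echo up
            return 0
        fi
        tries=$((tries - 1))
        sleep 1   # stands in for delay_before_retry
    done
    echo down
    return 1
}

check 127.0.0.1 9 || true   # the discard port is normally closed, so this reports "down"
```

keepalived does the same probe every delay_loop seconds and, on a "down" verdict, removes the realserver from the virtual server exactly as the log above shows.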



