LVS + Keepalived + httpd High-Availability Cluster


Experiment environment

(1) This experiment builds a four-node Linux (CentOS 7.4) cluster on VMware Workstation: two load-balancer servers (one master, one backup) and two real web servers that provide HTTP to the outside. Only the httpd package that ships with CentOS is used; no additional servers such as Tomcat or Jexus are installed.

(2) The experiment uses the DR (direct routing) load-balancing mode with a VIP (virtual IP) of 172.18.38.99; clients only need to access this address to reach the web service. The load-balancer master is 172.18.38.100 and the backup is 172.18.38.101; web server A is 172.18.38.200 and web server B is 172.18.38.201.

Experiment preparation

(1) Bind static IP addresses

    [root@lvs_master ~]# nmcli connection modify ens37 ipv4.addresses 172.18.38.100/16 ipv4.method manual connection.autoconnect yes
    [root@lvs_slave ~]# nmcli connection modify ens37 ipv4.addresses 172.18.38.101/16 ipv4.method manual connection.autoconnect yes
    [root@web_server_A ~]# nmcli connection modify ens37 ipv4.addresses 172.18.38.200/16 ipv4.method manual connection.autoconnect yes
    [root@web_server_B ~]# nmcli connection modify ens37 ipv4.addresses 172.18.38.201/16 ipv4.method manual connection.autoconnect yes
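
nmcli only changes the stored connection profile; the new address does not take effect until the connection is re-activated. A minimal check, run on each host (assuming the connection profile is named ens37 as above):

    nmcli connection up ens37
    ip addr show ens37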

(2) Synchronize time

Time is synchronized with the chronyd service.

1. Run the following on all four hosts; note that they must all point at the same, single time server.


    1. Edit the configuration file
       vim /etc/chrony.conf
          # Please consider joining the pool (http://www.pool.ntp.org/join.html).
          server 172.18.0.1 iburst   # point every host at the same time server
    2. Do a rough one-time sync first with ntpdate
       ntpdate 172.18.0.1
       1 Apr 13:23:31 ntpdate[8517]: adjust time server 172.18.0.1 offset -0.350642 sec
    3. Restart the chronyd service
       systemctl restart chronyd.service
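
After restarting chronyd, you can confirm that each host is actually tracking 172.18.0.1, for example:

    chronyc sources -v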

(3) Disable the firewall

    systemctl disable firewalld
    systemctl stop firewalld
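
A quick check that the firewall is really stopped and will stay off after a reboot:

    systemctl is-active firewalld
    systemctl is-enabled firewalld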

(4) LVS + keepalived master/backup configuration

1. Install keepalived on both LVS servers

    yum  -y install keepalived

2. keepalived master + LVS configuration file

vim /etc/keepalived/keepalived.conf 

    ! Configuration File for keepalived

    global_defs {
       notification_email {
         [email protected]   # recipient of failure notification mail
       }
       notification_email_from [email protected]
       smtp_server 127.0.0.1 # local mail server used to send notifications
       smtp_connect_timeout 30 # SMTP connect timeout (seconds)
       router_id proxy1  # ID of this keepalived node
       vrrp_mcast_group4 224.1.1.1 # multicast group all keepalived nodes use to communicate and elect the active node
    }

    vrrp_instance VI_1 {
        state MASTER  # this node starts as the master
        interface ens37 # network interface the VRRP instance runs on
        virtual_router_id 66 # VRRP virtual router ID; must be identical on master and backup
        priority 100  # election priority; the higher value becomes master
        advert_int 1 # heartbeat (advertisement) interval, 1 second
        authentication { # peer authentication
            auth_type PASS  # simple password authentication
            auth_pass 123456 # password, must match on both nodes
        }
        virtual_ipaddress {
            172.18.38.99/16 # shared VIP
        }

    }
    virtual_server 172.18.38.99 80 {
        delay_loop 6  # health-check interval (seconds)
        lb_algo rr  # scheduling algorithm (round robin)
        lb_kind DR  # forwarding mode (direct routing)
        #persistence_timeout 50
        protocol TCP
        sorry_server 127.0.0.1 80 # fallback server used when all real servers are down
        real_server 172.18.38.200 80 {  # backend server IP and port
            weight 1  # scheduling weight
            HTTP_GET {  # health-check method
                url {
                  path /
                  status_code 200
                }
                connect_timeout 3 # connection timeout (seconds)
                nb_get_retry 3 # number of retries
                delay_before_retry 3 # delay between retries (seconds)
            }
        }
        real_server 172.18.38.201 80 {
            weight 1
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
        }
    }
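
Since all keepalived nodes talk on the multicast group 224.1.1.1 configured above, one way to watch the VRRP advertisements (and therefore see which node currently claims the master role) is to capture that group on ens37; a minimal sketch, assuming tcpdump is installed:

    tcpdump -i ens37 -nn host 224.1.1.1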

3. keepalived backup + LVS configuration

vim /etc/keepalived/keepalived.conf

    ! Configuration File for keepalived

    global_defs {
       notification_email {
         [email protected]
       }
       notification_email_from [email protected]
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id proxy2   # unique ID of this node (different from the master's proxy1)
       vrrp_mcast_group4 224.1.1.1
    }

    vrrp_instance VI_1 {
        state BACKUP  # key setting on the backup node (keepalived only accepts MASTER or BACKUP)
        interface ens37
        virtual_router_id 66 # must be the same virtual_router_id as on the master
        priority 80 # lower than the master's priority of 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 123456
        }
        virtual_ipaddress {
            172.18.38.99/16
        }

    }
    virtual_server 172.18.38.99 80 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        #persistence_timeout 50
        protocol TCP
        sorry_server 127.0.0.1 80
        real_server 172.18.38.200 80 {
            weight 1
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
        }
        real_server 172.18.38.201 80 {
            weight 1
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
        }
    }

4. Install the ipvsadm package on both LVS servers

    yum install ipvsadm

5. Start and enable the keepalived service on both LVS hosts

    systemctl start keepalived
    systemctl enable keepalived
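
If anything misbehaves, the VRRP state transitions and the HTTP_GET health-check results are written to the journal on CentOS 7, for example:

    journalctl -u keepalived -f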

6. Note that keepalived now adds the VIP 172.18.38.99 as a secondary address on ens37; it should be held by whichever node is currently in the MASTER state:

    [root@lvs_master ~]# ip a
    ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:50:56:39:48:cc brd ff:ff:ff:ff:ff:ff
        inet 172.18.38.101/16 brd 172.18.255.255 scope global ens37
           valid_lft forever preferred_lft forever
        inet 172.18.38.99/16 scope global secondary ens37
           valid_lft forever preferred_lft forever
        inet6 fe80::a7b9:b100:4f55:480e/64 scope link
           valid_lft forever preferred_lft forever

    [root@lvs_slave ~]# ip a
    ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:50:56:39:48:cc brd ff:ff:ff:ff:ff:ff
        inet 172.18.38.101/16 brd 172.18.255.255 scope global ens37
           valid_lft forever preferred_lft forever
        inet 172.18.38.99/16 scope global secondary ens37
           valid_lft forever preferred_lft forever
        inet6 fe80::a7b9:b100:4f55:480e/64 scope link
           valid_lft forever preferred_lft forever

7. View the LVS scheduling rules

    [root@lvs_master ~]# ipvsadm -ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  172.18.38.99:80 rr
      -> 172.18.38.200:80             Route   1      0          4         
      -> 172.18.38.201:80             Route   1      0          4    

    [root@lvs_slave ~]# ipvsadm -ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  172.18.38.99:80 rr
      -> 172.18.38.200:80             Route   1      0          0         
      -> 172.18.38.201:80             Route   1      0          0       
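
To see how requests are actually being spread across the two real servers, the per-destination counters can be displayed on the active director, for example:

    ipvsadm -Ln --stats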

(5) Configure the two backend web servers

1. Install and start httpd, then create the test pages

    1. Install the httpd package
        yum install httpd
    2. Start and enable the service
        systemctl start httpd
        systemctl enable httpd
    3. Create a test page on each web server
        Host A:
            echo web_server_A > /var/www/html/index.html
        Host B:
            echo web_server_B > /var/www/html/index.html
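
Before putting the real servers behind LVS, it is worth confirming that each one serves its own page locally, for example:

    curl http://127.0.0.1/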

2. Run the following script on both web servers; it binds the VIP to the lo interface and sets the kernel ARP parameters required for DR mode (a usage example follows the script).

vim lvs_br_rs.sh
    #!/bin/bash
    #Author:wangxiaochun
    #Date:2017-08-13
    vip=172.18.38.99   # change only this IP to match your VIP
    mask='255.255.255.255'
    dev=lo:1

    case $1 in
    start)
        # reply to ARP only for addresses configured on the receiving interface,
        # so the real server never answers ARP requests for the VIP held on lo
        echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        # when sending ARP, announce the best matching local address, never the VIP on lo
        echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
        # bind the VIP to lo:1 with a /32 mask so the RS accepts packets addressed to it
        ifconfig $dev $vip netmask $mask #broadcast $vip up
        #route add -host $vip dev $dev
        echo "The RS Server is Ready!"
        ;;
    stop)
        ifconfig $dev down
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo "The RS Server is Canceled!"
        ;;
    *)
        echo "Usage: $(basename $0) start|stop"
        exit 1
        ;;
    esac
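
A minimal usage sketch, using the file name lvs_br_rs.sh created above: run the script with start on both web servers. Note that the lo:1 address and the ARP settings are not persistent, so the script has to be run again, or hooked into the boot process, after a reboot.

    chmod +x lvs_br_rs.sh
    ./lvs_br_rs.sh start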

3. Check the lo interface

    [root@web_server_A ~]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 172.18.38.99/32 scope global lo:1
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever

    [root@web_server_B ~]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 172.18.38.99/32 scope global lo:1
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
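
Besides the lo:1 address, you can verify that the ARP kernel parameters set by the script took effect, for example:

    sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce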

Client testing

1. With all services running

    [root@client ~]# for i in {1..10};do sleep 0.5;curl 172.18.38.99;done
    web_server_B
    web_server_A
    web_server_B
    web_server_A
    web_server_B
    web_server_A
    web_server_B
    web_server_A
    web_server_B
    web_server_A

2. Stop keepalived on the master and test again

    [root@lvs_master ~]# systemctl stop keepalived.service

    [root@client ~]# for i in {1..10};do sleep 0.5;curl 172.18.38.99;done
    web_server_B
    web_server_A
    web_server_B
    web_server_A
    web_server_B
    web_server_A
    web_server_B
    web_server_A
    web_server_B
    web_server_A

The test passes: with keepalived stopped on the master, requests to the VIP are still answered and balanced across both web servers.
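
Two further checks round out the picture; a sketch, using the host names from above. Restarting keepalived on the master should let it reclaim the VIP, since it has the higher priority and preemption is enabled by default. Stopping httpd on one web server should make the HTTP_GET health check fail, so keepalived drops that real server from the IPVS table until it comes back.

    [root@lvs_master ~]# systemctl start keepalived
    [root@lvs_master ~]# ip addr show ens37      # the VIP 172.18.38.99 should reappear here

    [root@web_server_A ~]# systemctl stop httpd
    [root@lvs_master ~]# ipvsadm -ln             # 172.18.38.200:80 should drop out until httpd is started again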
