Keepalived + LVS dual-master mode for highly available load balancing
Posted by uestc2007
LVS is a clustering technology that combines IP load balancing with content-based request dispatching. The director (load balancer) has very high throughput: it spreads requests evenly across the back-end servers and automatically masks failed servers, so a group of machines appears as a single high-performance, highly available virtual server. The cluster structure is transparent to clients, and neither the client nor the server programs need to be modified. LVS works at layer 4, inside the kernel via the ipvs module, so forwarding adds very little overhead.
There are currently two ways to build a highly available pair of directors:
1) Active/standby (master/backup): two servers sit in front, one master and one hot standby. In normal operation the master holds a public virtual IP and provides the load-balancing service while the standby sits idle; when the master fails, the standby takes over the virtual IP and continues the service. As long as the master stays healthy, however, the standby is wasted, which is not economical for sites with only a few servers.
2) Active/active (dual master): two load balancers are each other's master and backup and both stay active, so neither machine is wasted. Each one holds its own public virtual IP and provides load balancing; when one of them fails, the other takes over the failed node's virtual IP and temporarily carries all of the traffic. This scheme is cost-effective and fits the architecture used here very well.
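In keepalived terms, the dual-master layout is simply two VRRP instances: each instance owns one of the two public VIPs and is MASTER on one director while BACKUP on the other, so both directors forward traffic in normal operation and either one can absorb the other's VIP on failure. This is exactly how the configuration in section III below is laid out.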
I. Environment
Operating system:
[root@centos-4 ~]# cat /etc/redhat-release
CentOS release 6.9 (Final)
Server roles:
KA1:    192.168.5.129  centos-1
KA2:    192.168.5.128  centos-4
VIP1:   192.168.5.200  129 master / 128 backup
VIP2:   192.168.5.210  128 master / 129 backup
Web1:   192.168.5.131  centos-2
Web2:   192.168.5.132  centos-3
Client: 192.168.5.140  centos-5
II. Installation
Install the build dependencies:
(Run the following steps on both KA1 and KA2.)
[root@centos-1 ~]# yum -y install gcc pcre-devel zlib-devel openssl-devel
[root@centos-1 ~]# cd /usr/local/src/
[root@centos-1 src]# wget http://nginx.org/download/nginx-1.9.7.tar.gz
Build and install nginx:
[root@centos-1 src]# tar -zvxf nginx-1.9.7.tar.gz
[root@centos-1 src]# cd nginx-1.9.7
[root@centos-1 nginx-1.9.7]# ./configure --prefix=/usr/local/nginx --user=nginx --group=nginx --with-http_ssl_module --with-http_flv_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre
[root@centos-1 nginx-1.9.7]# make && make install
[root@centos-1 ~]# yum install -y keepalived
[root@centos-1 ~]# yum install -y ipvsadm
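A quick sanity check, not part of the original steps, is to confirm that the ip_vs kernel module loads and that ipvsadm can talk to it (the virtual-server table stays empty until keepalived creates it):
[root@centos-1 ~]# modprobe ip_vs
[root@centos-1 ~]# lsmod | grep ip_vs
[root@centos-1 ~]# ipvsadm -Ln    # empty table for now; keepalived will populate it later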
(Install nginx on the web1 and web2 servers.)
[root@centos-2 ~]# yum -y install gcc pcre-devel zlib-devel openssl-devel
[root@centos-2 ~]# cd /usr/local/src/
[root@centos-2 src]# wget http://nginx.org/download/nginx-1.9.7.tar.gz
Build and install nginx:
[root@centos-2 src]# tar -zvxf nginx-1.9.7.tar.gz
[root@centos-2 src]# cd nginx-1.9.7
[root@centos-2 nginx-1.9.7]# ./configure --prefix=/usr/local/nginx --user=nginx --group=nginx --with-http_ssl_module --with-http_flv_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre
[root@centos-2 nginx-1.9.7]# make && make install
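Because ./configure is run with --user=nginx --group=nginx, an nginx account has to exist on every node where nginx was built this way; the article does not show that step, so the following is only a minimal sketch (the account options are assumptions):
[root@centos-2 ~]# useradd -M -s /sbin/nologin nginx    # no home directory, no login shell
[root@centos-2 ~]# /usr/local/nginx/sbin/nginx          # start nginx on each real server
[root@centos-2 ~]# curl -I http://127.0.0.1             # expect an HTTP/1.1 200 OK header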
III. Configuration
(On all servers.)
[root@centos-1 ~]# cat /etc/sysconfig/selinux
SELINUX=disabled
[root@centos-1 ~]# getenforce
Disabled
[root@centos-1 ~]# service iptables stop
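Editing /etc/sysconfig/selinux only takes effect after a reboot, and "service iptables stop" does not survive one; to disable both immediately and persistently on CentOS 6, the following two commands are commonly added (they are not in the original article):
[root@centos-1 ~]# setenforce 0             # prints an error and does nothing if SELinux is already disabled
[root@centos-1 ~]# chkconfig iptables off   # keep iptables disabled across reboots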
1. Configure keepalived:
(On KA1.)
[root@centos-1 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
    }
    router_id LVS_DEVEL
}
vrrp_script chk_http_port {
    script "/opt/check_nginx.sh"
    interval 2
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.5.200
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 50
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.5.210
    }
    track_script {
        chk_http_port
    }
}
virtual_server 192.168.5.200 80 {     # cluster service on VIP1, port 80
    delay_loop 3
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80
    real_server 192.168.5.131 80 {    # real server 1 in this service
        weight 1                      # weight 1
        HTTP_GET {                    # health check for RS1
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
    real_server 192.168.5.132 80 {    # real server 2 in this service
        weight 1                      # weight 1
        HTTP_GET {                    # health check for RS2
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
virtual_server 192.168.5.210 80 {     # cluster service on VIP2, port 80
    delay_loop 3
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80
    real_server 192.168.5.131 80 {    # real server 1 in this service
        weight 1                      # weight 1
        HTTP_GET {                    # health check for RS1
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
    real_server 192.168.5.132 80 {    # real server 2 in this service
        weight 1                      # weight 1
        HTTP_GET {                    # health check for RS2
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
(On KA2.)
[root@centos-4 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
    }
    router_id LVS_DEVEL
}
vrrp_script chk_http_port {
    script "/opt/check_nginx.sh"
    interval 2
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90                       # lower than KA1, which is MASTER for VIP1
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.5.200
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 50
    priority 100                      # higher than KA1, which is BACKUP for VIP2
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.5.210
    }
    track_script {
        chk_http_port
    }
}
virtual_server 192.168.5.200 80 {     # cluster service on VIP1, port 80
    delay_loop 3
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80
    real_server 192.168.5.131 80 {    # real server 1 in this service
        weight 1                      # weight 1
        HTTP_GET {                    # health check for RS1
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
    real_server 192.168.5.132 80 {    # real server 2 in this service
        weight 1                      # weight 1
        HTTP_GET {                    # health check for RS2
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
virtual_server 192.168.5.210 80 {     # cluster service on VIP2, port 80
    delay_loop 3
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80
    real_server 192.168.5.131 80 {    # real server 1 in this service
        weight 1                      # weight 1
        HTTP_GET {                    # health check for RS1
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
    real_server 192.168.5.132 80 {    # real server 2 in this service
        weight 1                      # weight 1
        HTTP_GET {                    # health check for RS2
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
Write a script that monitors nginx:
The idea is to check whether nginx is running on the local machine; if it is not, restart it, wait a couple of seconds and check again. If nginx still is not running, give up, stop keepalived and send an alert mail, so that the other node takes over the VIPs.
[root@centos-4 ~]# cat /opt/check_nginx.sh
#!/bin/bash
# count nginx processes; if nginx is down, try to restart it once
check=$(ps -C nginx --no-heading | wc -l)
IP=$(ip addr | grep eth0 | awk 'NR==2{print $2}' | awk -F '/' '{print $1}')
if [ "${check}" = "0" ]; then
    /usr/local/nginx/sbin/nginx
    sleep 2
    # recheck after the restart attempt; if nginx is still down, stop keepalived
    # so the other node takes over the VIPs, and send an alert mail
    counter=$(ps -C nginx --no-heading | wc -l)
    if [ "${counter}" = "0" ]; then
        /etc/init.d/keepalived stop
        echo "check $IP nginx is down" | mail -s "check keepalived nginx" *********@qq.com
    fi
fi
(The same monitoring script is used on KA1.)
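The script has to be executable for keepalived's vrrp_script to run it, and it can be exercised by hand first (the exit code is what keepalived evaluates); run this on both directors:
[root@centos-1 ~]# chmod +x /opt/check_nginx.sh
[root@centos-1 ~]# /opt/check_nginx.sh; echo $?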
2. On the two back-end web servers, bind the VIPs to the loopback interface, add host routes for them and suppress ARP, and configure nginx on both servers (the nginx configuration itself is not shown here):
(To make this easy to repeat, it is wrapped in a script; run it on both web1 and web2.)
[root@centos-2 ~]# cat lvs.sh
#!/bin/bash
# real server config: bind VIPs on lo, add host routes, suppress ARP
# legehappy
Vip1=192.168.5.200
Vip2=192.168.5.210
source /etc/rc.d/init.d/functions
case $1 in
start)
    echo "config vip route arp" > /tmp/lvs1.txt
    /sbin/ifconfig lo:0 $Vip1 broadcast $Vip1 netmask 255.255.255.255 up
    /sbin/ifconfig lo:1 $Vip2 broadcast $Vip2 netmask 255.255.255.255 up
    # do not answer ARP for the VIPs and do not announce them
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    route add -host $Vip1 dev lo:0
    route add -host $Vip2 dev lo:1
    ;;
stop)
    echo "delete vip route arp" > /tmp/lvs2.txt
    /sbin/ifconfig lo:0 down
    /sbin/ifconfig lo:1 down
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    route del -host $Vip1 dev lo:0
    route del -host $Vip2 dev lo:1
    ;;
*)
    echo "Usage: $0 (start | stop)"
    exit 1
esac
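On web1 and web2 the script is then run with the start argument; a quick way to confirm that both VIPs ended up on the loopback interface (paths and names as in the script above):
[root@centos-2 ~]# chmod +x lvs.sh
[root@centos-2 ~]# ./lvs.sh start
[root@centos-2 ~]# ip addr show lo | grep 192.168.5    # expect 192.168.5.200/32 on lo:0 and 192.168.5.210/32 on lo:1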
(Page content served by the two back-end nginx servers:)
[root@centos-5 ~]# curl 192.168.5.131
10.2
[root@centos-5 ~]# curl 192.168.5.132
10.3
3. Start keepalived on both front-end servers. For VIP 192.168.5.200, centos-1 is MASTER; for VIP 192.168.5.210, centos-1 is BACKUP.
[root@centos-1 ~]# service keepalived start
[root@centos-4 ~]# service keepalived start
Check the log file:
[root@centos-1 ~]# cat /var/log/messages
Oct 19 22:00:22 centos-1 Keepalived_vrrp[46184]: VRRP_Instance(VI_2) Sending gratuitous ARPs on eth0 for 192.168.5.210
Oct 19 22:00:22 centos-1 Keepalived_healthcheckers[46183]: Netlink reflector reports IP 192.168.5.210 added
Oct 19 22:00:24 centos-1 Keepalived_vrrp[46184]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.5.200
Oct 19 22:00:27 centos-1 Keepalived_vrrp[46184]: VRRP_Instance(VI_2) Sending gratuitous ARPs on eth0 for 192.168.5.210
(Because KA1 started keepalived first, both VIPs initially land on KA1; once keepalived is up on the second node, KA2 preempts VIP2 back.)
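Once keepalived is running, the LVS virtual-server table it generated can be inspected with ipvsadm on either director; both VIPs should be listed with the two real servers in DR ("Route") forwarding and the rr scheduler, matching the virtual_server sections above:
[root@centos-1 ~]# ipvsadm -Ln    # expect 192.168.5.200:80 and 192.168.5.210:80, each -> 192.168.5.131:80 and 192.168.5.132:80, Route, rr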
[root@centos-4 ~]# cat /var/log/messages
Oct 19 22:01:38 centos-4 Keepalived_healthcheckers[15009]: Netlink reflector reports IP 192.168.5.210 added
Oct 19 22:01:38 centos-4 avahi-daemon[1513]: Registering new address record for 192.168.5.210 on eth0.IPv4.
Oct 19 22:01:38 centos-4 Keepalived_vrrp[15010]: VRRP_Instance(VI_2) Sending gratuitous ARPs on eth0 for 192.168.5.210
Oct 19 22:01:43 centos-4 Keepalived_vrrp[15010]: VRRP_Instance(VI_2) Sending gratuitous ARPs on eth0 for 192.168.5.210
Check ip addr:
[root@centos-1 keepalived]# ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:0d:f3:5d brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.129/24 brd 192.168.5.255 scope global eth0
    inet 192.168.5.200/32 scope global eth0
[root@centos-4 keepalived]# ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:3a:84:30 brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.128/24 brd 192.168.5.255 scope global eth0
    inet 192.168.5.210/32 scope global eth0
(Restart nginx and keepalived on both KA1 and KA2.)
[root@centos-1 ~]# /usr/local/nginx/sbin/nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful    ### only reload nginx after the configuration test passes
[root@centos-1 ~]# /usr/local/nginx/sbin/nginx -s reload
[root@centos-4 ~]# /usr/local/nginx/sbin/nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
[root@centos-4 ~]# /usr/local/nginx/sbin/nginx -s reload
[root@centos-1 ~]# service keepalived restart
Stopping keepalived:                                       [  OK  ]
Starting keepalived:                                       [  OK  ]
[root@centos-4 ~]# service keepalived restart
Stopping keepalived:                                       [  OK  ]
Starting keepalived:                                       [  OK  ]
IV. Testing
Verification (first make sure the directors themselves can reach the back-end real servers normally):
(1) Test normal operation by accessing VIP1 and VIP2:
VIP1:
[root@centos-5 ~]# curl 192.168.5.200
10.2
[root@centos-5 ~]# curl 192.168.5.200
10.3
[root@centos-5 ~]# curl 192.168.5.200
10.2
[root@centos-5 ~]# curl 192.168.5.200
10.3
VIP2:
[root@centos-5 ~]# curl 192.168.5.210
10.3
[root@centos-5 ~]# curl 192.168.5.210
10.2
[root@centos-5 ~]# curl 192.168.5.210
10.3
[root@centos-5 ~]# curl 192.168.5.210
10.2
(2) Stop keepalived on KA1 (simulating a keepalived failure on KA1):
[root@centos-1 ~]# service keepalived stop
Stopping keepalived:
[root@centos-1 ~]# ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:0d:f3:5d brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.129/24 brd 192.168.5.255 scope global eth0
    inet6 fe80::20c:29ff:fe0d:f35d/64 scope link
       valid_lft forever preferred_lft forever
(ip addr on KA1 shows that it no longer holds any VIP.)
Check the log file on KA2:
[root@centos-4 ~]# cat /var/log/messages
Oct 19 23:20:46 centos-4 Keepalived_vrrp[15412]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.5.200
Oct 19 23:20:46 centos-4 avahi-daemon[1513]: Registering new address record for 192.168.5.200 on eth0.IPv4.
Oct 19 23:20:46 centos-4 Keepalived_healthcheckers[15411]: Netlink reflector reports IP 192.168.5.200 added
Oct 19 23:20:51 centos-4 Keepalived_vrrp[15412]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.5.200
(The log shows that KA2 has taken over VIP 192.168.5.200.)
Check ip addr on KA2:
[root@centos-4 ~]# ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:3a:84:30 brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.128/24 brd 192.168.5.255 scope global eth0
    inet 192.168.5.210/32 scope global eth0
    inet 192.168.5.200/32 scope global eth0
(Both VIPs are now on KA2.)
Check that the service behind the VIPs is still served without interruption, now entirely by KA2:
[root@centos-5 ~]# curl 192.168.5.200
10.3
[root@centos-5 ~]# curl 192.168.5.200
10.2
[root@centos-5 ~]# curl 192.168.5.210
10.3
[root@centos-5 ~]# curl 192.168.5.210
10.2
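To complete the test, keepalived can be started again on KA1; VIP 192.168.5.200 should move back to centos-1, because KA1's VI_1 instance is the MASTER with the higher priority and keepalived preempts by default:
[root@centos-1 ~]# service keepalived start
[root@centos-1 ~]# ip addr show eth0 | grep 192.168.5.200    # VIP1 is back on KA1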