Implementing LVS scheduling and LVS + Keepalived high availability
1. Briefly describe the four LVS cluster modes: characteristics and use cases
LVS has four load-balancing modes: VS/NAT (NAT mode), VS/DR (direct routing mode), VS/TUN (tunnel mode), and VS/FULLNAT.
1) NAT mode (VS-NAT)
Principle: the director rewrites the destination IP in the client's packet to the IP of one of the real servers (RS) and forwards it there. The RS processes the request and returns the reply to the director, which rewrites the source IP back to its own address and the destination to the client's IP. Both inbound and outbound traffic therefore pass through the director.
Advantage: the real servers can run any operating system with a TCP/IP stack; only the director needs a routable (public) IP address.
Disadvantage: limited scalability. Because every request and every reply flows through the director, it becomes the bottleneck of the whole system once the number of back-end nodes (ordinary PC servers) grows large, and the traffic converging on it slows everything down.
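As a rough sketch of what this looks like in practice (the addresses here are made up for illustration and are not part of the lab below), a NAT-mode service is built with ipvsadm using -m (masquerading) forwarding, and the real servers must use the director's DIP as their default gateway:
# on the director -- hypothetical VIP 192.168.10.10, real servers 172.16.0.11/12
echo 1 > /proc/sys/net/ipv4/ip_forward                      # the director must forward packets
ipvsadm -A -t 192.168.10.10:80 -s wlc                       # create the virtual service (weighted least-connection)
ipvsadm -a -t 192.168.10.10:80 -r 172.16.0.11:8080 -m -w 1  # -m = NAT forwarding; port mapping (80 -> 8080) is allowed in this mode
ipvsadm -a -t 192.168.10.10:80 -r 172.16.0.12:8080 -m -w 1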
2) IP tunnel mode (VS-TUN)
Principle: for most Internet services the request packets are small while the reply packets are large. In tunnel mode the director wraps the client's packet in a new IP header (addressed to the chosen RS) and forwards it; the RS strips the outer header, restores the original packet, processes it, and replies directly to the client without going back through the director. Because the RS has to decapsulate the packets the director sends it, it must support the IP tunneling (IPIP) protocol, so the RS kernel has to be built with IP tunnel support.
Advantage: the director only distributes requests, while the replies go straight from the RS to the user. This removes most of the data flow from the director, so it is no longer the system bottleneck and can handle a very large request volume: one director can distribute to many real servers, even across different regions over the public Internet.
Disadvantage: the real servers need routable IP addresses, and every server must support "IP Tunneling" (IP encapsulation), which in practice may restrict them to certain Linux systems.
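A minimal sketch of TUN mode, again with made-up addresses: the director forwards with -i (IP-in-IP encapsulation) and each RS terminates the tunnel on tunl0 and answers the client directly:
# on the director -- hypothetical VIP 192.168.10.10, real server 203.0.113.11
ipvsadm -A -t 192.168.10.10:80 -s rr
ipvsadm -a -t 192.168.10.10:80 -r 203.0.113.11 -i -w 1   # -i = tunneling (ipip) forwarding
# on each real server (kernel built with IPIP tunnel support)
modprobe ipip
ifconfig tunl0 192.168.10.10 netmask 255.255.255.255 up  # bind the VIP on the tunnel interface
echo 1 > /proc/sys/net/ipv4/conf/tunl0/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/tunl0/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/tunl0/rp_filter         # relax reverse-path filtering so decapsulated packets are accepted
echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter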
3) Direct routing mode (VS-DR)
Principle: the director and all real servers share the same service IP (VIP), but only the director answers ARP requests for it; the real servers keep silent about that address. The gateway therefore sends all traffic for the VIP to the director. On receiving a packet, the director picks an RS according to the scheduling algorithm, rewrites the destination MAC address to that RS's MAC (the IP stays the same) and forwards the frame. The RS, which also holds the VIP, processes the request as if it had come straight from the client and replies to the client directly.
Because the director rewrites the layer-2 header, the director and the real servers must be in the same broadcast domain, which in simple terms means on the same switch.
Advantage: as in TUN mode, the director only distributes requests and the replies return to the client by a separate route. Compared with VS-TUN, VS-DR needs no tunnel, so most operating systems can be used on the real servers.
Disadvantage (more a limitation than a real drawback): the director's NIC must be on the same physical segment as the real servers' NICs.
4) FULLNAT mode
lvs-fullnat: forwards by rewriting both the source IP and the destination IP of the request packet.
(1) The VIP is a public address while RIP and DIP are private addresses, usually not on the same IP network; the RSs' gateway therefore normally does not point to the DIP.
(2) The request packet the RS receives has the DIP as its source address, so the RS only needs to reply to the DIP; the director then forwards the reply on to the client.
(3) Both requests and responses pass through the director.
(4) Port mapping is supported.
Note: the stock kernel does not support this mode.
2. Describe how LVS-DR works and configure it
Principle: the director and all real servers share the same service IP (VIP), but only the director answers ARP requests for it; the real servers keep silent about that address. The gateway sends all traffic for the VIP to the director, which picks an RS according to the scheduling algorithm, rewrites the destination MAC address to that RS's MAC (the IP stays the same) and forwards the frame. The RS, which also holds the VIP, processes the request as if it had come straight from the client and replies to the client directly.
Because the director rewrites the layer-2 header, the director and the real servers must be in the same broadcast domain, which in simple terms means on the same switch.
Plan: c1, c2, c3, and c4 all run CentOS 7.6
c1 - client
c2 - director (VS), 10.1.1.243
c3 - web1 (RS1), 10.1.1.244
c4 - web2 (RS2), 10.1.1.245
The VIP is 10.0.0.100; c5 (10.1.1.246) is added as a backup director in section 3.
2.1 Install the web servers
[root@c3 ~]# yum install httpd -y
[root@c3 ~]# echo rs1 > /var/www/html/index.html
[root@c3 ~]# systemctl start httpd
[root@c3 ~]# curl c3
rs1
[root@c4 ~]# yum install httpd -y
[root@c4 ~]# echo rs2 > /var/www/html/index.html
[root@c4 ~]# systemctl start httpd
[root@c4 ~]# curl c4
rs2
2.2 Configure the real servers (RS)
[root@c3 ~]# yum install net-tools -y ### net-tools provides the ifconfig command used in rs.sh below
[root@c3 ~]# cat rs.sh
#!/bin/bash
vip=10.0.0.100
mask='255.255.255.255'
dev=lo:1
case $1 in
start)
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore    # reply to ARP only when the target IP is configured on the interface the request arrived on
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce  # never advertise the VIP bound on lo as the source of ARP announcements
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
ifconfig $dev $vip netmask $mask #broadcast $vip up
#route add -host $vip dev $dev
;;
stop)
ifconfig $dev down
echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
;;
*)
echo "Usage: $(basename $0) start|stop"
exit 1
;;
esac
[root@c3 ~]# sh rs.sh start
[root@c3 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet 10.0.0.100/32 scope global lo:1
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:f1:37:a8 brd ff:ff:ff:ff:ff:ff
inet 10.1.1.244/24 brd 10.1.1.255 scope global noprefixroute dynamic eth0
valid_lft 14582sec preferred_lft 14582sec
inet6 fe80::5025:c937:77d0:2b28/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@c4 ~]# yum install net-tools -y
[root@c4 ~]# sh rs.sh start
[root@c4 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet 10.0.0.100/32 scope global lo:1
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:05:32:f0 brd ff:ff:ff:ff:ff:ff
inet 10.1.1.245/24 brd 10.1.1.255 scope global noprefixroute dynamic eth0
valid_lft 16671sec preferred_lft 16671sec
inet6 fe80::96c3:3cc3:b39e:dee3/64 scope link noprefixroute
valid_lft forever preferred_lft forever
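Note that everything rs.sh sets is lost on reboot. If you want the ARP kernel parameters to persist, one option (a sketch; the file name is just a suggestion) is a sysctl drop-in, with the lo:1 address restored from rc.local or a small systemd unit that calls rs.sh start:
cat > /etc/sysctl.d/90-lvs-dr.conf <<'EOF'
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
EOF
sysctl --system    # apply immediately; the file is re-read automatically at boot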
2.3 Configure the director (VS)
[root@c2 ~]# yum install -y ipvsadm ### install ipvsadm, the LVS management tool
[root@c2 ~]# cat vs.sh
#!/bin/bash
vip='10.0.0.100'
iface='lo:1'
mask='255.255.255.255'
port='80'
rs1='10.1.1.244'
rs2='10.1.1.245'
scheduler='rr' ### round-robin makes the test results easy to observe
type='-g' ### -g = direct routing (gateway/DR) forwarding
case $1 in
start)
ifconfig $iface $vip netmask $mask #broadcast $vip up
iptables -F
ipvsadm -A -t ${vip}:${port} -s $scheduler
ipvsadm -a -t ${vip}:${port} -r ${rs1} $type -w 1
ipvsadm -a -t ${vip}:${port} -r ${rs2} $type -w 1
;;
stop)
ipvsadm -C
ifconfig $iface down
;;
*)
echo "Usage $(basename $0) start|stop"
exit 1
esa
[root@c2 ~]# sh vs.sh start
[root@c2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet 10.0.0.100/32 scope global lo:1
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
[root@c2 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.0.0.100:80 rr
-> 10.1.1.244:80 Route 1 0 0
-> 10.1.1.245:80 Route 1 0 0
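The rules above exist only in the kernel and disappear on reboot. If you want them restored after a reboot (and are not yet letting keepalived manage them, as in section 3), the ipvsadm package on CentOS 7 ships a service that reloads rules from /etc/sysconfig/ipvsadm; a quick sketch:
[root@c2 ~]# ipvsadm-save -n > /etc/sysconfig/ipvsadm   # dump the current rules in numeric form
[root@c2 ~]# systemctl enable ipvsadm                   # ipvsadm.service restores the file at boot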
2.4 Test:
[root@c1 ~]# route -n ### no route to the VIP 10.0.0.100 yet
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.1.1.254 0.0.0.0 UG 100 0 0 eth0
10.1.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
[root@c1 ~]# curl 10.0.0.100
^C
[root@c1 ~]# route add -host 10.0.0.100 dev eth0 ### add a host route to the VIP via eth0
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
rs1
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
rs1
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
rs1
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
rs1
[root@c1 ~]# curl 10.0.0.100
rs2
3. Implement LVS + Keepalived high availability
Keepalived uses VRRP (Virtual Router Redundancy Protocol) to provide redundancy for the load balancer. Routers periodically multicast advertisements announcing themselves as master, and priorities are compared among the routers on the network to elect the master and the backup. The master carries the traffic; when it fails, the remaining routers compare priorities again to elect a new master, and the rest stay as backups. This scenario builds on section 2: c2 is the master and c5 is the backup.
3.1 Set up passwordless SSH between c2 and c5
[root@c2 ~]# ssh-keygen -t rsa -P "" ### generate a key pair
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:C1wDPspULjsjJQs/hSUjac50V4BLQXCQFkwxbViT/DM root@c2
The key's randomart image is:
+---[RSA 2048]----+
|+O#*=.=. |
|.Oo&.= . |
|B * O + o |
| = O E o . |
| = * = S |
| o o . . |
| . |
| |
| |
+----[SHA256]-----+
[root@c2 ~]# ssh-copy-id c5 ### copy the public key to c5
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c5 (10.1.1.246)' can't be established.
ECDSA key fingerprint is SHA256:ilZ46J85JC8Xhr2dVvYsUxMGyj17SDhD6/JrhmNy6GY.
ECDSA key fingerprint is MD5:2f:c5:a9:d6:d7:5f:5e:4e:c3:94:7c:92:3a:d2:55:63.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c5's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'c5'"
and check to make sure that only the key(s) you wanted were added.
[root@c2 ~]# ssh c5 ### test passwordless login
Last login: Mon May 25 21:40:03 2020 from 192.168.10.45
[root@c5 ~]#
[root@c5 ~]# ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:abVEGoN7+mbpGU0aZY4VssjdndMC+cjLZYK5Icy+S/U root@c5
The key's randomart image is:
+---[RSA 2048]----+
| .+ +. |
| ..o O.+ o |
| oo.++Bo= . |
| = =X+.+o |
| . +S++= |
| oo.*o |
| .oo.E |
| .. =o |
| .=o |
+----[SHA256]-----+
[root@c5 ~]# ssh-copy-id c2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c2 (10.1.1.243)' can't be established.
ECDSA key fingerprint is SHA256:dldJTKtxApZyQT/FT6WKQsqKgtf4cPuAxBTiLMFdxSk.
ECDSA key fingerprint is MD5:1a:07:07:69:3f:0e:94:b3:f3:c5:04:dc:73:6b:ba:3e.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c2's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'c2'"
and check to make sure that only the key(s) you wanted were added.
[root@c5 ~]# ssh c2
Last login: Mon May 25 21:40:01 2020 from 192.168.10.45
[root@c2 ~]#
3.2 Install and configure Keepalived
3.2.1 First clear the ipvsadm rules on c2
[root@c2 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.0.0.100:80 rr
-> 10.1.1.244:80 Route 1 0 0
-> 10.1.1.245:80 Route 1 0 0
[root@c2 ~]# ls
anaconda-ks.cfg original-ks.cfg vs.sh
[root@c2 ~]# sh vs.sh stop
[root@c2 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
3.2.2 Install the keepalived service and the ipvsadm tool
[root@c2 ~]# yum install keepalived.x86_64 -y
[root@c5 ~]# yum install keepalived -y
[root@c5 ~]# yum install ipvsadm -y
[root@c2 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id node1
    vrrp_mcast_group4 224.0.100.100
}
vrrp_instance VI_1 {
    state MASTER
    interface bond0
    virtual_router_id 5
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        10.0.0.100/24 dev bond0 label bond0:0
    }
}
virtual_server 10.0.0.100 80 {
    delay_loop 1
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 10.1.1.244 80 {
        weight 1
        HTTP_GET {                  ### backend health check
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.1.1.245 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@c2 keepalived]# systemctl start keepalived
[root@c2 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
link/ether 00:0c:29:ba:03:94 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
link/ether 00:0c:29:ba:03:9e brd ff:ff:ff:ff:ff:ff
7: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:0c:29:ba:03:94 brd ff:ff:ff:ff:ff:ff
inet 10.1.1.243/24 brd 10.1.1.255 scope global noprefixroute bond0
valid_lft forever preferred_lft forever
inet 10.0.0.100/24 scope global bond0:0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:feba:394/64 scope link
valid_lft forever preferred_lft forever
[root@c5 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id node2
    vrrp_mcast_group4 224.0.100.100
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 5
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        10.0.0.100/24 dev eth0 label eth0:0
    }
}
virtual_server 10.0.0.100 80 {
    delay_loop 1
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 10.1.1.244 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.1.1.245 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@c5 keepalived]# systemctl start keepalived.service
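Before testing failover, it can be reassuring to confirm that c5 is receiving VRRP advertisements on the multicast group configured above and that keepalived has already programmed the same virtual server into its local IPVS table (a quick check, assuming tcpdump is installed):
[root@c5 ~]# tcpdump -nn -i eth0 host 224.0.100.100   # should show periodic VRRP announcements from c2 (priority 100)
[root@c5 ~]# ipvsadm -Ln                              # the 10.0.0.100:80 service with both real servers should be listed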
3.3 Test
3.3.1 First verify that LVS is working normally
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
rs1
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
rs1
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
rs1
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
rs1
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]#
3.3.2 Stop the keepalived service on c2, then verify that scheduling still works
[root@c2 keepalived]# systemctl stop keepalived
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
rs1
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
rs1
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
rs1
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
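The requests keep being answered because c5 has taken over as master. To confirm that the VIP actually moved, check both directors:
[root@c5 ~]# ip addr show dev eth0 | grep 10.0.0.100    # the VIP should now be bound as eth0:0 on c5
[root@c2 keepalived]# ip addr show | grep 10.0.0.100    # should print nothing while keepalived on c2 is stopped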
3.3.3 Test the health checks on the back-end servers
[root@c3 ~]# systemctl stop httpd
[root@c1 ~]# while true;do curl 10.0.0.100;sleep 1;done
rs2
rs1
rs2
rs1
rs2
curl: (7) Failed connect to 10.0.0.100:80; Connection refused
rs2
curl: (7) Failed connect to 10.0.0.100:80; Connection refused
rs2
rs2
rs2
rs2
rs2
rs2
rs2
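While httpd on c3 is down, the HTTP_GET health check fails and keepalived removes 10.1.1.244 from the IPVS table, which is why only rs2 answers after the first couple of connection errors (those errors are requests that were still scheduled to rs1 before the check kicked in). This can be confirmed on the active director, which at this point is c5:
[root@c5 ~]# ipvsadm -Ln   # only 10.1.1.245:80 should remain until httpd on c3 is started again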
[root@c3 ~]# systemctl start httpd
[root@c1 ~]# while true;do curl 10.0.0.100;sleep 1;done
rs2
rs2
rs2
rs2
rs1
rs2
rs1
rs2
rs1
rs2
rs1
rs2
rs1