Linux Learning Path: Clusters and LVS (25) - 2018-02-17
I. ipvs Schedulers
An ipvs scheduler is classified by whether it takes the current load of each RS (real server) into account when scheduling.
There are two families of methods: static and dynamic.
1. Static methods
Scheduling is decided by the algorithm alone, ignoring RS load:
1. RR: Round Robin
2. WRR: Weighted Round Robin
3. SH: Source Hashing
Implements session stickiness by hashing the source IP address: requests from the same client IP are always sent to the RS picked the first time, binding the session to one server.
4. DH: Destination Hashing
Requests for the same destination address are always forwarded to the RS picked the first time. The typical use case is load balancing across forward-proxy caches, e.g. at a broadband ISP. A short example of selecting a scheduler follows this list.
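As a minimal sketch (the VIP 192.168.1.100:80 matches the addresses used in the sessions later in this article; any scheduler name from the list above can be substituted), picking a scheduler is just the -s flag when the virtual service is created:

ipvsadm -A -t 192.168.1.100:80 -s sh    # source hashing: per-client-IP stickiness
ipvsadm -E -t 192.168.1.100:80 -s wrr   # -E edits the existing service, switching it to WRR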
2. Dynamic methods
Scheduling considers the current load of each RS as well as the algorithm itself: the RS with the smallest Overhead value is chosen.
1. LC: Least Connections, suited to long-lived connections
Overhead = activeconns*256 + inactiveconns
Overhead is the load value; activeconns counts established connections currently exchanging data, inactiveconns counts connections that are established but idle.
2. WLC: Weighted LC, the LVS default scheduling method
Overhead = (activeconns*256 + inactiveconns) / weight
Its weakness shows right at startup: with no connections yet, every RS computes Overhead 0, so the weights have no effect on the first round.
3. SED: Shortest Expected Delay, gives high-weight servers priority for new connections
Overhead = (activeconns + 1)*256 / weight
Its weakness is that when the weights of the RSes differ greatly, the high-weight RS absorbs a disproportionate share of the load.
4. NQ: Never Queue; the first round is spread evenly, after which SED applies
5. LBLC: Locality-Based LC, a dynamic DH algorithm; use case: forward proxying driven by back-end load state (a heavily loaded RS is not picked at random)
6. LBLCR: LBLC with Replication
Solves LBLC's load imbalance by replicating cached content from heavily loaded RSes to lightly loaded ones. The sketch after this list works the WLC and SED formulas through a cold-start example.
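A minimal shell sketch (the connection counts and weights are invented for illustration) that evaluates the WLC and SED formulas above at cold start, showing why SED sends the first connection to the high-weight RS while WLC cannot tell them apart:

#!/bin/bash
# Hypothetical cold-start state: no connections yet on either RS
a1=0; i1=0; w1=3    # RS1: activeconns, inactiveconns, weight
a2=0; i2=0; w2=1    # RS2
echo "WLC RS1=$(( (a1*256 + i1) / w1 )) RS2=$(( (a2*256 + i2) / w2 ))"   # 0 vs 0: a tie, weight ignored
echo "SED RS1=$(( (a1+1)*256 / w1 )) RS2=$(( (a2+1)*256 / w2 ))"         # 85 vs 256: RS1 wins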
II. ipvs
ipvsadm (the userspace administration tool) manages ipvs (the in-kernel framework).
1. ipvs
grep -i -C 10 "ipvs" /boot/config-VERSION-RELEASE.x86_64   # inspect the ipvs support compiled into the kernel
An ipvs cluster involves two management tasks:
managing cluster services
managing the RSes behind each service
Package: ipvsadm
Unit file: ipvsadm.service
Main program: /usr/sbin/ipvsadm
Rule-saving tool: /usr/sbin/ipvsadm-save
Rule-reloading tool: /usr/sbin/ipvsadm-restore
Configuration file: /etc/sysconfig/ipvsadm-config
[root@host ~]# yum install ipvsadm
==========================================================================================
 Package          Arch            Version             Repository        Size
==========================================================================================
Installing:
 ipvsadm          x86_64          1.27-7.el7          base              45 k
Transaction Summary
==========================================================================================
[root@host ~]$ grep -i -A 15 ipvs /boot/config-3.10.0-693.17.1.el7.x86_64
# protocols the scheduler supports
CONFIG_IP_VS_PROTO_TCP=y
CONFIG_IP_VS_PROTO_UDP=y
CONFIG_IP_VS_PROTO_AH_ESP=y
CONFIG_IP_VS_PROTO_ESP=y
CONFIG_IP_VS_PROTO_AH=y
CONFIG_IP_VS_PROTO_SCTP=y
# scheduling algorithms the kernel supports
CONFIG_IP_VS_RR=m
CONFIG_IP_VS_WRR=m
CONFIG_IP_VS_LC=m
CONFIG_IP_VS_WLC=m
CONFIG_IP_VS_LBLC=m
CONFIG_IP_VS_LBLCR=m
CONFIG_IP_VS_DH=m
CONFIG_IP_VS_SH=m
CONFIG_IP_VS_SED=m
CONFIG_IP_VS_NQ=m
[root@host ~]# rpm -qi ipvsadm
ipvsadm is used to set up, maintain, and inspect the virtual server table in the Linux kernel. The Linux Virtual Server can be used to build scalable network services based on a cluster of two or more nodes: the active node of the cluster redirects service requests to a set of server hosts that actually perform the work. Supported features include:
- two transport-layer (layer-4) protocols (TCP and UDP)
- three packet-forwarding methods (NAT, tunneling, and direct routing)
- eight load-balancing algorithms (round robin, weighted round robin, least connections, weighted least connections, locality-based least connections, locality-based least connections with replication, destination hashing, and source hashing)
[root@host ~]# rpm -ql ipvsadm
/etc/sysconfig/ipvsadm-config
/usr/lib/systemd/system/ipvsadm.service
/usr/sbin/ipvsadm
/usr/sbin/ipvsadm-restore    # loads saved rules back into the kernel
/usr/sbin/ipvsadm-save       # dumps the current rules
/usr/share/doc/ipvsadm-1.27
/usr/share/doc/ipvsadm-1.27/README
/usr/share/man/man8/ipvsadm-restore.8.gz
/usr/share/man/man8/ipvsadm-save.8.gz
/usr/share/man/man8/ipvsadm.8.gz
III. The ipvsadm Command
Core functions:
cluster service management (the VS): add, delete, modify
RS management within a cluster service: add, delete, modify
viewing status
Syntax overview:
ipvsadm -A|E -t|u|f service-address [-s scheduler] [-p [timeout]] [-M netmask] [--pe persistence_engine] [-b sched-flags]
ipvsadm -D -t|u|f service-address        # delete a virtual service
ipvsadm -C                               # clear the whole table
ipvsadm -R                               # reload (restore) rules
ipvsadm -S [-n]                          # save rules
ipvsadm -a|e -t|u|f service-address -r server-address [options]
ipvsadm -d -t|u|f service-address -r server-address
ipvsadm -L|l [options]
ipvsadm -Z [-t|u|f service-address]
1. Managing cluster services
Add or modify:
ipvsadm -A|E -t|u|f service-address [-s scheduler] [-p [timeout]]
Delete:
ipvsadm -D -t|u|f service-address
service-address takes one of -t|u|f:
-t: TCP service, written VIP:TCP_PORT
-u: UDP service, written VIP:UDP_PORT
-f: firewall MARK, a number set by the firewall
[-s scheduler]: the scheduling algorithm for the cluster; defaults to wlc
2. Managing the RSes of a cluster service
Add or modify:
ipvsadm -a|e -t|u|f service-address -r server-address [-g|i|m] [-w weight]   # -g|i|m selects one of the three LVS modes
Delete:
ipvsadm -d -t|u|f service-address -r server-address
server-address:
rip[:port]; if port is omitted, no port mapping is done
Options:
LVS mode:
-g: gateway, DR mode (the default)
-i: ipip, TUN mode
-m: masquerade, NAT mode
-w weight: the RS weight
A complete NAT-mode example follows this list.
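As an end-to-end sketch (the VIP 192.168.1.100 and the RSes 172.18.68.103/104 match the session output shown below; the weights 3 and 1 are illustrative), building a weighted-round-robin NAT service takes one -A and two -a calls:

ipvsadm -A -t 192.168.1.100:80 -s wrr                        # create the virtual service
ipvsadm -a -t 192.168.1.100:80 -r 172.18.68.103:80 -m -w 3   # attach RS1 in NAT mode
ipvsadm -a -t 192.168.1.100:80 -r 172.18.68.104:80 -m -w 1   # attach RS2
ipvsadm -ln                                                  # verify the table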
3. Viewing and clearing
Clear all defined rules:
ipvsadm -C
Clear the counters:
ipvsadm -Z [-t|u|f service-address]
View:
ipvsadm -L|l [options]
--numeric, -n: print addresses and port numbers numerically
--exact: print exact (unrounded) counter values
--connection, -c: print the current IPVS connections
--stats: print statistics
--rate: print rate information
ipvs rules live in /proc/net/ip_vs, ipvs connections in /proc/net/ip_vs_conn; the addresses there are hex-encoded, as the sketch below decodes.
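A small sketch (the hex value is taken from the ip_vs_conn output shown below) of decoding one of those hex addresses by hand:

# C0A80165 from /proc/net/ip_vs_conn is 192.168.1.101:
printf '%d.%d.%d.%d\n' 0xC0 0xA8 0x01 0x65    # prints 192.168.1.101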
[root@host ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.100:80 wrr
  -> 172.18.68.103:80             Masq    3      0          0
  -> 172.18.68.104:80             Masq    1      0          0
[root@host ~]# ipvsadm -ln --stats
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port               Conns   InPkts  OutPkts  InBytes OutBytes
  -> RemoteAddress:Port
TCP  192.168.1.100:80                  237     1383     1013    92061   113827
  -> 172.18.68.103:80                  173     1038      693    68681    79459
  -> 172.18.68.104:80                   64      345      320    23380    34368
[root@host ~]# cat /proc/net/ip_vs_conn
Pro FromIP   FPrt ToIP     TPrt DestIP   DPrt State       Expires PEName PEData
TCP C0A80165 A3DE C0A80164 0050 AC124468 0050 TIME_WAIT        38
# i.e. client 192.168.1.101 and its port, then the VIP on port 80, then the RIP and its internal port
# Deleting an RS from a service:
[root@host ~]# ipvsadm -d -t 10.0.0.100:0 -r 172.18.68.104:0
[root@host ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.100:0 rr persistent 360
  -> 172.18.68.103:0              Route   1      0          0
4. Saving and reloading rules
Save (the recommended destination is /etc/sysconfig/ipvsadm):
ipvsadm-save > /PATH/TO/IPVSADM_FILE
ipvsadm -S > /PATH/TO/IPVSADM_FILE
systemctl stop ipvsadm.service    # stopping the unit also saves the current table
Reload:
ipvsadm-restore < /PATH/FROM/IPVSADM_FILE
ipvsadm -R < /PATH/FROM/IPVSADM_FILE
systemctl restart ipvsadm.service    # starting the unit restores the saved table
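A minimal sketch of making the rules survive a reboot, assuming the stock CentOS 7 unit file (which restores /etc/sysconfig/ipvsadm on start):

ipvsadm-save -n > /etc/sysconfig/ipvsadm   # save where the unit expects to find rules
systemctl enable ipvsadm.service           # restore them automatically at boot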
[root@host ~]# ipvsadm-save -n > /etc/sysconfig/ipvsadm   # if this file is missing, the service may fail to start
[root@host ~]# ipvsadm -C
[root@host ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@host ~]# service ipvsadm stop
ipvsadm: Clearing the current IPVS table:                  [  OK  ]
ipvsadm: Unloading modules:                                [  OK  ]
[root@host ~]# service ipvsadm start    # starting the service re-reads the file saved under /etc/sysconfig
ipvsadm: Clearing the current IPVS table:                  [  OK  ]
ipvsadm: Applying IPVS configuration:                      [  OK  ]
[root@host ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.100:80 wrr
  -> 172.18.68.103:80             Masq    3      0          0
  -> 172.18.68.104:80             Masq    1      0          0
5. Design notes
Questions to settle when designing a load-balancing cluster:
(1) Is session persistence required?
(2) Is shared storage required?
Shared storage options: NAS, SAN, DS (distributed storage)
Data synchronization must also be considered.
lvs-nat design points:
(1) RIP and DIP are on the same IP network, and each RIP's gateway must point to the DIP
(2) Port mapping is supported
(3) The Director must have kernel IP forwarding enabled (see the sketch below)
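A minimal sketch of point (3); the sysctl key is standard Linux, and writing it to /etc/sysctl.conf is one common way to persist it:

sysctl -w net.ipv4.ip_forward=1                      # enable forwarding now
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf   # persist across reboots
sysctl -p                                            # re-apply the file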
The cluster nodes also need time synchronization; the chrony session below runs an NTP server on the VS and points the RSes at it.
# Experiment: provide an NTP service for the cluster
# On the VS (director):
[root@host ~]# yum install chrony
[root@host ~]# vim /etc/chrony.conf
server 210.72.145.44 iburst
# Allow NTP client access from local network.
allow 192.168.1.0/24
# Serve time even if not synchronized to any NTP server.
local stratum 10
[root@host ~]# service chronyd start
[root@host ~]# chkconfig chronyd on
# On each RS:
[root@host ~]# vim /etc/chrony.conf
server 192.168.1.100 iburst
# Synchronization lags a little at first; chronyc sources -v reports:
210 Number of sources = 1

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^? 172.18.68.103                 0   6     0     -     +0ns[   +0ns] +/-    0ns
IV. LVS High Availability
The ipvs director performs no health checking of its own, which leaves two weaknesses:
1. If the Director becomes unavailable, the whole system is unavailable: a SPoF (Single Point of Failure).
Solution: high availability, e.g.
keepalived (lightweight) or heartbeat/corosync (heavyweight)
2. If an RS becomes unavailable, the Director will still schedule requests to it.
Solution: have the Director check the health of each RS, disabling it on failure and re-enabling it on recovery:
keepalived, heartbeat/corosync,
or ldirectord (which drives the ipvsadm rules by itself)
Health-check methods (a sketch of all three follows this list):
(a) network layer: ICMP
(b) transport layer: port probing
(c) application layer: requesting some key resource
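As an illustration (the RS address 172.18.68.103 and the test page are the ones used in the later sessions; ldirectord performs the application-layer style of check internally):

RS=172.18.68.103
ping -c1 -W1 "$RS" >/dev/null && echo "icmp ok"                                 # (a) network layer
timeout 1 bash -c "echo > /dev/tcp/$RS/80" && echo "port ok"                    # (b) transport layer (bash /dev/tcp)
curl -s --max-time 2 "http://$RS/test.html" | grep -q test && echo "http ok"    # (c) application layer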
When no RS is usable at all: fall back to a backup server, the so-called sorry server.
Official site: http://horms.net/projects/ldirectord/
1. ldirectord
ldirectord monitors and controls the LVS daemon and can manage the LVS rules on its own.
Package: ldirectord-3.9.6-0rc1.1.1.x86_64.rpm
Files:
/etc/ha.d/ldirectord.cf                          main configuration file
/usr/share/doc/ldirectord-3.9.6/ldirectord.cf    configuration template
/usr/lib/systemd/system/ldirectord.service       service unit
/usr/sbin/ldirectord                             main program
/var/log/ldirectord.log                          log file
/var/run/ldirectord.ldirectord.pid               pid file
2. Sample ldirectord configuration

checktimeout=3            # health-check timeout: an RS that does not answer within 3 s is considered down
checkinterval=1           # probe interval
#fallback=127.0.0.1:80    # address of the sorry server
autoreload=yes            # re-read the configuration file automatically when it changes
logfile="/var/log/ldirectord.log"    # log file
quiescent=no              # yes: set a failed RS's weight to 0; no: remove it from the table
#logfile="local0"         # log to a syslog facility instead
#emailalert="admin@example.com"   # e-mail notifications (placeholder address)
#emailalertfreq=3600
#emailalertstatus=all
virtual=5                 # the VS, given as a firewall mark (FWM) or as IP:port
    real=172.16.0.7:80 gate 2
    real=172.16.0.8:80 gate 1
    fallback=127.0.0.1:80 gate    # sorry server
    service=http
    scheduler=wrr
    checktype=negotiate
    checkport=80
    request="index.html"
    receive="Test Ldirectord"

# Preparing the sorry-server page on the director:
[root@host ~]# echo Sorry Server > /app/website/index.html
[root@host ~]# curl 10.0.0.100
Sorry Server
3. Installing ldirectord

[root@host ~]# yum install ldirectord-3.9.6-0rc1.1.1.x86_64.rpm
Dependencies Resolved
============================================================================================================
 Package                 Arch      Version             Repository                             Size
============================================================================================================
Installing:
 ldirectord              x86_64    3.9.6-0rc1.1.1      /ldirectord-3.9.6-0rc1.1.1.x86_64     191 k
Installing for dependencies:
 cifs-utils              x86_64    4.8.1-20.el6        base                                   65 k
 keyutils                x86_64    1.4-5.el6           base                                   39 k
 nfs-utils               x86_64    1:1.2.3-75.el6      base                                  336 k
 nfs-utils-lib           x86_64    1.1.5-13.el6        base                                   71 k
 perl-IO-Socket-INET6    noarch    2.56-4.el6          base                                   17 k
 perl-MailTools          noarch    2.04-4.el6          base                                  101 k
 perl-Net-SSLeay         x86_64    1.35-10.el6_8.1     base                                  174 k
 perl-Socket6            x86_64    0.23-4.el6          base                                   27 k
 perl-TimeDate           noarch    1:1.16-13.el6       base                                   37 k
 resource-agents         x86_64    3.9.5-46.el6        base                                  389 k
 rpcbind                 x86_64    0.2.0-13.el6        base                                   51 k
Transaction Summary
============================================================================================================
Install      12 Package(s)
[root@host ~]# rpm -ql ldirectord
/etc/ha.d
/etc/ha.d/resource.d
/etc/ha.d/resource.d/ldirectord
/etc/init.d/ldirectord
/etc/logrotate.d/ldirectord
/usr/lib/ocf/resource.d/heartbeat/ldirectord
/usr/sbin/ldirectord
/usr/share/doc/ldirectord-3.9.6
/usr/share/doc/ldirectord-3.9.6/COPYING
/usr/share/doc/ldirectord-3.9.6/ldirectord.cf
/usr/share/man/man8/ldirectord.8.gz
# Experiment: configuring ldirectord

# Start from the shipped template:
[root@host ~]# cp /usr/share/doc/ldirectord-3.9.6/ldirectord.cf /etc/ha.d/
# Global Directives
checktimeout=3
checkinterval=1
fallback=127.0.0.1:80
autoreload=yes
logfile="/var/log/ldirectord.log"
quiescent=no
# Sample for an http virtual service
virtual=10.0.0.100:80
    real=172.18.68.103:80 gate 3    # DR mode, weight 3
    real=172.18.68.104:80 gate 1
    service=http                    # the service to check
    scheduler=wrr
    #persistent=600                 # persistent connections
    #netmask=255.255.255.255
    protocol=tcp
    # checktype=negotiate
    # checkport=80                  # port to check
    request="test.html"             # test page
    receive="test"                  # expected content
# Add the test page on both RSes:
[root@host ~]# echo test > /var/www/html/test.html
[root@host ~]# echo test > /var/www/html/test.html
[root@host ~]# service ldirectord start
Starting ldirectord... success
# ldirectord has generated the ipvsadm rules automatically:
[root@host ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.100:80 wrr
  -> 172.18.68.103:80             Route   3      0          0
  -> 172.18.68.104:80             Route   1      0          0
# Test, then stop httpd on one RS and watch it disappear from the table:
[root@host ~]# for i in {1..10} ; do curl 10.0.0.100 ; done
RS1
RS1
RS1
RS2
[root@host ~]# service httpd stop
[root@host ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.100:80 wrr
  -> 172.18.68.103:80             Route   3      0          0
[root@host ~]# service httpd start
[root@host ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.100:80 wrr
  -> 172.18.68.103:80             Route   3      0          0
  -> 172.18.68.104:80             Route   1      0          0
# With both RSes stopped, the fallback (sorry server) takes over:
[root@host ~]# service httpd stop
[root@host ~]# service httpd stop
[root@host ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.100:80 wrr
  -> 127.0.0.1:80                 Local   1      0          0
[root@host ~]# tail /var/log/ldirectord.log
[Thu Mar  8 09:43:09 2018|ldirectord|21342] Deleted real server: 172.18.68.103:80 (10.0.0.100:80)
[Thu Mar  8 09:43:35 2018|ldirectord|21342] Deleted real server: 172.18.68.104:80 (10.0.0.100:80)
[Thu Mar  8 09:43:35 2018|ldirectord|21342] Added fallback server: 127.0.0.1:80 (10.0.0.100:80) (Weight set to 1)
[Thu Mar  8 09:43:50 2018|ldirectord|21342] Resetting soft failure count: 172.18.68.103:80 (tcp:10.0.0.100:80)
[Thu Mar  8 09:43:50 2018|ldirectord|21342] Added real server: 172.18.68.103:80 (10.0.0.100:80) (Weight set to 3)
[Thu Mar  8 09:43:50 2018|ldirectord|21342] Deleted fallback server: 127.0.0.1:80 (10.0.0.100:80)
[Thu Mar  8 09:43:56 2018|ldirectord|21342] Resetting soft failure count: 172.18.68.104:80 (tcp:10.0.0.100:80)
[Thu Mar  8 09:43:56 2018|ldirectord|21342] Added real server: 172.18.68.104:80 (10.0.0.100:80) (Weight set to 1)
# Experiment: firewall-mark (FWM) based scheduling with ldirectord in DR mode

# On the VS, tag ports 80 and 443 of the VIP with a MARK in the mangle table:
[root@host ~]# iptables -t mangle -A PREROUTING -d 10.0.0.100 -p tcp -m multiport --dports 80,443 -j MARK --set-mark 10
[root@host ~]# iptables -nvL -t mangle
Chain PREROUTING (policy ACCEPT 119 packets, 11983 bytes)
 pkts bytes target prot opt in  out source      destination
    0     0 MARK   tcp  --  *   *   0.0.0.0/0   10.0.0.100   multiport dports 80,443 MARK set 0xa
# Point the ldirectord virtual service at the mark instead of an IP:port:
[root@host ~]# vim /etc/ha.d/ldirectord.cf
virtual=10                  # 10 is the decimal value of the mark set above (0xa)
    real=172.18.68.103 gate
    real=172.18.68.104 gate
    service=http
    scheduler=wrr
    #persistent=600
    #netmask=255.255.255.255
    #protocol=tcp
    checktype=negotiate
    checkport=80
    request="test.html"
    receive="test"
[root@host ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
FWM  10 wrr                       # the 10 from the configuration above
  -> 172.18.68.103:0              Route   3      0          0
  -> 172.18.68.104:0              Route   1      0          0
[root@host ~]# for i in {1..10} ; do sleep 0.5 ; curl -k https://10.0.0.100 ; curl 10.0.0.100 ; done
RS1
RS2
RS1
RS2
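Note that the mangle rule above lives only in kernel memory; one common way to persist it (assuming the iptables-services init scripts are in use) is:

iptables-save > /etc/sysconfig/iptables   # or: service iptables save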
# Experiment: in the setup above every port on the VIP can be scheduled; restrict scheduling to ports 80 and 443 only

# The base is the DR model with ldirectord running. First confirm that ssh to the VIP currently works:
[root@host ~]# ssh 10.0.0.100
The authenticity of host '10.0.0.100 (10.0.0.100)' can't be established.
RSA key fingerprint is 67:c6:59:f8:69:2e:a2:9c:96:cf:72:40:61:51:9c:85.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.100' (RSA) to the list of known hosts.
root@10.0.0.100's password:
Last login: Thu Mar  8 08:29:50 2018 from 172.18.0.1
[root@host ~]# exit
logout
Connection to 10.0.0.100 closed.
# On the router in the middle, add iptables policies:
[root@host ~]# iptables -A FORWARD -p tcp -m multiport --dports 80,443 -j ACCEPT
[root@host ~]# iptables -I FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT   # allow replies to already-established connections
[root@host ~]# iptables -A FORWARD -j REJECT    # reject everything else crossing the router
[root@host ~]# iptables -nvL
Chain INPUT (policy ACCEPT 22 packets, 1936 bytes)
 pkts bytes target prot opt in  out source      destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in  out source      destination
    0     0 ACCEPT all  --  *   *   0.0.0.0/0   0.0.0.0/0   state RELATED,ESTABLISHED
    0     0 ACCEPT tcp  --  *   *   0.0.0.0/0   0.0.0.0/0   multiport dports 80,443
    0     0 REJECT all  --  *   *   0.0.0.0/0   0.0.0.0/0   reject-with icmp-port-unreachable
Chain OUTPUT (policy ACCEPT 12 packets, 1648 bytes)
 pkts bytes target prot opt in  out source      destination
# HTTP still goes through; ssh no longer does:
[root@host ~]# curl 10.0.0.100
RS2
[root@host ~]# ssh 10.0.0.100
ssh: connect to host 10.0.0.100 port 22: Connection refused