18.1 Cluster Introduction  18.2 Introduction to keepalived  18.3/18.4/18.5 Configuring a High-Availability Cluster with keepalived
- 18.6 Load balancing cluster introduction
- 18.7 Introduction to LVS
- 18.8 LVS scheduling algorithms
- 18.9/18.10 Building LVS in NAT mode
- Further reading
- The three LVS modes explained: http://www.it165.net/admin/html/201401/2248.html
- The LVS scheduling algorithms: http://www.aminglinux.com/bbs/thread-7407-1-1.html
- About arp_ignore and arp_announce: http://www.cnblogs.com/lgfeng/archive/2012/10/16/2726308.html
- How LVS works: http://blog.csdn.net/pi9nc/article/details/23380589

# 18.6 Load Balancing Cluster Introduction
- Mainstream open-source load balancers include LVS, keepalived, haproxy, nginx and others
- LVS works at layer 4 of the OSI 7-layer model, nginx at layer 7; haproxy can be treated either as a layer-4 or a layer-7 balancer
- keepalived's load-balancing feature is in fact LVS
- Because LVS balances at layer 4 it can distribute traffic on ports other than 80, MySQL for example; nginx only supports http, https and mail, while haproxy can also handle MySQL-type traffic
- By comparison, a layer-4 balancer such as LVS is more stable and can sustain more requests, while a layer-7 balancer such as nginx is more flexible and can meet more customized requirements

# 18.7 Introduction to LVS
- LVS was developed by Zhang Wensong and is open source
- It is no less popular than Apache httpd; it routes and forwards at the TCP/IP level, so it is very stable and efficient
- The latest LVS version is based on Linux kernel 2.6 and has not been updated for many years
- LVS has three common modes: NAT, DR and IP Tunnel
- The core role in an LVS architecture is the director (load balancer), which distributes user requests; behind it are the servers that actually handle the requests (Real Server, rs for short)
- LVS NAT mode
  - This mode is implemented with the help of the iptables nat table
  - When a user request reaches the director, preset iptables rules forward the request packets to a back-end rs
  - Each rs must use the director's internal IP as its gateway
  - Both the request packets and the reply packets pass through the director, so the director becomes the bottleneck
  - In NAT mode only the director needs a public IP, which saves public IP addresses
  - ![mark](http://oqxf7c508.bkt.clouddn.com/blog/20171113/211634303.png?imageslim)
  - Explanation of the diagram:
    - The Load Balancer is the director: it distributes user requests to the back-end Real Servers; after a Real Server has processed a request it hands the reply back to the Load Balancer, which then returns it to the user. The drawback of this model is that when the volume of requests and replies is large, the Load Balancer is under heavy pressure; it generally copes with about ten servers at most and struggles beyond that. The advantage is that only one public IP is needed and all the real servers can stay on the internal network, which saves a lot of resources.
- LVS IP Tunnel mode
  - This mode requires a common IP, called the VIP, configured on the director and on every rs
  - The client's request is addressed to the VIP; when the director receives the packet it rewrites it, changing the destination IP to the rs's IP, so the packet reaches that rs
  - When the rs receives the packet it restores the original packet, whose destination IP is the VIP; because the VIP is configured on every rs, the rs treats the packet as its own
  - ![mark](http://oqxf7c508.bkt.clouddn.com/blog/20171113/211803012.png?imageslim)
  - Explanation of the diagram:
    - A virtual channel (the IP tunnel) is set up between the load balancer and the real servers; in effect the packet's destination IP is rewritten. The user's request carries the VIP as its destination; when the packet arrives, the load balancer changes the destination IP and, using LVS's own scheduling algorithm, decides which real server to send it to. The real server then unwraps and processes the packet and replies to the user directly via the VIP, skipping the trip back through the load balancer, so the load balancer is no longer a bottleneck.
- LVS DR mode
  - This mode also requires a common IP, the VIP, configured on the director and on every rs
  - Unlike IP Tunnel, it rewrites the packet's MAC address to the MAC address of the chosen rs
  - When the rs receives the packet it restores the original packet, whose destination IP is the VIP; because the VIP is configured on every rs, the rs treats the packet as its own
  - ![mark](http://oqxf7c508.bkt.clouddn.com/blog/20171113/211902802.png?imageslim)

# 18.8 LVS Scheduling Algorithms
- Round-Robin (rr): the simplest and easiest to understand; incoming requests are spread evenly across the rs
- Weight Round-Robin (wrr): weighted round robin; each machine can be given its own weight, and machines with a higher weight receive more requests
- Least-Connection (lc): send requests to the rs with the fewest connections
- Weight Least-Connection (wlc): least-connection with a weight, so weighted machines are preferred
- Locality-Based Least Connections (lblc)
- Locality-Based Least Connections with Replication (lblcr)
- Destination Hashing (dh)
- Source Hashing (sh)

# 18.9 Building LVS in NAT Mode (Part 1)
- NAT mode setup – preparation
  - Three machines
    - The director, also called the dispatcher (dir for short): internal 202.130, external 142.147 (VMware host-only network)
    - rs1: internal 202.132, gateway set to 202.130
    - rs2: internal 202.133, gateway set to 202.130
  - Run on all three machines:
    - systemctl stop firewalld; systemctl disable firewalld
    - systemctl start iptables; iptables -F; service iptables save (the iptables-services package must be installed first; note the unit is called iptables, not iptables-services)
- [ ] A quick refresher on changing the hostname: hostnamectl set-hostname aming-03
```
[root@localhost ~]# hostnamectl set-hostname aming-03
[root@localhost ~]# bash
[root@aming-03 ~]#
```
- The director
```
[root@aming-01 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.202.130  netmask 255.255.255.0  broadcast 192.168.202.255
        inet6 fe80::ecdd:28b7:612b:cb7  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:2e:28:f2  txqueuelen 1000  (Ethernet)
        RX packets 9208  bytes 6415236 (6.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 11214  bytes 937882 (915.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.202.150  netmask 255.255.255.0  broadcast 192.168.202.255
        ether 00:0c:29:2e:28:f2  txqueuelen 1000  (Ethernet)

ens37: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.142.147  netmask 255.255.255.0  broadcast 192.168.142.255
        inet6 fe80::20c:29ff:fe2e:28fc  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:2e:28:fc  txqueuelen 1000  (Ethernet)
        RX packets 474  bytes 43996 (42.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 238  bytes 32037 (31.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 50  bytes 4276 (4.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 50  bytes 4276 (4.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@aming-01 ~]#
```
- ![mark](http://oqxf7c508.bkt.clouddn.com/blog/20171114/004014772.png?imageslim)
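- Before looking at the real servers, it can save time to sanity-check the director's addressing from the shell. A minimal sketch (the interface names ens33/ens37 and the rs addresses are the ones planned above; adjust them to your own lab):
```
# On the director: both NICs should carry the planned addresses, and the two
# real servers should already answer on the internal (host-only) network.
ip -4 addr show ens33   # expect 192.168.202.130 (internal)
ip -4 addr show ens37   # expect 192.168.142.147 (external)
for rs in 192.168.202.132 192.168.202.133; do
    ping -c 1 -W 1 "$rs" >/dev/null && echo "$rs reachable" || echo "$rs NOT reachable"
done
```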
- rs1, gateway set to 192.168.202.130
```
[root@aming-02 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.202.132  netmask 255.255.255.0  broadcast 192.168.202.255
        inet6 fe80::4500:6d42:8612:4e53  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::ecdd:28b7:612b:cb7  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::ddac:89a0:52f8:d08d  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:58:33:e6  txqueuelen 1000  (Ethernet)
        RX packets 2300  bytes 188527 (184.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 985  bytes 105210 (102.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.202.152  netmask 255.255.255.0  broadcast 192.168.202.255
        ether 00:0c:29:58:33:e6  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 84  bytes 6884 (6.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 84  bytes 6884 (6.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@aming-02 ~]#
```
- rs2, gateway set to 192.168.202.130
```
[root@aming-03 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.202.133  netmask 255.255.255.0  broadcast 192.168.202.255
        inet6 fe80::4500:6d42:8612:4e53  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::ecdd:28b7:612b:cb7  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:9c:2b:f0  txqueuelen 1000  (Ethernet)
        RX packets 2019  bytes 173062 (169.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1969  bytes 150115 (146.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.202.153  netmask 255.255.255.0  broadcast 192.168.202.255
        ether 00:0c:29:9c:2b:f0  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 337  bytes 29100 (28.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 337  bytes 29100 (28.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@aming-03 ~]#
```
- Then the firewall has to be shut down on all three machines
- aming-01
```
[root@aming-01 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 1769 packets, 147K bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 7346 packets, 401K bytes)
 pkts bytes target     prot opt in     out     source               destination
[root@aming-01 ~]#
```
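- A quick way to confirm the firewall really is off is to check firewalld's state on every node before touching the other services. A minimal sketch to run on each of the three machines (nothing here is specific to this lab):
```
echo "== $(hostname) =="
systemctl is-active firewalld    # want "inactive" (or "unknown" once the unit is removed)
systemctl is-enabled firewalld   # want "disabled"
```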
- aming-02
```
[root@aming-02 ~]# systemctl stop firewalld
[root@aming-02 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@aming-02 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
[root@aming-02 ~]#
```
- aming-03
```
[root@aming-03 ~]# systemctl stop firewalld
[root@aming-03 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@aming-03 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
[root@aming-03 ~]#
```
- Install the iptables-services package on the third machine
```
[root@aming-03 ~]# cd /etc/yum.repos.d/
[root@aming-03 yum.repos.d]# ls
CentOS-Base.repo       CentOS-fasttrack.repo  CentOS-Vault.repo
CentOS-CR.repo         CentOS-Media.repo      epel.repo
CentOS-Debuginfo.repo  CentOS-Sources.repo    epel-testing.repo
[root@aming-03 yum.repos.d]# mv epel.repo epel.repo.1    # renamed because the epel repo points at overseas mirrors and downloads from it are slow
[root@aming-03 yum.repos.d]# yum lish |grep iptables-service
没有该命令:lish。请使用 /usr/bin/yum --help
[root@aming-03 yum.repos.d]# yum list |grep iptables-service
iptables-services.x86_64               1.4.21-18.2.el7_4                 updates
[root@aming-03 yum.repos.d]# yum install -y iptables-services
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.163.com
 * extras: centos.ustc.edu.cn
 * updates: mirrors.163.com
正在解决依赖关系
--> 正在检查事务
---> 软件包 iptables-services.x86_64.0.1.4.21-18.2.el7_4 将被 安装
--> 正在处理依赖关系 iptables = 1.4.21-18.2.el7_4,它被软件包 iptables-services-1.4.21-18.2.el7_4.x86_64 需要
--> 正在检查事务
---> 软件包 iptables.x86_64.0.1.4.21-17.el7 将被 升级
---> 软件包 iptables.x86_64.0.1.4.21-18.2.el7_4 将被 更新
--> 解决依赖关系完成

依赖关系解决

============================================================================================
 Package                 架构          版本                        源              大小
============================================================================================
正在安装:
 iptables-services       x86_64        1.4.21-18.2.el7_4           updates         51 k
为依赖而更新:
 iptables                x86_64        1.4.21-18.2.el7_4           updates        428 k

事务概要
============================================================================================
安装  1 软件包
升级           ( 1 依赖软件包)

总下载量:479 k
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
(1/2): iptables-services-1.4.21-18.2.el7_4.x86_64.rpm               |  51 kB  00:00:00
(2/2): iptables-1.4.21-18.2.el7_4.x86_64.rpm                        | 428 kB  00:00:00
--------------------------------------------------------------------------------------------
总计                                                       520 kB/s | 479 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  正在更新    : iptables-1.4.21-18.2.el7_4.x86_64                                      1/3
  正在安装    : iptables-services-1.4.21-18.2.el7_4.x86_64                             2/3
  清理        : iptables-1.4.21-17.el7.x86_64                                          3/3
  验证中      : iptables-services-1.4.21-18.2.el7_4.x86_64                             1/3
  验证中      : iptables-1.4.21-18.2.el7_4.x86_64                                      2/3
  验证中      : iptables-1.4.21-17.el7.x86_64                                          3/3

已安装:
  iptables-services.x86_64 0:1.4.21-18.2.el7_4

作为依赖被升级:
  iptables.x86_64 0:1.4.21-18.2.el7_4

完毕!
[root@aming-03 yum.repos.d]#
```
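- The same package is needed on the other real server as well. Purely as an illustration, the install could also be pushed out from the director in one go; this is a hypothetical helper that assumes root ssh access to the two rs is already set up:
```
# Hypothetical: install iptables-services on both real servers over ssh.
for host in 192.168.202.132 192.168.202.133; do
    ssh "root@$host" 'yum install -y iptables-services'
done
```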
- Install the iptables-services package on the second machine
```
[root@aming-02 ~]# yum install -y iptables-services
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.btte.net
 * epel: mirrors.ustc.edu.cn
 * extras: mirrors.163.com
 * updates: mirrors.163.com
正在解决依赖关系
--> 正在检查事务
---> 软件包 iptables-services.x86_64.0.1.4.21-18.2.el7_4 将被 安装
--> 正在处理依赖关系 iptables = 1.4.21-18.2.el7_4,它被软件包 iptables-services-1.4.21-18.2.el7_4.x86_64 需要
--> 正在检查事务
---> 软件包 iptables.x86_64.0.1.4.21-17.el7 将被 升级
---> 软件包 iptables.x86_64.0.1.4.21-18.2.el7_4 将被 更新
--> 解决依赖关系完成

依赖关系解决

=======================================================================================================
 Package                 架构          版本                        源              大小
=======================================================================================================
正在安装:
 iptables-services       x86_64        1.4.21-18.2.el7_4           updates         51 k
为依赖而更新:
 iptables                x86_64        1.4.21-18.2.el7_4           updates        428 k

事务概要
=======================================================================================================
安装  1 软件包
升级           ( 1 依赖软件包)

总下载量:479 k
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
(1/2): iptables-services-1.4.21-18.2.el7_4.x86_64.rpm               |  51 kB  00:00:00
(2/2): iptables-1.4.21-18.2.el7_4.x86_64.rpm                        | 428 kB  00:00:03
-------------------------------------------------------------------------------------------------------
总计                                                       124 kB/s | 479 kB  00:00:03
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  正在更新    : iptables-1.4.21-18.2.el7_4.x86_64                                      1/3
  正在安装    : iptables-services-1.4.21-18.2.el7_4.x86_64                             2/3
  清理        : iptables-1.4.21-17.el7.x86_64                                          3/3
  验证中      : iptables-services-1.4.21-18.2.el7_4.x86_64                             1/3
  验证中      : iptables-1.4.21-18.2.el7_4.x86_64                                      2/3
  验证中      : iptables-1.4.21-17.el7.x86_64                                          3/3

已安装:
  iptables-services.x86_64 0:1.4.21-18.2.el7_4

作为依赖被升级:
  iptables.x86_64 0:1.4.21-18.2.el7_4

完毕!
[root@aming-02 ~]#
```
- On the second machine, aming-02, rename epel.repo first so that later downloads are faster
```
[root@aming-02 ~]# cd /etc/yum.repos.d/
[root@aming-02 yum.repos.d]# ls
CentOS-Base.repo  CentOS-Debuginfo.repo  CentOS-Media.repo    CentOS-Vault.repo  epel-testing.repo
CentOS-CR.repo    CentOS-fasttrack.repo  CentOS-Sources.repo  epel.repo
[root@aming-02 yum.repos.d]# mv epel.repo epel.repo.1
```
- Check which files the package installed
```
[root@aming-02 yum.repos.d]# systemctl start iptables-services
Failed to start iptables-services.service: Unit not found.
[root@aming-02 yum.repos.d]# rpm -ql iptables-services
/etc/sysconfig/ip6tables
/etc/sysconfig/iptables
/usr/lib/systemd/system/ip6tables.service
/usr/lib/systemd/system/iptables.service
/usr/libexec/initscripts/legacy-actions/ip6tables
/usr/libexec/initscripts/legacy-actions/ip6tables/panic
/usr/libexec/initscripts/legacy-actions/ip6tables/save
/usr/libexec/initscripts/legacy-actions/iptables
/usr/libexec/initscripts/legacy-actions/iptables/panic
/usr/libexec/initscripts/legacy-actions/iptables/save
/usr/libexec/iptables
/usr/libexec/iptables/ip6tables.init
/usr/libexec/iptables/iptables.init
[root@aming-02 yum.repos.d]#
```
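- The "Unit not found" error above is only a naming mismatch: the package is called iptables-services, but the unit it ships is iptables.service (plus ip6tables.service). A generic way to see which unit files any package provides:
```
# List only the systemd unit files shipped by a package.
rpm -ql iptables-services | grep '\.service$'
```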
- Start the iptables service: systemctl start iptables
- Enable it at boot: systemctl enable iptables
```
[root@aming-02 yum.repos.d]# systemctl start iptables
[root@aming-02 yum.repos.d]# systemctl enable iptables
Created symlink from /etc/systemd/system/basic.target.wants/iptables.service to /usr/lib/systemd/system/iptables.service.
[root@aming-02 yum.repos.d]#
```
- The third machine as well
```
[root@aming-03 yum.repos.d]# systemctl start iptables
[root@aming-03 yum.repos.d]# systemctl enable iptables
Created symlink from /etc/systemd/system/basic.target.wants/iptables.service to /usr/lib/systemd/system/iptables.service.
[root@aming-03 yum.repos.d]#
```
- Look at the table to confirm the netfilter (iptables) service is now in use
```
[root@aming-03 yum.repos.d]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
   65  5144 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
    0     0 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:22
    0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT 48 packets, 4984 bytes)
 pkts bytes target     prot opt in     out     source               destination
[root@aming-03 yum.repos.d]#
```
- Flush the rules in the table so they do not interfere with the experiments that follow
```
[root@aming-03 yum.repos.d]# iptables -F
[root@aming-03 yum.repos.d]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  确定  ]
[root@aming-03 yum.repos.d]#
```
- The steps above also need checking on the other rs machine: if firewalld is running there, switch it over to the netfilter (iptables) service in the same way
- Do the same on the second machine, aming-02
```
[root@aming-02 yum.repos.d]# cd
[root@aming-02 ~]# iptables -F
[root@aming-02 ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  确定  ]
[root@aming-02 ~]#
```
- Turn SELinux off on all three machines: temporarily with setenforce 0, and permanently by editing /etc/selinux/config and setting SELINUX=disabled
- Set the gateway: rs1 and rs2 must use the director's IP 192.168.202.130 as their gateway; once that is done these two machines can no longer reach the outside network directly
- rs1 (aming-02)
```
[root@aming-02 ~]# systemctl stop firewalld
[root@aming-02 ~]# systemctl disable firewalld
[root@aming-02 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
[root@aming-02 ~]# systemctl restart network.service
[root@aming-02 ~]#
```
- rs2 (aming-03)
```
[root@aming-03 yum.repos.d]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
[root@aming-03 yum.repos.d]# systemctl restart network.service
[root@aming-03 yum.repos.d]#
```
- On the first machine, the director, check that the firewall rules are all cleared
```
[root@aming-01 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 1923 packets, 159K bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 6713 packets, 387K bytes)
 pkts bytes target     prot opt in     out     source               destination
[root@aming-01 ~]# getenforce
Permissive
[root@aming-01 ~]#
```
- That is it for the preparation work
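- Before moving on to part 2, everything the NAT setup relies on can be re-checked with a few commands. A minimal sketch using the addresses from this lab (run it on each machine; the gateway line only matters on rs1/rs2):
```
systemctl is-active firewalld    # expect "inactive" (or "unknown")
systemctl is-active iptables     # expect "active"
iptables -nvL                    # all three chains should be empty of rules
getenforce                       # expect "Permissive" or "Disabled"
ip route show default            # on rs1/rs2: "default via 192.168.202.130 ..."
```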
# 18.10 Building LVS in NAT Mode (Part 2)
- Install ipvsadm on dir: on the director, install ipvsadm, an essential tool for LVS; without it there is no way to make LVS work
- yum install -y ipvsadm
- Because the epel.repo packages come from overseas mirrors, rename that repo first, then install the package with yum
```
[root@aming-01 ~]# cd /etc/yum.repos.d
[root@aming-01 yum.repos.d]# ls
CentOS-Base.repo  CentOS-Debuginfo.repo  CentOS-Media.repo    CentOS-Vault.repo  epel-testing.repo
CentOS-CR.repo    CentOS-fasttrack.repo  CentOS-Sources.repo  epel.repo
[root@aming-01 yum.repos.d]# mv epel.repo epel.repo.1
[root@aming-01 yum.repos.d]# yum install -y ipvsadm
已加载插件:fastestmirror
Determining fastest mirrors
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
正在解决依赖关系
--> 正在检查事务
---> 软件包 ipvsadm.x86_64.0.1.27-7.el7 将被 安装
--> 解决依赖关系完成

依赖关系解决

======================================================================================================================
 Package              架构               版本                    源                大小
======================================================================================================================
正在安装:
 ipvsadm              x86_64             1.27-7.el7              base              45 k

事务概要
======================================================================================================================
安装  1 软件包

总下载量:45 k
安装大小:75 k
Downloading packages:
ipvsadm-1.27-7.el7.x86_64.rpm                                                    |  45 kB  00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  正在安装    : ipvsadm-1.27-7.el7.x86_64                                            1/1
  验证中      : ipvsadm-1.27-7.el7.x86_64                                            1/1

已安装:
  ipvsadm.x86_64 0:1.27-7.el7

完毕!
[root@aming-01 yum.repos.d]#
```
- On dir, write the script vim /usr/local/sbin/lvs_nat.sh with the content below
- Keeping the commands in a script makes maintenance much easier than running them one by one
```
[root@aming-01 yum.repos.d]# vi /usr/local/sbin/lvs_nat.sh
#! /bin/bash
# Enable packet forwarding on the director
echo 1 > /proc/sys/net/ipv4/ip_forward
# Disable ICMP redirects
echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects
# Mind the NIC names; Aming's two NICs are ens33 and ens37
echo 0 > /proc/sys/net/ipv4/conf/ens33/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/ens37/send_redirects
# Set up the NAT firewall on the director
iptables -t nat -F
iptables -t nat -X
iptables -t nat -A POSTROUTING -s 192.168.202.0/24  -j MASQUERADE
# Configure ipvsadm on the director
IPVSADM='/usr/sbin/ipvsadm'
$IPVSADM -C
$IPVSADM -A -t 192.168.142.147:80 -s lc -p 3
$IPVSADM -a -t 192.168.142.147:80 -r 192.168.202.132:80 -m -w 1
$IPVSADM -a -t 192.168.142.147:80 -r 192.168.202.133:80 -m -w 1
~
~
:wq
[root@aming-01 yum.repos.d]# vi /usr/local/sbin/lvs_nat.sh
[root@aming-01 yum.repos.d]#
```
- Run it; no output means no errors, since error messages would be printed directly
```
[root@aming-01 yum.repos.d]# sh /usr/local/sbin/lvs_nat.sh
[root@aming-01 yum.repos.d]#
```
- Install nginx on both rs
- Give the two rs different home pages so they can be told apart, i.e. curling the two rs IPs directly returns different results
- Then visit 192.168.142.147 in a browser several times and compare the results
- First check on the second machine whether the nginx service is running
```
[root@aming-02 ~]# ps aux |grep nginx
root   4280  0.0  0.0 112680   980 pts/0    S+   20:49   0:00 grep --color=auto nginx
[root@aming-02 ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:111        0.0.0.0:*          LISTEN   1/systemd
tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN   911/sshd
tcp        0      0 127.0.0.1:25       0.0.0.0:*          LISTEN   1659/master
tcp6       0      0 :::3306            :::*               LISTEN   1294/mysqld
tcp6       0      0 :::111             :::*               LISTEN   1/systemd
tcp6       0      0 :::22              :::*               LISTEN   911/sshd
tcp6       0      0 ::1:25             :::*               LISTEN   1659/master
[root@aming-02 ~]# systemctl start nginx
[root@aming-02 ~]# !ps
ps aux |grep nginx
root   4295  0.0  0.2 122792  2080 ?        Ss   20:49   0:00 nginx: master process /usr/sbin/nginx
nginx  4296  0.0  0.3 123224  3124 ?        S    20:49   0:00 nginx: worker process
root   4298  0.0  0.0 112680   980 pts/0    S+   20:49   0:00 grep --color=auto nginx
[root@aming-02 ~]# curl localhost
backup backup.
[root@aming-02 ~]# vi /usr/share/nginx/index.html
[root@aming-02 ~]# vi /usr/share/nginx/html/index.html
aming02.
~
~
:wq
[root@aming-02 ~]# vi /usr/share/nginx/html/index.html
[root@aming-02 ~]# curl localhost
aming02.
[root@aming-02 ~]#
```
- Configure the third machine the same way and start its nginx service
```
[root@aming-03 yum.repos.d]# systemctl start nginx
[root@aming-03 yum.repos.d]# vi /usr/share/nginx/html/index.html
aming03.
~
~
:wq
[root@aming-03 yum.repos.d]# vi /usr/share/nginx/html/index.html
[root@aming-03 yum.repos.d]# curl localhost
aming03.
[root@aming-03 yum.repos.d]#
```
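- Before testing through the VIP, it is worth confirming from the director that each rs answers on its internal address with its own page (a quick sketch using the addresses configured above):
```
# Run on the director: each real server should return its own test page.
curl -s 192.168.202.132    # expect: aming02.
curl -s 192.168.202.133    # expect: aming03.
```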
- Now we can test: open 192.168.142.147 in a Windows browser and refresh a few times
- ![mark](http://oqxf7c508.bkt.clouddn.com/blog/20171114/211736692.png?imageslim)
- Next, remove that 3-second setting and watch the effect again
- In `$IPVSADM -A -t 192.168.142.147:80 -s lc -p 3`, change the 3 to 0 (giving `-p 0`)
```
[root@aming-01 yum.repos.d]# !vi
vi /usr/local/sbin/lvs_nat.sh
#! /bin/bash
# Enable packet forwarding on the director
echo 1 > /proc/sys/net/ipv4/ip_forward
# Disable ICMP redirects
echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects
# Mind the NIC names; Aming's two NICs are ens33 and ens37
echo 0 > /proc/sys/net/ipv4/conf/ens33/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/ens37/send_redirects
# Set up the NAT firewall on the director
iptables -t nat -F
iptables -t nat -X
iptables -t nat -A POSTROUTING -s 192.168.202.0/24  -j MASQUERADE
# Configure ipvsadm on the director
IPVSADM='/usr/sbin/ipvsadm'
$IPVSADM -C
$IPVSADM -A -t 192.168.142.147:80 -s lc -p 0
$IPVSADM -a -t 192.168.142.147:80 -r 192.168.202.132:80 -m -w 1
$IPVSADM -a -t 192.168.142.147:80 -r 192.168.202.133:80 -m -w 1
~
:wq
[root@aming-01 yum.repos.d]# sh /usr/local/sbin/lvs_nat.sh
invalid timeout value `0' specified
Memory allocation problem
Memory allocation problem
[root@aming-01 yum.repos.d]#
```
- It reports errors: `-p 0` is rejected, so the virtual service is never created, and the two "Memory allocation problem" messages follow because the `-a` lines then have no virtual service to attach the real servers to. Check what state the director is actually in:
```
[root@aming-01 yum.repos.d]# iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 3 packets, 480 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain INPUT (policy ACCEPT 1 packets, 328 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 1 packets, 328 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain POSTROUTING (policy ACCEPT 1 packets, 328 bytes)
 pkts bytes target      prot opt in     out     source               destination
    2   152 MASQUERADE  all  --  *      *       192.168.202.0/24     0.0.0.0/0
[root@aming-01 yum.repos.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@aming-01 yum.repos.d]#
```
- Which shows that the script we just ran did not take effect
```
[root@aming-01 ~]# !sh
sh /usr/local/sbin/lvs_nat.sh
invalid timeout value `0' specified
Memory allocation problem
Memory allocation problem
[root@aming-01 ~]#
```
- "invalid timeout value `0' specified": the value cannot be 0, so drop the -p option instead
```
[root@aming-01 ~]# !vi
vi /usr/local/sbin/lvs_nat.sh
#! /bin/bash
# Enable packet forwarding on the director
echo 1 > /proc/sys/net/ipv4/ip_forward
# Disable ICMP redirects
echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects
# Mind the NIC names; Aming's two NICs are ens33 and ens37
echo 0 > /proc/sys/net/ipv4/conf/ens33/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/ens37/send_redirects
# Set up the NAT firewall on the director
iptables -t nat -F
iptables -t nat -X
iptables -t nat -A POSTROUTING -s 192.168.202.0/24  -j MASQUERADE
# Configure ipvsadm on the director
IPVSADM='/usr/sbin/ipvsadm'
$IPVSADM -C
$IPVSADM -A -t 192.168.142.147:80 -s lc
$IPVSADM -a -t 192.168.142.147:80 -r 192.168.202.132:80 -m -w 1
$IPVSADM -a -t 192.168.142.147:80 -r 192.168.202.133:80 -m -w 1
~
:wq
[root@aming-01 ~]# !sh
sh /usr/local/sbin/lvs_nat.sh
[root@aming-01 ~]#
```
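- As an aside, -p turns on session persistence: it pins a client to the same rs for the given number of seconds, which is why a timeout of 0 is rejected. If persistence is actually wanted, a positive timeout works; a sketch using the same VIP and real servers as the script above (with persistence on, repeated requests from one client stick to a single rs, which is exactly why it is removed here in order to observe the scheduling):
```
# Same virtual service as lvs_nat.sh, but keeping each client on one rs for 300 s.
IPVSADM=/usr/sbin/ipvsadm
$IPVSADM -C
$IPVSADM -A -t 192.168.142.147:80 -s lc -p 300
$IPVSADM -a -t 192.168.142.147:80 -r 192.168.202.132:80 -m -w 1
$IPVSADM -a -t 192.168.142.147:80 -r 192.168.202.133:80 -m -w 1
```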
- Look again: ipvsadm -ln now shows data
```
[root@aming-01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.142.147:80 lc
  -> 192.168.202.132:80           Masq    1      0          0
  -> 192.168.202.133:80           Masq    1      0          0
[root@aming-01 ~]#
```
- Now change the rule to `$IPVSADM -A -t 192.168.142.147:80 -s rr`; it was lc before
```
[root@aming-01 ~]# vi /usr/local/sbin/lvs_nat.sh
#! /bin/bash
# Enable packet forwarding on the director
echo 1 > /proc/sys/net/ipv4/ip_forward
# Disable ICMP redirects
echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects
# Mind the NIC names; Aming's two NICs are ens33 and ens37
echo 0 > /proc/sys/net/ipv4/conf/ens33/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/ens37/send_redirects
# Set up the NAT firewall on the director
iptables -t nat -F
iptables -t nat -X
iptables -t nat -A POSTROUTING -s 192.168.202.0/24  -j MASQUERADE
# Configure ipvsadm on the director
IPVSADM='/usr/sbin/ipvsadm'
$IPVSADM -C
$IPVSADM -A -t 192.168.142.147:80 -s rr
$IPVSADM -a -t 192.168.142.147:80 -r 192.168.202.132:80 -m -w 1
$IPVSADM -a -t 192.168.142.147:80 -r 192.168.202.133:80 -m -w 1
~
:wq
```
- Run it again
```
[root@aming-01 ~]# vi /usr/local/sbin/lvs_nat.sh
[root@aming-01 ~]# !sh
sh /usr/local/sbin/lvs_nat.sh
[root@aming-01 ~]#
```
- ![mark](http://oqxf7c508.bkt.clouddn.com/blog/20171114/212713940.png?imageslim)
- Test with curl against the external address we configured, 192.168.142.147: the distribution is very even, alternating between aming02 and aming03
```
[root@aming-01 ~]# curl 192.168.142.147
aming02.
[root@aming-01 ~]# curl 192.168.142.147
aming03.
[root@aming-01 ~]# curl 192.168.142.147
aming02.
[root@aming-01 ~]# curl 192.168.142.147
aming03.
[root@aming-01 ~]# curl 192.168.142.147
aming02.
[root@aming-01 ~]# curl 192.168.142.147
aming03.
[root@aming-01 ~]#
```
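- The same behaviour can also be confirmed on the director itself with ipvsadm's counters and connection listing (a short sketch; both are standard ipvsadm list options):
```
# On the director: per-real-server traffic counters and the live connection table.
ipvsadm -ln --stats    # Conns/InPkts/OutPkts per rs should grow evenly under rr
ipvsadm -lnc           # shows which real server each recent connection was mapped to
```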