Environment:
[root@db02 ~]# uname -a
Linux db02 2.6.32-696.el6.x86_64 #1 SMP Tue Mar 21 19:29:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@db02 ~]# cat /etc/redhat-release
CentOS release 6.9 (Final)
Introduction to keepalived
What does keepalived do?
Keepalived was originally written for the LVS load-balancing software, to manage and monitor the state of each service node in an LVS cluster; VRRP-based high availability was added later.
The official site of Keepalived is http://www.keepalived.org
Three important functions of the keepalived service
- managing the LVS load-balancing software
- health-checking the nodes of an LVS cluster
- providing high availability for system network services
How keepalived works
1. Keepalived high-availability pairs communicate via VRRP
1) VRRP, the Virtual Router Redundancy Protocol, was created to eliminate the single point of failure of static routing.
2) VRRP hands the routing task to one of the VRRP routers through an election mechanism.
3) VRRP uses IP multicast (default multicast address 224.0.0.18) for communication between the members of a high-availability pair.
4) In operation, the master node sends packets and the backup node receives them; when the backup stops receiving the master's packets, it starts the takeover process and takes over the master's resources. There can be several backup nodes, selected by priority, but in everyday Keepalived operations a single master/backup pair is the norm.
5) VRRP supports encrypted authentication, but the Keepalived project still recommends configuring the authentication type and password in plain text.
2. How the Keepalived service works
Keepalived high-availability pairs communicate via VRRP. VRRP decides master and backup through an election, and the master has the higher priority, so while everything is healthy the master holds all the resources and the backup simply waits. When the master fails, the backup takes over the master's resources and serves requests in its place.
Within a Keepalived pair, only the master keeps sending VRRP advertisement (multicast) packets to tell the backup it is still alive, so the backup never preempts it. When the master becomes unavailable, i.e. the backup no longer hears those packets, the backup starts the relevant services and takes over the resources so that the business keeps running. The takeover can complete in under one second.
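You can watch these advertisements on either node while the pair is running; a minimal sketch, assuming the VRRP instance is bound to eth0 as in the configurations used later in this article:

# Watch the master's VRRP advertisements on the default multicast group.
# With advert_int 1 you should see roughly one packet per second.
tcpdump -nn -i eth0 host 224.0.0.18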
Preparing the environment for keepalived deployment
Servers required: three web servers and two lb (load-balancer) servers.
Web servers (each one hosts two sites, bbs and www):
web01: 172.16.1.8 (internal) / 10.0.0.8 (external)
web02: 172.16.1.7 (internal) / 10.0.0.7 (external)
web03: 172.16.1.9 (internal) / 10.0.0.9 (external)
Unifying the web service environment
Unified web cluster configuration files (web01, web02 and web03 use identical configurations)
cat www.conf
server {
    listen       80;
    server_name  www.zxpo.com;
    location / {
        root   html/www;
        index  index.html index.htm;
    }
}
cat bbs.conf
server {
    listen       80;
    server_name  bbs.zxpo.com;
    location / {
        root   html/bbs;
        index  index.html index.htm;
    }
}
Sync the configuration to the other two web servers:
scp -rp {www.conf,bbs.conf} 172.16.1.7:/application/nginx/conf/extra/
scp -rp {www.conf,bbs.conf} 172.16.1.9:/application/nginx/conf/extra/
Unified main nginx configuration for the web servers:
[root@web01 extra]# cat ../nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log logs/access.log main;
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    include extra/www.conf;
    include extra/bbs.conf;
}
scp -rp ../nginx.conf 172.16.1.9:/application/nginx/conf/
scp -rp ../nginx.conf 172.16.1.7:/application/nginx/conf/
Test content on web01:
[root@web01 www]# for name in www bbs;do echo $name `hostname` >/application/nginx/html/$name/nana.html;done
[root@web01 www]# for name in www bbs;do cat /application/nginx/html/$name/nana.html;done
www web01
bbs web01
Test content on web02:
[root@web02 conf]# for name in www bbs;do echo $name `hostname` >/application/nginx/html/$name/nana.html;done
[root@web02 conf]# for name in www bbs;do cat /application/nginx/html/$name/nana.html;done
www web02
bbs web02
Test content on web03:
[root@web03 conf]# for name in www bbs;do echo $name `hostname` >/application/nginx/html/$name/nana.html;done
[root@web03 conf]# for name in www bbs;do cat /application/nginx/html/$name/nana.html;done
www web03
bbs web03
Web environment test results (run on the lb load-balancer server):
[root@lb01 www]# curl -H host:www.zxpo.com 10.0.0.8/nana.html
www web01
[root@lb01 www]# curl -H host:bbs.zxpo.com 10.0.0.8/nana.html
bbs web01
[root@lb01 www]# curl -H host:www.zxpo.com 10.0.0.7/nana.html
www web02
[root@lb01 www]# curl -H host:bbs.zxpo.com 10.0.0.7/nana.html
bbs web02
[root@lb01 www]# curl -H host:www.zxpo.com 10.0.0.9/nana.html
www web03
[root@lb01 www]# curl -H host:bbs.zxpo.com 10.0.0.9/nana.html
bbs web03
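The six curl checks above can also be wrapped in a small loop; a minimal sketch, run on the lb server, using the same hostnames and test file as above:

# Query both virtual hosts on every web node; each reply should name the
# site (www/bbs) and the host (web01/web02/web03) that served it.
for ip in 10.0.0.7 10.0.0.8 10.0.0.9; do
    for site in www bbs; do
        curl -s -H "host:$site.zxpo.com" http://$ip/nana.html
    done
done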
Unifying the load-balancer environment
Unified nginx reverse-proxy configuration across the load-balancer cluster
[root@lb01 conf]# cat nginx.conf    #### nginx.conf for both lb01 and lb02
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    upstream server_pools {
        server 10.0.0.7:80;
        server 10.0.0.8:80;
        server 10.0.0.9:80;
    }
    server {
        listen       80;
        server_name  www.zxpo.com;
        location / {
            proxy_pass http://server_pools;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
    server {
        listen       80;
        server_name  bbs.zxpo.com;
        location / {
            proxy_pass http://server_pools;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
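Before keepalived enters the picture, it is worth confirming that the proxy actually balances across the pool; a minimal sketch, run on lb01 itself (the proxy still listens on all addresses at this point, so localhost works here):

# With the default round-robin upstream, repeated requests should rotate
# through web01, web02 and web03.
for i in $(seq 6); do
    curl -s -H "host:www.zxpo.com" http://127.0.0.1/nana.html
done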
keepalived deployment procedure
Installing the keepalived software
yum install -y keepalived
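If you want to double-check what the package provides before starting it, something like the following works on CentOS 6 (an optional sanity check, not part of the procedure itself):

# Confirm the installed package and locate its init script and default configuration.
rpm -qa keepalived
rpm -ql keepalived | grep -E 'init.d|keepalived.conf'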
Start the service and test the default configuration
Start the keepalived service on lb01 and lb02
/etc/init.d/keepalived start
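With the stock configuration this mainly proves that the daemon starts; a minimal sketch of what can be checked on each lb node:

# Check that the daemon is running and whether any virtual address has been
# added to eth0 yet; optionally enable it at boot.
/etc/init.d/keepalived status
ip addr show eth0
chkconfig keepalived on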
Modify the configuration file
[root@lb01 conf]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id lb01
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.3/24 dev eth0 label eth0:1
    }
}
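Before dissecting this file, the change can be verified on lb01; a minimal sketch (keepalived has to be restarted so it rereads the configuration):

# Reload the new configuration and confirm that the VIP 10.0.0.3 is bound to eth0.
/etc/init.d/keepalived restart
ip addr show eth0 | grep 10.0.0.3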
Structure of the configuration file
- GLOBAL CONFIGURATION   ### global definitions (lines 01-13 of the default file)
- VRRPD CONFIGURATION    ### virtual IP configuration (lines 15-30 of the default file)
- LVS CONFIGURATION      ### configuration and management of LVS
! Configuration File for keepalived
global_defs {                              --- global configuration section
    notification_email {                   --- administrator mailboxes to notify
        110@qq.com
        110@qq.com
    }
    notification_email_from xxx@163.com    --- mailbox used to send the alert mail
    smtp_server smtp.163.com               --- mail server
    smtp_connect_timeout 30                --- mail sending timeout
    router_id a01                          --- (key parameter) identity of this keepalived host; must be unique among the keepalived hosts on the LAN
}
vrrp_instance VI_1 {                       --- VRRP settings (virtual IP configuration)
    state MASTER                           --- role (state) of this node: MASTER or BACKUP
    interface eth0                         --- network interface the virtual IP is placed on
    virtual_router_id 51                   --- identifier of the keepalived group (VRRP instance)
    priority 100                           --- election priority for master/backup (higher wins)
    advert_int 1                           --- interval between the master's multicast advertisements
    authentication {                       --- authentication between master and backup
        auth_type PASS                     --- plain-text authentication
        auth_pass 1111                     --- plain-text password
    }
    virtual_ipaddress {                    --- virtual IP address(es)
        10.0.0.3
    }
}
The configuration files we need
lb01 (master):
global_defs {
   router_id LVS_01
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.3/24 dev eth0 label eth0:1
    }
}
lb02 (backup):
global_defs {
   router_id LVS_02
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.3/24 dev eth0 label eth0:1
    }
}
Note: how the master and backup configuration files differ (a quick failover test follows this list)
- router_id differs
- state differs (MASTER vs BACKUP)
- priority differs
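Once both files are in place and keepalived has been restarted on both nodes, a simple failover test confirms that the pair behaves as expected; a minimal sketch (run each command on the node named in the comment):

# On lb01 (master): simulate a failure by stopping keepalived.
/etc/init.d/keepalived stop
# On lb02 (backup): within a few seconds the VIP should have moved here.
ip addr show eth0 | grep 10.0.0.3
# On lb01: bring keepalived back; its higher priority (150 > 100) lets it
# reclaim the VIP.
/etc/init.d/keepalived start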
Using keepalived in production
Change the nginx reverse proxy to listen only on the virtual IP
Modify the nginx reverse-proxy configuration so it listens only on the VIP address
[root@lb01 keepalived]# cat /application/nginx/conf/nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    upstream server_pools {
        server 10.0.0.7:80;
        server 10.0.0.8:80;
        server 10.0.0.9:80;
    }
    server {
        listen       10.0.0.3:80;
        server_name  www.zxpo.com;
        location / {
            proxy_pass http://server_pools;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
    server {
        listen       10.0.0.3:80;
        server_name  bbs.zxpo.com;
        location / {
            proxy_pass http://server_pools;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
    server {
        listen       10.0.0.3:80;
        server_name  blog.zxpo.com;
        location / {
            proxy_pass http://server_pools;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
At this point you could try to start nginx, but on the node that does not currently hold the VIP it will fail, because nginx cannot bind to 10.0.0.3 when that address is not configured locally; a kernel parameter has to be adjusted first.
[root@lb01 ~]# cat /proc/sys/net/ipv4/ip_nonlocal_bind
0
This parameter defaults to 0. When it is set to 1, a process may listen on an IP address even if that address is not currently present on the machine.
Kernel parameter change:
echo "net.ipv4.ip_nonlocal_bind=1" >>/etc/sysctl.conf
sysctl -p
The last line of the sysctl -p output should now read net.ipv4.ip_nonlocal_bind = 1.
Start nginx and check the listening sockets
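A minimal sketch of the start step on each lb node, assuming nginx was installed under /application/nginx so the binary path is /application/nginx/sbin/nginx (the path is an assumption based on that prefix):

# Validate the configuration first, then start nginx.
/application/nginx/sbin/nginx -t
/application/nginx/sbin/nginx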
[root@lb01 ~]# netstat -lntup |grep nginx
tcp        0      0 10.0.0.3:80                 0.0.0.0:*                   LISTEN      11279/nginx
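Finally, the whole chain can be exercised through the VIP, using the same Host-header trick as the earlier tests; run this from any machine that can reach 10.0.0.3:

# Requests to the VIP are proxied to the web pool by whichever lb node
# currently holds 10.0.0.3.
curl -H host:www.zxpo.com 10.0.0.3/nana.html
curl -H host:bbs.zxpo.com 10.0.0.3/nana.html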