The keepalived Service for Load Balancing

Posted by givenchy_yzl


I. Introduction to High Availability

1. What is high availability?
Generally it means two machines running exactly the same business system: when one machine goes down, the other takes over quickly, and the switchover is transparent to users.
keepalived high availability is used inside the enterprise; its main purpose is to solve single points of failure.
2. What can provide high availability?
#Hardware: typically F5
#Software: typically keepalived
3. How does keepalived implement high availability?
keepalived is built on VRRP, the Virtual Router Redundancy Protocol, which is designed to eliminate single points of failure.

How do we get automatic failover when a node dies? This is where VRRP comes in. Through software or hardware, VRRP places a virtual MAC address (VMAC) and a virtual IP address (VIP) in front of the Master and the Backup. When a PC sends requests to the VIP, it only ever records the VMAC/VIP pair in its ARP cache, no matter whether the Master or the Backup is actually handling the traffic.
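You can watch this mechanism on the wire: VRRP advertisements use IP protocol 112, and a failover is announced with gratuitous ARP for the VIP. A minimal sketch, assuming the VRRP traffic runs on eth0 and tcpdump is installed:
[root@lb01 ~]# tcpdump -nn -i eth0 'ip proto 112 or arp'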
4. Core concepts of keepalived high availability
1. How is the master node elected and the backup node decided? (election by priority)
2. If the Master fails and the Backup takes over, will the Master grab the VIP back once it recovers? (preemptive vs. non-preemptive)
3. What happens if both servers believe they are the Master? (split-brain)

II. Installing and Configuring keepalived High Availability


2. Make sure the nginx configuration on lb01 and lb02 is identical
[root@lb01 conf.d]# scp -r /etc/nginx/ssl_key 172.16.1.5:/etc/nginx/
[root@lb01 conf.d]# scp ./* 172.16.1.5:/etc/nginx/conf.d/
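A quick way to confirm the two nodes really are in sync is to compare checksums of the copied files. An optional sketch, assuming key-based ssh from lb01 to 172.16.1.5:
[root@lb01 ~]# md5sum /etc/nginx/conf.d/*.conf
[root@lb01 ~]# ssh 172.16.1.5 'md5sum /etc/nginx/conf.d/*.conf'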
3. Install keepalived
[root@lb01 ~]# yum install -y keepalived
[root@lb02 ~]# yum install -y keepalived
4. Configure the keepalived master/backup nodes
#List the package's configuration files
[root@lb01 ~]# rpm -qc keepalived
/etc/keepalived/keepalived.conf
/etc/sysconfig/keepalived

[root@lb01 ~]# vim /etc/keepalived/keepalived.conf 
global_defs {
   router_id lb01
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.3
    }
}
[root@lb02 ~]# rpm -qc keepalived
/etc/keepalived/keepalived.conf
/etc/sysconfig/keepalived

[root@lb02 ~]# vim /etc/keepalived/keepalived.conf 
global_defs {
   router_id lb02
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.3
    }
}
5. keepalived configuration file explained
#List the package's configuration files
[root@lb01 ~]# rpm -qc keepalived
/etc/keepalived/keepalived.conf
/etc/sysconfig/keepalived

#Annotated master node configuration file
[root@lb01 ~]# vim /etc/keepalived/keepalived.conf 
global_defs {					#global settings
   router_id lb01				#unique identifier for this node
}

vrrp_instance VI_1 {
    state MASTER				#role: MASTER (primary) or BACKUP (standby)
    interface eth0				#interface bound for VRRP heartbeats
    virtual_router_id 51		#virtual router ID; master and backup must share the same ID to form one group
    priority 100				#priority (the real deciding factor: the higher value wins the election)
    advert_int 3				#advertisement (heartbeat) interval, in seconds
    authentication {			#authentication
        auth_type PASS			#authentication type
        auth_pass 1111			#authentication password
    }
    virtual_ipaddress {
        192.168.1.3				#the virtual IP (VIP)
    }
}


7. Start keepalived
#Watch the log while starting
[root@lb02 ~]# tail -f /var/log/messages
#Start the backup node first
[root@lb02 ~]# systemctl start keepalived

#Watch the log while starting
[root@lb01 ~]# tail -f /var/log/messages
#Then start the master node
[root@lb01 ~]# systemctl start keepalived
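Once both nodes are up, a quick sanity check on each node (a minimal sketch; assumes a systemd-based system):
[root@lb01 ~]# systemctl status keepalived
[root@lb01 ~]# journalctl -u keepalived --no-pager | tail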
8. Configure keepalived logging
Step 1: edit /etc/sysconfig/keepalived
Change KEEPALIVED_OPTIONS="-D" to KEEPALIVED_OPTIONS="-D -d -S 0"
#-S sets the syslog facility (0 = local0)
Step 2: restart the service
service keepalived restart
Step 3: configure syslog by adding the following to /etc/rsyslog.conf (or /etc/syslog.conf on older systems)
# keepalived -S 0
local0.* /var/log/keepalived.log
Note: the "l" in local0 is a lowercase letter L, not the digit 1.
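Put together, the steps above look roughly like this (a sketch assuming CentOS 7 with rsyslog; adjust to your distribution):
[root@lb01 ~]# sed -i 's/^KEEPALIVED_OPTIONS=.*/KEEPALIVED_OPTIONS="-D -d -S 0"/' /etc/sysconfig/keepalived
[root@lb01 ~]# echo 'local0.* /var/log/keepalived.log' >> /etc/rsyslog.conf
[root@lb01 ~]# systemctl restart rsyslog keepalived
[root@lb01 ~]# tail -f /var/log/keepalived.log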

III. Preemptive vs. Non-Preemptive keepalived

1. When both nodes are running
#Node 1 has the higher priority, so the VIP sits on node 1
[root@lb01 ~]# ip addr | grep 10.0.0.3
    inet 10.0.0.3/32 scope global eth0
2. Stop keepalived on the master node
[root@lb01 ~]# systemctl stop keepalived

#Node 2 no longer receives node 1's heartbeat and takes over the VIP
[root@lb02 ~]# ip addr | grep 10.0.0.3
    inet 10.0.0.3/32 scope global eth0
3. Start the master node again
[root@lb01 ~]# systemctl start keepalived
[root@lb01 ~]# ip addr | grep 10.0.0.3
    inet 10.0.0.3/32 scope global eth0

So by default keepalived is preemptive: when the original master comes back, it reclaims the VIP.

4. Configure non-preemptive mode

1. Set the state on both nodes to BACKUP
2. Add nopreempt on both nodes (see the configs below)
3. Keep the priorities different
4. For a non-preemptive VIP, the keepalived state on both hosts must be identical

#Backup node configuration
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    nopreempt
    ... ...
}

#Master node configuration
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    nopreempt
    ... ...
}
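A quick way to confirm the non-preemptive behaviour (a sketch; it assumes the VIP is 10.0.0.3, as in the checks below):
[root@lb01 ~]# systemctl restart keepalived
[root@lb02 ~]# systemctl restart keepalived
[root@lb01 ~]# systemctl stop keepalived
[root@lb01 ~]# systemctl start keepalived
#With nopreempt, the VIP stays on lb02 even after lb01 comes back
[root@lb02 ~]# ip addr | grep 10.0.0.3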
5. Verify the MAC address switch from a Windows client
#Confirm the VIP is on node 1
[root@lb01 ~]# ip addr | grep 10.0.0.3
    inet 10.0.0.3/32 scope global eth0

#Check the MAC address for the VIP on Windows
C:\Users\admin> arp -a

#Stop keepalived on node 1
[root@lb01 ~]# systemctl stop keepalived

#Node 2 now holds the VIP
[root@lb02 ~]# ip addr | grep 10.0.0.3
    inet 10.0.0.3/32 scope global eth0

#Check the MAC address again; the VIP should now resolve to node 2's MAC
C:\Users\admin> arp -a

6. Test page access

#Configure hosts on the client
10.0.0.3 blog.linux.com
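From a Linux client you can also test without touching hosts at all (a sketch; blog.linux.com is simply the example domain above):
$ curl -I -H 'Host: blog.linux.com' http://10.0.0.3/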

IV. keepalived Split-Brain

Split-brain happens when, for some reason, the two keepalived servers cannot detect each other within the expected interval, so each one grabs the resources and serves traffic on its own, even though both machines are in fact alive and working.
1. Typical causes of split-brain
1. A loose network cable or other network failure
2. Damaged server hardware (hardware failure)
3. A firewall enabled between the master and backup servers
2. Turn on the firewall (to reproduce split-brain)
[root@lb01 ~]# systemctl start firewalld
[root@lb02 ~]# systemctl start firewalld
4. Page access still works, once http is allowed through the firewall
#With the firewall on, the browser cannot reach the site until the http service is allowed
[root@lb02 ~]# firewall-cmd --add-service=http
[root@lb02 ~]# firewall-cmd --add-service=https
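Note that firewall-cmd --add-service without --permanent only changes the runtime rules; to keep the rule across a firewall reload or a reboot, also run:
[root@lb02 ~]# firewall-cmd --add-service=http --permanent
[root@lb02 ~]# firewall-cmd --reload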
5. Dealing with split-brain
#Kill keepalived on one of the nodes
[root@lb02 ~]# systemctl stop keepalived

#Detect whether split-brain is happening
Set up ssh key trust (passwordless login) to both nodes first, then run a check script such as the one below
[root@lb01 ~]# vim check_naolie.sh
#!/bin/bash
# Requires passwordless ssh to both nodes
VIP="192.168.15.3"
MASTERIP="172.16.1.6"
BACKUPIP="172.16.1.5"

while true; do
    # Check which nodes currently hold the VIP (double quotes so ${VIP} expands locally)
    PROBE="ip a | grep '${VIP}'"
    ssh ${MASTERIP}  "${PROBE}" > /dev/null
    MASTER_STATUS=$?
    ssh ${BACKUPIP}  "${PROBE}" > /dev/null
    BACKUP_STATUS=$?
    # If both nodes hold the VIP at the same time, we have split-brain: stop keepalived on the backup
    if [[ $MASTER_STATUS -eq 0 && $BACKUP_STATUS -eq 0 ]];then
        ssh ${BACKUPIP}  "systemctl stop keepalived.service"
    fi
    sleep 2
done

-eq		equal to
-ne		not equal to
-ge		greater than or equal to
-gt		greater than
-le		less than or equal to
-lt		less than
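To keep the check running continuously, you could launch it in the background on a monitoring host (a sketch; adjust the script path and log location to taste):
[root@lb01 ~]# chmod +x check_naolie.sh
[root@lb01 ~]# nohup ./check_naolie.sh > /var/log/check_naolie.log 2>&1 &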

V. keepalived with nginx

1. Resolve the domain name to the VIP
#nginx listens on all IPs by default, so no nginx change is needed
2. nginx failover script
#If nginx crashes, user requests fail, but keepalived is still running, so the VIP stays on the broken machine and the service is affected
#So we write a script that checks nginx: if nginx is down, try to restart it first; if it still will not start, stop keepalived so the VIP can move away

[root@lb01 ~]# vim check_web.sh 
#!/bin/bash

nginxnum=`ps -ef | grep [n]ginx | wc -l`

if [ $nginxnum -eq 0 ];then
  systemctl start nginx
  sleep 3
  nginxnum=`ps -ef | grep [n]ginx | wc -l`

  if [ $nginxnum -eq 0 ];then
    systemctl stop keepalived.service
  fi
fi


An equivalent variant of the same check, using ps -C instead of ps -ef | grep:
[root@lb01 ~]# vim /server/scripts/check_web.sh
#!/bin/sh
nginxpid=$(ps -C nginx --no-header | wc -l)

#1. Check whether nginx is alive; if not, try to start it
if [ $nginxpid -eq 0 ];then
    systemctl start nginx
    sleep 3
    #2. Wait 3 seconds and check the nginx status again
    nginxpid=$(ps -C nginx --no-header | wc -l)
    #3. If nginx is still not running, stop keepalived so the VIP can fail over
    if [ $nginxpid -eq 0 ];then
        systemctl stop keepalived
    fi
fi
3. Call the script from keepalived
[root@lb01 ~]# vim /etc/keepalived/keepalived.conf 
global_defs {
   router_id lb01
}

#The script runs every 5 seconds; it must finish within 5 seconds, otherwise it will be launched again and pile up indefinitely
vrrp_script check_web {
    script "/root/check_web.sh"
    interval 5
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.3
    }
    #Run the tracked script defined above
    track_script {
        check_web
    }
}
#Make the script executable
[root@lb01 ~]# chmod +x check_web.sh
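One caveat: newer keepalived releases may refuse to run tracked scripts and log permission warnings unless script security is configured. If you hit that, adding something like the following to global_defs may be needed (a hedged sketch; verify against your keepalived version):
global_defs {
   router_id lb01
   enable_script_security
   script_user root
}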
