@Mid-term architecture build - LNMP + keepalived + custom error page
Assignment: full-site HTTPS with LNMP + keepalived + error page display
- Deploy Discuz
- Implement failover between lb01 and lb02
- Display a custom error page
Project design:
Project duration | 7 days |
---|---|
Project requirements | 1. Build an LNMP website 2. Implement real-time backup 3. Enable full-site HTTPS 4. Keep the whole service running when one lb server goes down 5. Redirect gracefully to an error page |
Architecture diagram
http://assets.processon.com/chart_image/605ade737d9c08555e528b73.png
Environment preparation
Host | Internal IP | External IP | Role | Notes |
---|---|---|---|---|
web01 | 172.16.1.7 | - | web server | disable SELinux and firewalld |
web02 | 172.16.1.8 | - | web server | - |
backup | 172.16.1.41 | - | rsync server | - |
nfs | 172.16.1.31 | - | NFS server | - |
lb01 | 172.16.1.5 | 192.168.15.5 | load balancer | - |
lb02 | 172.16.1.6 | 192.168.15.6 | load balancer | - |
db01 | 172.16.1.51 | - | database | disable SELinux and firewalld |
Unified user and disabling firewalld/SELinux
## 1. Create the unified user (in Xshell use Tools / Send input to all sessions so it runs on every host) # typing it into one session is enough
[root@backup ~]# groupadd www -g 666
[root@backup ~]# useradd www -u 666 -g 666
## 2. Disable the firewall and SELinux (again via Xshell send-to-all-sessions) # typing it into one session is enough
[root@backup ~]# systemctl disable --now firewalld
[root@backup ~]# setenforce 0
[root@backup ~]# sed -i '/^SELINUX=/c SELINUX=disabled' /etc/selinux/config
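A quick way to confirm the changes took effect on every host (a sketch):
[root@backup ~]# getenforce # Permissive now, Disabled after the next reboot
[root@backup ~]# systemctl is-active firewalld # should print inactive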
I. backup server
## 1. Install and configure rsync on the backup machine
[root@backup ~]# yum install -y rsync
[root@backup ~]# vim /etc/rsyncd.conf
uid = www
gid = www
port = 873
fake super = yes
use chroot = no
max connections = 200
timeout = 200
ignore errors
read only = false
list = true
auth users = rsync_mm
secrets file = /etc/rsync.passwd
log file = /var/log/rsyncd.log
#####################################
[data]
comment = "welcome to database"
path = /data
[backup]
comment = "welcome to file"
path = /backup
[database]
comment = "welcome to database"
path = /database
## 2. Create the password file and restrict its permissions
[root@backup ~]# echo "rsync_mm:123" > /etc/rsync.passwd
[root@backup ~]# chmod 600 /etc/rsync.passwd # the secrets file must not be readable by other users
## 3. Create the module directories and set ownership
[root@backup ~]# mkdir -p /{data,database,backup}
[root@backup ~]# chown -R www.www /data*
[root@backup ~]# chown -R www.www /backup
## 4. Start the rsync service
[root@backup ~]# systemctl enable --now rsyncd
[root@backup ~]# ps -ef |grep rsyncd # confirm the rsync daemon is running
root 25733 1 0 15:24 ? 00:00:00 /usr/bin/rsync --daemon --nodetach
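To verify the daemon end to end, a test file can be pushed from any other machine on 172.16.1.0/24 (a sketch; the file name is arbitrary and the password is supplied via the RSYNC_PASSWORD environment variable):
[root@nfs ~]# export RSYNC_PASSWORD=123
[root@nfs ~]# echo test > /tmp/rsync_test.txt
[root@nfs ~]# rsync -avz /tmp/rsync_test.txt rsync_mm@172.16.1.41::backup
[root@backup ~]# ls /backup/ # rsync_test.txt should be listed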
II. nfs server
## 1. Install the NFS and rpcbind packages
[root@nfs ~]# yum install -y nfs-utils rpcbind
## 2. Configure the NFS exports
[root@nfs data_conf]# cat /etc/exports
/data_wp 172.16.1.0/24(rw,sync,all_squash,anonuid=666,anongid=666)
/data_mm 172.16.1.0/24(rw,sync,all_squash,anonuid=666,anongid=666)
/data_conf 172.16.1.0/24(rw,sync,all_squash,anonuid=666,anongid=666)
## 3. Create the export directories and set ownership
[root@nfs ~]# mkdir -p /{data_wp,data_mm,data_conf}
[root@nfs ~]# chown -R www.www /data*
## 4. Start the NFS service and verify the configuration
[root@nfs ~]# systemctl enable --now nfs rpcbind # on CentOS 7 rpcbind is already installed and running by default
[root@nfs data_conf]# showmount -e
Export list for nfs:
/data_conf 172.16.1.0/24
/data_mm 172.16.1.0/24
/data_wp 172.16.1.0/24
[root@nfs ~]# cat /var/lib/nfs/etab # another way to confirm the exports are active
# 5. Real-time sync to the backup server with sersync
## 1. Upload the sersync package and extract it to the target directory
[root@nfs ~]# rz
-rw-r--r-- 1 root root 727290 Apr 17 17:40 sersync2.5.4_64bit_binary_stable_final.tar.gz
[root@nfs ~]# tar xf sersync2.5.4_64bit_binary_stable_final.tar.gz -C /usr/local/
[root@nfs local]# mv GNU-Linux-x86 sersync2
[root@nfs sersync2]# ll
total 1772
-rwxr-xr-x 1 root root 2210 May 11 21:55 confxml.xml
-rwxr-xr-x 1 root root 1810128 Oct 26 2011 sersync2
## 2. Edit the sersync confxml.xml configuration file
[root@nfs sersync2]# cat confxml.xml
...
<inotify>
<delete start="true"/>
<createFolder start="true"/>
<createFile start="true"/>
<closeWrite start="true"/>
<moveFrom start="true"/>
<moveTo start="true"/>
<attrib start="true"/>
<modify start="true"/>
</inotify>
<sersync>
<localpath watch="/data_wp">
<remote ip="172.16.1.41" name="data"/>
<!--<remote ip="192.168.8.39" name="tongbu"/>-->
<!--<remote ip="192.168.8.40" name="tongbu"/>-->
</localpath>
<rsync>
<commonParams params="-az"/>
<auth start="true" users="rsync_mm" passwordfile="/etc/rsync.passwd"/>
...
## 3. Create and secure the client-side rsync password file
[root@nfs sersync2]# vim /etc/rsync.passwd
123
[root@nfs sersync2]# chmod 600 /etc/rsync.passwd
-rw------- 1 root root 4 May 11 21:55 /etc/rsync.passwd
## 4. Start the sersync service
[root@nfs sersync2]# ./sersync2 -dro ./confxml.xml
## 5. Make sure the nfs and rpcbind services are enabled and started
[root@nfs sersync2]# systemctl enable --now rpcbind nfs
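A quick sanity check of the real-time sync at this point (a sketch; the file name is arbitrary):
[root@nfs ~]# touch /data_wp/sersync_test.txt
[root@backup ~]# ls /data/ # sersync_test.txt should appear here within a few seconds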
III. Configure the web servers
1. Install nginx (official repo), PHP, and mariadb-server on web01 and web02
## 0. In Xshell use Tools / Send input to all sessions for the following # run it on the web cluster machines only - important
Web cluster machines: web01, web02
## 1. Configure the official nginx repo and the PHP (Webtatic) repo
## nginx official repo
[root@web01 ~]# vim /etc/yum.repos.d/nginx.repo
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/7/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
## PHP repo (Webtatic)
[root@web01 conf.d]# cat /etc/yum.repos.d/php.repo
[php-webtatic]
name = PHP Repository
baseurl = http://us-east.repo.webtatic.com/yum/el7/x86_64/
gpgcheck = 0
## 2. Install nginx, PHP, and mariadb
[root@web01 conf.d]# yum remove php-mysql-5.4 php php-fpm php-common -y ### important: remove any previously installed PHP packages first
[root@web01 conf.d]# yum -y install mariadb-server nginx
[root@web01 conf.d]# yum -y install php71w php71w-cli php71w-common php71w-devel php71w-embedded php71w-gd php71w-mcrypt php71w-mbstring php71w-pdo php71w-xml php71w-fpm php71w-mysqlnd php71w-opcache php71w-pecl-memcached php71w-pecl-redis php71w-pecl-mongodb
## 3. Adjust the nginx and PHP configuration
[root@web01 conf.d]# vim /etc/nginx/nginx.conf
user www; # run worker processes as the www user
worker_processes auto;
worker_cpu_affinity auto; # CPU affinity optimization
http {
client_max_body_size 200m; # allow uploads up to 200m
access_log /var/log/nginx/access.log main;
charset utf-8; # character set
...
[root@web01 conf.d]# vim /etc/php-fpm.d/www.conf
...
user = www # run php-fpm as the www user
group = www # run php-fpm as the www group
...
[root@web01 ~]# vim /etc/php.ini
...
upload_max_filesize = 200M # allow uploads up to 200M
post_max_size = 200M # allow POST bodies up to 200M
...
## 4. Start nginx and php-fpm
[root@web01 conf.d]# systemctl enable --now nginx php-fpm
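A quick check that both services came up (a sketch; nginx listens on port 80, php-fpm on 127.0.0.1:9000):
[root@web01 ~]# ss -lntp | grep -E ':80|:9000'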
2. Deploy the Discuz forum on the web cluster
## 0. In Xshell use Tools / Send input to all sessions for the following # run it on the web cluster machines only - important
Web cluster machines: web01, web02
## 1. Check the NFS exports from web01 and web02
[root@web01 conf.d]# showmount -e 172.16.1.31
Export list for 172.16.1.31:
/data_conf 172.16.1.0/24
/data_mm 172.16.1.0/24
/data_wp 172.16.1.0/24
[root@web02 ~]# showmount -e 172.16.1.31
Export list for 172.16.1.31:
/data_conf 172.16.1.0/24
/data_mm 172.16.1.0/24
/data_wp 172.16.1.0/24
## 2. Create the site root directory
[root@web01 ~]# mkdir /mm
## 3. Create the discuz directory
[root@web01 ~]# mkdir /mm/discuz
## 4. Change into the site directory
[root@web01 ~]# cd /mm/
[root@web01 mm]# ll
total 0
drwxr-xr-x 2 root root 6 May 14 17:30 discuz
## 5. Mount the NFS exports on web01
[root@web01 ~]# mount -t nfs 172.16.1.31:/data_conf /etc/nginx/conf.d/
[root@web01 ~]# mount -t nfs 172.16.1.31:/data_mm /mm # do not run this while your shell is inside /mm; mount /mm before its subdirectories, since mounting over /mm hides whatever was created under it
[root@web01 ~]# mount -t nfs 172.16.1.31:/data_wp /mm/discuz/upload/data/attachment/forum/ # this path only exists after the Discuz code is unpacked (step 6), so run this mount afterwards
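To make these mounts survive a reboot, they can also be recorded in /etc/fstab (a sketch; the _netdev option is an assumption so the mounts wait for the network):
172.16.1.31:/data_conf /etc/nginx/conf.d nfs defaults,_netdev 0 0
172.16.1.31:/data_mm /mm nfs defaults,_netdev 0 0
172.16.1.31:/data_wp /mm/discuz/upload/data/attachment/forum nfs defaults,_netdev 0 0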
## 6. Upload and unpack the code archive and the error-page image
[root@web01 ~]# rz
-rw-r--r--. 1 root root 10829853 Dec 7 12:04 Discuz_X3.3_SC_GBK.zip
[root@web01 ~]# unzip Discuz_X3.3_SC_GBK.zip -d /mm/discuz/
[root@web01 upload]# cd /mm/discuz/upload/
[root@web01 upload]# rz
10.jpg
[root@web01 ~]# chown -R www.www /mm/discuz/
## 7. Configure the nginx vhost for Discuz
[root@web01 conf.d]# vim linux12mm.discuz.https.com.conf
server {
listen 80;
server_name linux12mm.discuz.com;
root /mm/discuz/upload;
location / {
index index.php;
error_page 404 403 /10.jpg; # make sure 10.jpg exists under /mm/discuz/upload/
}
location ~* \.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTPS on; # tell PHP the request arrived over HTTPS (TLS is terminated on the load balancer)
include fastcgi_params;
}
}
## 8. Create a self-signed HTTPS certificate (create it on one web machine only, otherwise the certificates would differ; push it to the others afterwards)
[root@web01 ~]# mkdir /etc/nginx/ssl_key
[root@web01 ~]# cd /etc/nginx/ssl_key/
## Note: nginx must include the --with-http_ssl_module module (check with nginx -V)
[root@web01 ssl_key]# openssl genrsa -idea -out server.key 2048 # the passphrase must be at least 4 characters
[root@web01 ssl_key]# openssl req -days 36500 -x509 -sha256 -nodes -newkey rsa:2048 -keyout server.key -out server.crt
## Note: just press Enter through all the prompts
## 9. These two files must exist for HTTPS access
[root@web01 nginx]# cd ssl_key/
[root@web01 ssl_key]# ll
total 8
-rw-r--r-- 1 root root 1249 May 8 19:18 server.crt
-rw-r--r-- 1 root root 1704 May 8 19:18 server.key
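The generated certificate can be sanity-checked before it is pushed to the other machines (a sketch):
[root@web01 ssl_key]# openssl x509 -in server.crt -noout -subject -dates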
## 10. Run nginx -t on web01, then restart
[root@web01 conf.d]# nginx -t
[root@web01 conf.d]# systemctl restart nginx php-fpm
# run this on both web machines via Xshell send-to-all-sessions, otherwise web02 ends up with nothing (Tools / Send input to all sessions)
# web01 must serve the site correctly on its own, otherwise load balancing cannot work
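Once nginx has been reloaded with the new vhost, the error page can be spot-checked directly on web01 (a sketch; the request path is arbitrary and simply must not exist):
[root@web01 ~]# curl -s -o /dev/null -w '%{http_code} %{content_type}\n' -H 'Host: linux12mm.discuz.com' http://127.0.0.1/does-not-exist # expect roughly: 404 image/jpeg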
3. web02 machine
## 0. Push the pieces from web01 over to web02
## 1. Push the certificates from web01 to web02
[root@web01 ~]# scp -r /etc/nginx/ssl_key 172.16.1.8:/etc/nginx/
## 2. Check the certificates on web02
[root@web02 mm]# ll /etc/nginx/ssl_key/
-rw-r--r-- 1 root root 1249 May 8 19:25 server.crt
-rw-r--r-- 1 root root 1704 May 8 19:25 server.key
## 3. Check that web02 has the same NFS mounts as web01
[root@web02 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/centos_mm-root 103754244 2359108 101395136 3% /
devtmpfs 485828 0 485828 0% /dev
tmpfs 497948 0 497948 0% /dev/shm
tmpfs 497948 7764 490184 2% /run
tmpfs 497948 0 497948 0% /sys/fs/cgroup
/dev/sda1 1038336 135504 902832 14% /boot
tmpfs 99592 0 99592 0% /run/user/0
172.16.1.31:/data_conf 103754368 2157824 101596544 3% /etc/nginx/conf.d
172.16.1.31:/data_mm 103754368 2157824 101596544 3% /mm
172.16.1.31:/data_wp 103754368 2157824 101596544 3% /mm/discuz/upload/data/attachment/forum
## 4. Run nginx -t on web02, then restart
[root@web02 ~]# nginx -t
[root@web02 ~]# systemctl restart nginx php-fpm
# web02 must serve the site correctly on its own, otherwise load balancing cannot work
IV. db01 database server
## 1. Install the software
[root@db01 ~]# yum install mariadb-server -y
## 2. Start the database
[root@db01 ~]# systemctl enable --now mariadb
## 3. Set the root password and log in
[root@db01 ~]# mysqladmin -uroot password '123'
[root@db01 ~]# mysql -uroot -p123
## 4. Create the application database
MariaDB [(none)]> create database discuz;
Query OK, 1 row affected (0.00 sec)
## 5. Create users and grant privileges
MariaDB [(none)]>grant all on discuz.* to di@'172.16.1.%' identified by '123';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all on *.* to root@'172.16.1.%' identified by '123';
## 6. Flush privileges
MariaDB [mysql]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
## 7. Database backup script
[root@db01 ~]# vim mysql_jump.sh
#!/bin/bash
DATE=`date +%F`                                           # e.g. 2021-05-14
BACKUP="/database"
[ -d $BACKUP ] || mkdir -p $BACKUP                        # make sure the local backup directory exists
cd $BACKUP
mysqldump -uroot -p123 --all-databases --single-transaction > mysql-all-${DATE}.sql
tar -czf mysql-all-${DATE}.tar.gz mysql-all-${DATE}.sql   # compress the dump
rm -f mysql-all-${DATE}.sql                               # keep only the tarball
export RSYNC_PASSWORD=123
rsync -az mysql-all-${DATE}.tar.gz rsync_mm@172.16.1.41::database   # push to the backup server's [database] module
## 8. Make sure the database service is enabled and running
[root@db01 ~]# systemctl enable --now mariadb
## 9. Schedule the backup with cron
[root@db01 ~]# crontab -l
00 00 * * * /root/mysql_jump.sh
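For completeness, a restore sketch (the date in the file name is hypothetical; it assumes the tarball is pulled back from the backup server's [database] module):
[root@db01 ~]# export RSYNC_PASSWORD=123
[root@db01 ~]# rsync -az rsync_mm@172.16.1.41::database/mysql-all-2021-05-14.tar.gz /tmp/
[root@db01 ~]# tar xzf /tmp/mysql-all-2021-05-14.tar.gz -C /tmp/
[root@db01 ~]# mysql -uroot -p123 < /tmp/mysql-all-2021-05-14.sql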
V. Configure load balancer lb01
Preparation on the lb01 and lb02 load balancer machines:
## 1. Push the nginx repo file from web01 to lb01 and lb02
[root@web01 conf.d]# scp /etc/yum.repos.d/nginx.repo 172.16.1.5:/etc/yum.repos.d/
[root@web01 conf.d]# scp /etc/yum.repos.d/nginx.repo 172.16.1.6:/etc/yum.repos.d/
## 2. Install nginx on the lb01 and lb02 load balancers
[root@lb01 conf.d]# yum -y install nginx # install nginx on lb01
[root@lb02 conf.d]# yum -y install nginx # install nginx on lb02
## 3. Push the nginx.conf from web01 to lb01 and lb02
[root@web01 ~]# scp -r /etc/nginx/nginx.conf 172.16.1.5:/etc/nginx/nginx.conf
[root@web01 ~]# scp -r /etc/nginx/nginx.conf 172.16.1.6:/etc/nginx/nginx.conf
## 4. Push the certificates from web01 to the lb01 and lb02 load balancers
[root@web01 ~]# scp -r /etc/nginx/ssl_key 172.16.1.5:/etc/nginx/
[root@web01 ~]# scp -r /etc/nginx/ssl_key 172.16.1.6:/etc/nginx/
## 5. Check on lb01 and lb02 that the files above arrived
[root@lb01 yum.repos.d]# ll
-rw-r--r-- 1 root root 378 May 14 19:34 nginx.repo
[root@lb01 yum.repos.d]# ll /etc/nginx/nginx.conf
-rw-r--r-- 1 root root 719 May 14 19:37 /etc/nginx/nginx.conf
[root@lb01 yum.repos.d]# ll /etc/nginx/ssl_key/
-rw-r--r-- 1 root root 1220 May 14 19:45 server.crt
-rw-r--r-- 1 root root 1704 May 14 19:45 server.key
[root@lb02 yum.repos.d]# ll
-rw-r--r-- 1 root root 378 May 14 19:34 nginx.repo
[root@lb02 yum.repos.d]# ll /etc/nginx/nginx.conf
-rw-r--r-- 1 root root 719 May 14 19:37 /etc/nginx/nginx.conf
[root@lb02 yum.repos.d]# ll /etc/nginx/ssl_key/
-rw-r--r-- 1 root root 1220 May 14 19:45 server.crt
-rw-r--r-- 1 root root 1704 May 14 19:45 server.key
1. Configure load balancing on lb01
## 1. Create the nginx proxy tuning file on lb01
[root@lb01 ssl_key]# vim /etc/nginx/proxy_params
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_connect_timeout 20s;
proxy_read_timeout 20s;
proxy_send_timeout 20s;
proxy_buffering on;
proxy_buffer_size 8k;
proxy_buffers 8 8k;
proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
## 2. Create the nginx vhost on lb01
[root@lb01 conf.d]# vim linux12mm.discuz.com.conf
upstream blog {
server 172.16.1.7;
server 172.16.1.8;
}
server {
listen 80;
server_name linux12mm.discuz.com;
rewrite (.*) https://$server_name$1;
}
server {
listen 443 ssl;
server_name linux12mm.discuz.com;
ssl_certificate /etc/nginx/ssl_key/server.crt;
ssl_certificate_key /etc/nginx/ssl_key/server.key;
location / {
proxy_pass http://blog;
include proxy_params;
}
}
## 3. Run nginx -t on lb01, then start nginx
[root@lb01 ~]# nginx -t
[root@lb01 ~]# systemctl enable --now nginx
## 4. Configure the local hosts file
192.168.15.5 linux12mm.discuz.com
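With the vhost in place, the balancer can be checked from lb01 itself (a sketch; -k accepts the self-signed certificate and --resolve pins the name to the external address):
[root@lb01 ~]# curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: linux12mm.discuz.com' http://192.168.15.5/ # expect a 302 redirect to https
[root@lb01 ~]# curl -ks -o /dev/null -w '%{http_code}\n' --resolve linux12mm.discuz.com:443:192.168.15.5 https://linux12mm.discuz.com/ # expect 200 (a Discuz redirect is also possible before the forum is installed)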
2. Configure load balancer lb02
## 1. Push the proxy tuning file from lb01 to lb02
[root@lb01 conf.d]# scp /etc/nginx/proxy_params 172.16.1.6:/etc/nginx/
[root@lb02 ~]# ll /etc/nginx/proxy_params
-rw-r--r-- 1 root root 344 Apr 30 16:33 /etc/nginx/proxy_params
## 2. Push the vhost config from lb01 to lb02
[root@lb01 conf.d]# scp linux12mm.discuz.com.conf 172.16.1.6:/etc/nginx/conf.d/
[root@lb02 conf.d]# ll
-rw-r--r-- 1 root root 433 May 14 19:56 linux12mm.discuz.com.conf
## 3. Run nginx -t on lb02, then start nginx
[root@lb02 ~]# nginx -t
[root@lb02 ~]# systemctl enable --now nginx
## 4. Configure the local hosts file
192.168.15.5 linux12mm.discuz.com
192.168.15.6 linux12mm.discuz.com
# lb01 and lb02 are configured identically, so either one can serve the site; they must be identical before keepalived high availability can be layered on top
VI. keepalived failover for the lb01 and lb02 load balancers
Preparation on the lb01 and lb02 load balancer machines
## 1. Install keepalived on lb01 and lb02
[root@lb01 conf.d]# yum -y install keepalived
[root@lb02 conf.d]# yum -y install keepalived
## 2. Locate the keepalived configuration files on lb01 and lb02 (rpm -qc keepalived lists them)
[root@lb01 conf.d]# rpm -qc keepalived
/etc/keepalived/keepalived.conf
/etc/sysconfig/keepalived
## 3. Back up the keepalived configuration on lb01 and lb02
[root@lb01 keepalived]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@lb01 keepalived]# ls
-rw-r--r-- 1 root root 3598 Oct 1 2020 keepalived.conf
-rw-r--r-- 1 root root 3598 May 14 20:11 keepalived.conf.bak
[root@lb02 keepalived]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@lb02 keepalived]# ls
-rw-r--r-- 1 root root 3598 Oct 1 2020 keepalived.conf
-rw-r--r-- 1 root root 3598 May 14 20:11 keepalived.conf.bak
## 4. Move the default nginx config out of the way on lb01 and lb02 (if left in place, nginx falls back to the first server block it finds under conf.d)
[root@lb01 conf.d]# mkdir /etc/nginx/conf.d/backup
[root@lb01 conf.d]# mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/backup/
[root@lb01 conf.d]# ll
drwxr-xr-x 2 root root 26 May 14 20:24 backup
-rw-r--r-- 1 root root 433 May 14 19:50 linux12mm.discuz.com.conf
[root@lb02 conf.d]# mkdir /etc/nginx/conf.d/backup
[root@lb02 conf.d]# mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/backup/
[root@lb02 conf.d]# ll
drwxr-xr-x 2 root root 26 May 14 20:24 backup
-rw-r--r-- 1 root root 433 May 14 19:50 linux12mm.discuz.com.conf
## 5. Configure keepalived logging on lb01
# 1 Edit the keepalived logging settings on lb01
[root@lb01 ~]# vim /etc/sysconfig/keepalived
KEEPALIVED_OPTIONS="-D -d -S 0"
[root@lb01 ~]# vim /etc/rsyslog.conf
local0.* /var/log/keepalived.log
## 6. Push the keepalived logging configuration from lb01 to lb02
[root@lb01 ~]# scp /etc/sysconfig/keepalived 172.16.1.6:/etc/sysconfig/keepalived
[root@lb01 ~]# scp /etc/rsyslog.conf 172.16.1.6:/etc/rsyslog.conf
[root@lb01 ~]# ll /etc/rsyslog.conf
-rw-r--r-- 1 root root 3312 May 14 20:52 /etc/rsyslog.conf
[root@lb01 ~]# ll /etc/sysconfig/keepalived
-rw-r--r-- 1 root root 675 May 14 20:47 /etc/sysconfig/keepalived
[root@lb02 ~]# ll /etc/rsyslog.conf
-rw-r--r-- 1 root root 3312 May 14 20:52 /etc/rsyslog.conf
[root@lb02 ~]# ll /etc/sysconfig/keepalived
-rw-r--r-- 1 root root 675 May 14 20:47 /etc/sysconfig/keepalived
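For the log settings to take effect, rsyslog needs to be reloaded on both load balancers (an assumed extra step; keepalived itself is started later):
[root@lb01 ~]# systemctl restart rsyslog
[root@lb02 ~]# systemctl restart rsyslog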
2. Configure keepalived on lb01 (non-preemptive)
## 1. Configure the master node (non-preemptive mode)
[root@lb01 keepalived]# vim /etc/keepalived/keepalived.conf
global_defs {
router_id lb01
}
vrrp_script check_web {
script "/root/check_web.sh"
interval 5
}
vrrp_instance VI_1 {
state BACKUP # use state MASTER here for preemptive mode (master node)
nopreempt # remove nopreempt for preemptive mode
interface eth0
virtual_router_id 50
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.15.102
}
track_script {
check_web
}
}
## 2. nginx health-check script called by keepalived
[root@lb01 ~]# vim check_web.sh
#!/bin/sh
nginxpid=$(ps -ef | grep [n]ginx | wc -l)       # count running nginx processes (the [n] excludes the grep itself)
if [ $nginxpid -eq 0 ];then
    systemctl restart nginx &>/dev/null         # nginx is down: try to restart it
    sleep 3
    nginxpid=$(ps -ef | grep [n]ginx | wc -l)
    if [ $nginxpid -eq 0 ];then
        systemctl stop keepalived               # still down: stop keepalived so the VIP moves to the other node
    fi
fi
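keepalived has to be able to execute the script referenced in vrrp_script, so it should be made executable (an assumed step; alternatively the script line could invoke it via sh):
[root@lb01 ~]# chmod +x /root/check_web.sh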
## 3. Start keepalived
[root@lb01 ~]# systemctl enable --now keepalived
## 4. Notes on the non-preemptive keepalived configuration
1. Both nodes must be configured with state BACKUP.
2. Both nodes must include the nopreempt option.
3. One node's priority must be higher than the other's.
Once nopreempt is enabled on both servers, both must use state BACKUP; the only thing that distinguishes them is the priority.
3. Configure keepalived on lb02
## 1. Configure the backup node (non-preemptive mode)
[root@lb02 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
router_id lb02
}
vrrp_script check_web {
script "/root/check_web.sh"
interval 5
}
vrrp_instance VI_1 {
state BACKUP # use state MASTER on the master node for preemptive mode
nopreempt # remove nopreempt for preemptive mode
interface eth0
virtual_router_id 50
priority 80
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.15.102
}
track_script {
check_web
}
}
## 2. nginx health-check script called by keepalived
[root@lb02 ~]# vim check_web.sh
#!/bin/sh
nginxpid=$(ps -ef | grep [n]ginx | wc -l)       # count running nginx processes (the [n] excludes the grep itself)
if [ $nginxpid -eq 0 ];then
    systemctl restart nginx &>/dev/null         # nginx is down: try to restart it
    sleep 3
    nginxpid=$(ps -ef | grep [n]ginx | wc -l)
    if [ $nginxpid -eq 0 ];then
        systemctl stop keepalived               # still down: stop keepalived so the VIP moves to the other node
    fi
fi
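As on lb01, the script needs to be executable so keepalived can run it (an assumed step):
[root@lb02 ~]# chmod +x /root/check_web.sh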
## 3. Start keepalived
[root@lb02 ~]# systemctl enable --now keepalived
4. Testing keepalived master/backup failover
## 1. With both nodes up, node 1 has the higher priority, so only node 1 holds the VIP and node 2 has none
[root@lb01 ~]# ip addr | grep 192.168.15.102
inet 192.168.15.102/32 scope global eth0
[root@lb02 ~]# ip addr | grep 192.168.15.102
## 2. When keepalived on node 1 goes down, node 2 automatically takes over its work, i.e. the VIP
[root@lb01 ~]# systemctl stop keepalived.service
[root@lb01 ~]# ip addr | grep 192.168.15.102
[root@lb02 ~]# ip addr | grep 192.168.15.102
inet 192.168.15.102/32 scope global eth0
## 3. Preemptive vs non-preemptive keepalived
# 1. Preemptive
In preemptive mode, when the machine holding the VIP fails, the VIP floats to the backup node; once the master recovers it takes the VIP back. keepalived works in preemptive mode by default.
The master node uses state MASTER, the backup node state BACKUP.
# 2. Non-preemptive
In non-preemptive mode, when the failed master comes back up it does not take the VIP back.
Both nodes must be configured with state BACKUP, and both must include the nopreempt option.
## 4. Local hosts configuration
192.168.15.5 linux12mm.discuz.com
192.168.15.6 linux12mm.discuz.com
5. keepalived split-brain resolution script
## 1. Handle the case where both the master and the backup node hold the VIP at the same time (detected by a script)
## 2. If the firewall is enabled, the site cannot be reached from a browser; allow the http and https services
[root@lb01 ~]# firewall-cmd --add-service=http
[root@lb01 ~]# firewall-cmd --add-service=https
[root@lb01 ~]# vi check_vrrp.sh
#!/bin/bash
# requires passwordless SSH from this host to both load balancers
VIP="192.168.15.102"
MASTERIP="172.16.1.5"    # lb01 (priority 100)
BACKUPIP="172.16.1.6"    # lb02 (priority 80)
while true; do
    # probe whether each node currently holds the VIP
    PROBE="ip a | grep ${VIP}"   # double quotes so the VIP expands locally before being sent over ssh
    ssh ${MASTERIP} "${PROBE}" > /dev/null
    MASTER_STATU=$?
    ssh ${BACKUPIP} "${PROBE}" > /dev/null
    BACKUP_STATU=$?
    # both nodes holding the VIP at once means split-brain: stop keepalived on the backup
    if [[ $MASTER_STATU -eq 0 && $BACKUP_STATU -eq 0 ]];then
        ssh ${BACKUPIP} "systemctl stop keepalived.service"
    fi
    sleep 2
done
## 3. Push the split-brain script from lb01 to lb02
[root@lb01 ~]# scp check_vrrp.sh 172.16.1.6:/root/
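The monitor loops forever, so it has to be launched in the background on whichever machine does the watching (a sketch; the log path is arbitrary):
[root@lb01 ~]# nohup bash /root/check_vrrp.sh >> /var/log/check_vrrp.log 2>&1 &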
6. Browser access test
192.168.15.5 linux12mm.discuz.com
192.168.15.6 linux12mm.discuz.com
192.168.15.102
## If all three of the above serve the linux12mm.discuz.com content, the setup is working