corosync+pacemaker+drbd+mfs High Availability
MFS (MooseFS) is a network distributed file system: files are stored across multiple servers but presented to clients as a single, unified namespace. The idea here is to make the two MFS master servers highly available. First set up DRBD, then compile and install mfsmaster into the directory mounted on the DRBD device, so DRBD carries the mfsmaster configuration and metadata over to the other node. After the client has mounted the file system, the master can fail over and the client still sees the same content.
Environment:
mfsmaster:       192.168.40.12
                 192.168.40.19
mfschunkserver:  192.168.40.146
                 192.168.40.147
mfsclient:       192.168.40.5
1. First set up passwordless SSH trust between the hosts: generate an SSH key pair on each host and copy it to every other host:
ssh-keygen
ssh-copy-id IPaddr
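For example, a small loop over the addresses listed in the environment section saves some typing (a convenience sketch, not part of the original post):

# run on each host after ssh-keygen; host list taken from the environment section above
for h in 192.168.40.12 192.168.40.19 192.168.40.146 192.168.40.147 192.168.40.5; do
    ssh-copy-id root@$h
done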
2. Set up time synchronization with a cron job: crontab -e
[[email protected] ~]# crontab -l
*/5 * * * * /usr/sbin/ntpdate 0.cn.pool.ntp.org
3. Install the remaining software first: drbd, corosync, crmsh, pacemaker, and pcs.
The installation and configuration of these packages were covered in earlier posts.
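For reference, a rough install-and-bootstrap sketch on CentOS 7 might look like the following; the package names, repository, and pcs bootstrap here are assumptions on my part (the DRBD packages usually come from ELRepo, and crmsh is not in the base repos and was built separately in the earlier posts):

# on both masters
yum install -y corosync pacemaker pcs drbd84-utils kmod-drbd84
systemctl enable pcsd
systemctl start pcsd
echo 'yourpassword' | passwd --stdin hacluster   # placeholder password for the pcs cluster account

# on one master only: authenticate both nodes and bootstrap the cluster
pcs cluster auth centosa centosb -u hacluster -p yourpassword
pcs cluster setup --name mfscluster centosa centosb
pcs cluster start --all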
Add an mfs.res resource definition for DRBD:
[[email protected] drbd.d]# cd /etc/drbd.d
[[email protected] drbd.d]# ls
global_common.conf  mfs.res
[[email protected] drbd.d]# cat mfs.res
resource mfs {
    protocol C;
    meta-disk internal;
    device /dev/drbd1;
    syncer {
        verify-alg sha1;
    }
    net {
        allow-two-primaries;
    }
    on centosa {
        disk /dev/sdb1;
        address 192.168.40.12:7789;
    }
    on centosb {
        disk /dev/sdb1;
        address 192.168.40.19:7789;
    }
}
Copy the file to the other master.
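The post does not show the DRBD bring-up itself; assuming DRBD 8.4 and a first-time setup, the usual sequence is roughly:

# on both masters: write DRBD metadata and bring the resource up
drbdadm create-md mfs
drbdadm up mfs

# on centosa only: force it primary for the initial sync and create the filesystem
drbdadm primary --force mfs
mkfs.xfs /dev/drbd1

# mount it by hand for the mfsmaster compile/install below; the cluster takes over the mount later
mkdir -p /usr/local/mfs
mount /dev/drbd1 /usr/local/mfs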
Then install mfsmaster.
Add an mfs user; the mfs UID must be the same on every host in the environment.
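A minimal sketch; the UID/GID of 1004 matches the id output shown in the chunk-server section below, and any value works as long as it is identical on every host:

groupadd -g 1004 mfs
useradd -u 1004 -g mfs -s /sbin/nologin mfs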
Extract the source tarball:
[[email protected] src]# tar -xf v3.0.96.tar.gz
Then compile and install the software:
[[email protected] src]# cd moosefs-3.0.96/
[[email protected] moosefs-3.0.96]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount
[[email protected] moosefs-3.0.96]# make && make install
Copy the sample configuration files to their .cfg names:
[[email protected] ~]# cd /usr/local/mfs/etc/mfs/
[[email protected] mfs]# cp mfsexports.cfg.sample mfsexports.cfg
[[email protected] mfs]# cp mfsmaster.cfg.sample mfsmaster.cfg
Edit the mfsexports.cfg file:
[[email protected] ~]# vim /usr/local/mfs/etc/mfs/mfsexports.cfg
*    /    rw,alldirs,mapall=mfs:mfs,password=redhat
Initialize the metadata file:
[[email protected] ~]# cp /usr/local/mfs/var/mfs/metadata.mfs.empty /usr/local/mfs/var/mfs/metadata.mfs
Start the master:
[[email protected] mfs]# /usr/local/mfs/sbin/mfsmaster start
Check the mfs listening ports:
[[email protected] mfs]# netstat -anltp | grep mfs
tcp   0   0 0.0.0.0:9419   0.0.0.0:*   LISTEN   20757/mfsmaster
tcp   0   0 0.0.0.0:9420   0.0.0.0:*   LISTEN   20757/mfsmaster
tcp   0   0 0.0.0.0:9421   0.0.0.0:*   LISTEN   20757/mfsmaster
Two mfsmaster nodes are needed, so repeat the same steps on the other master.
4. Configure the chunk servers:
Add the mfs user, making sure its UID matches the other hosts:
[[email protected] ~]# id mfs
uid=1004(mfs) gid=1004(mfs) groups=1004(mfs)
Install the build dependencies:
[[email protected] ~]# yum install zlib-devel -y
Extract the source:
[[email protected] ~]# cd /usr/local/src/
[[email protected] src]# tar -xf v3.0.96.tar.gz
Compile and install mfs:
[[email protected] mfs]# cd /usr/local/src/moosefs-3.0.96/
[[email protected] moosefs-3.0.96]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfsmount
[[email protected] moosefs-3.0.96]# make && make install
Copy the sample configuration files to their .cfg names:
[[email protected] ~]# cd /usr/local/mfs/etc/mfs/
[[email protected] mfs]# cp mfschunkserver.cfg.sample mfschunkserver.cfg
[[email protected] mfs]# cp mfshdd.cfg.sample mfshdd.cfg
Edit the configuration file:
[[email protected] moosefs-3.0.96]# cd /usr/local/mfs/etc/mfs
[[email protected] mfs]# vim mfschunkserver.cfg
MASTER_HOST = 192.168.40.100
Because this is a high-availability setup, the address that would normally point at the master's IP is set to the VIP instead.
Edit the mfshdd.cfg file (it defines the directory this chunk server offers to the master for storing chunks):
[[email protected] mfs]# vim mfshdd.cfg
/mfstest
Create the shared directory:
[[email protected] ~]# mkdir /mfstest
Change the owner and group of the directory:
[[email protected] ~]# chown -R mfs:mfs /mfstest/
[[email protected] ~]# ll -d /mfstest/
drwxr-xr-x 2 mfs mfs 6 Oct 29 10:30 /mfstest/
Start the chunk server:
[[email protected] mfs]# /usr/local/mfs/sbin/mfschunkserver start
Both chunk servers are configured the same way.
5. Install the mfs client:
Add the mfs user, again keeping the UID consistent.
Extract the source tarball:
[[email protected] src]# cd /usr/local/src/
[[email protected] src]# tar -xf v3.0.96.tar.gz
Compile and install mfs:
[[email protected] src]# cd /usr/local/src/moosefs-3.0.96/
[[email protected] moosefs-3.0.96]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfschunkserver --enable-mfsmount
[[email protected] moosefs-3.0.96]# make && make install
Create the mount point directory:
[[email protected] ~]# mkdir /mfstest
[[email protected] ~]# chown -R mfs:mfs /mfstest/
[[email protected] ~]# ll -d /mfstest/
drwxr-xr-x 2 mfs mfs 6 10月 24 04:15 /mfstest/
Install FUSE:
[[email protected] ~]# yum install fuse fuse-devel
[[email protected] ~]# modprobe fuse
6. Configure the cluster:
First stop the master service, then create a systemd unit for it:
[[email protected] mfs]# /usr/local/mfs/sbin/mfsmaster stop
[[email protected] mfs]# cat /etc/systemd/system/mfsmaster.service
[Unit]
Description=mfs
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/mfs/sbin/mfsmaster start
ExecStop=/usr/local/mfs/sbin/mfsmaster stop
PrivateTmp=true

[Install]
WantedBy=multi-user.target
Copy the unit file to the other master:
[[email protected] system]# scp mfsmaster.service centosb:/etc/systemd/system/
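One detail the post skips: after creating or copying a new unit file, systemd has to re-read its configuration on both nodes before the unit can be enabled:

systemctl daemon-reload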
Enable the service at boot (only an enabled service can be added to the cluster):
[[email protected] system]# systemctl enable mfsmaster
Start it with systemctl as a test; once it works, stop it again so that pacemaker can manage it:
[[email protected] system]# systemctl start mfsmaster
[[email protected] system]# netstat -antlp | grep mfs
tcp   0   0 0.0.0.0:9419   0.0.0.0:*   LISTEN   20978/mfsmaster
tcp   0   0 0.0.0.0:9420   0.0.0.0:*   LISTEN   20978/mfsmaster
tcp   0   0 0.0.0.0:9421   0.0.0.0:*   LISTEN   20978/mfsmaster
Configure the cluster:
First make sure the drbd service itself is stopped, so that the cluster is the one managing it.
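The post does not show this step explicitly; assuming drbd was brought up manually earlier, something like the following on both masters hands control over to pacemaker:

umount /usr/local/mfs     # only if it is still mounted by hand
systemctl stop drbd
systemctl disable drbd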
Check the cluster status:
[[email protected] ~]# crm
crm(live)# status
Stack: corosync
Current DC: centosa (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Sun Oct 29 11:21:50 2017
Last change: Sun Oct 29 11:20:42 2017 by hacluster via crmd on centosa

2 nodes configured
0 resources configured

Online: [ centosa centosb ]

No resources
Add the DRBD resource:
crm(live)# configure
crm(live)configure# primitive mfs_drbd ocf:linbit:drbd params drbd_resource=mfs op monitor role=Master interval=10 timeout=20 op monitor role=Slave interval=20 timeout=20 op start timeout=240 op stop timeout=100
crm(live)configure# ms ms_mfs_drbd mfs_drbd meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
Disable STONITH, otherwise verify will report an error:
crm(live)configure# property stonith-enabled=false
crm(live)configure# verify
crm(live)configure# commit
Add the filesystem (mount) resource, then add colocation and ordering constraints:
crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd1 directory=/usr/local/mfs fstype=xfs op start timeout=60 op stop timeout=60
crm(live)configure# colocation ms_mfs_drbd_with_mystore inf: mystore ms_mfs_drbd
crm(live)configure# order mystore_after_ms_mfs_drbd Mandatory: ms_mfs_drbd:promote mystore:start
crm(live)configure# verify
crm(live)configure# commit
Add the mfs resource:
crm(live)configure# primitive mfs systemd:mfsmaster op monitor timeout=100 interval=30 op start timeout=30 interval=0 op stop timeout=30 interval=0
crm(live)configure# colocation mfs_with_mystore inf: mfs mystore
crm(live)configure# order mfs_after_mystore Mandatory: mystore mfs
crm(live)configure# verify
WARNING: mfs: specified timeout 30 for start is smaller than the advised 100
WARNING: mfs: specified timeout 30 for stop is smaller than the advised 100
crm(live)configure# commit
Add the VIP resource:
crm(live)configure# primitive vip ocf:heartbeat:IPaddr params ip=192.168.40.100
crm(live)configure# colocation vip_with_mfs inf: vip mfs
crm(live)configure# order vip_after_mfs Mandatory: mfs vip
crm(live)configure# verify
WARNING: mfs: specified timeout 30 for start is smaller than the advised 100
WARNING: mfs: specified timeout 30 for stop is smaller than the advised 100
crm(live)configure# commit
The warning above keeps appearing. Go into configure mode and run edit to raise both mfs timeouts to 100; otherwise, if mfsmaster starts or stops too slowly, the cluster will report errors:
crm(live)# configure
crm(live)configure# edit
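After the edit, the mfs primitive should end up looking roughly like this (only the two timeouts change compared with the definition entered above):

primitive mfs systemd:mfsmaster op monitor timeout=100 interval=30 op start timeout=100 interval=0 op stop timeout=100 interval=0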
Check the status:
crm(live)# status
Stack: corosync
Current DC: centosa (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Sun Oct 29 20:40:05 2017
Last change: Sun Oct 29 20:31:01 2017 by root via cibadmin on centosa

2 nodes configured
5 resources configured

Online: [ centosa centosb ]

Full list of resources:

 Master/Slave Set: ms_mfs_drbd [mfs_drbd]
     Masters: [ centosa ]
     Slaves: [ centosb ]
 mystore   (ocf::heartbeat:Filesystem):   Started centosa
 mfs       (systemd:mfsmaster):           Started centosa
 vip       (ocf::heartbeat:IPaddr):       Started centosa
Check the mount:
[[email protected] ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cl-root   18G  3.1G   15G  17% /
devtmpfs             226M     0  226M   0% /dev
tmpfs                237M   54M  183M  23% /dev/shm
tmpfs                237M  8.6M  228M   4% /run
tmpfs                237M     0  237M   0% /sys/fs/cgroup
/dev/sda1           1014M  168M  847M  17% /boot
tmpfs                 48M     0   48M   0% /run/user/0
/dev/drbd1           2.0G   40M  2.0G   2% /usr/local/mfs
Check the VIP:
[[email protected] ~]# ip addr sh ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:04:e9:1a brd ff:ff:ff:ff:ff:ff
    inet 192.168.40.12/24 brd 192.168.40.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.40.100/24 brd 192.168.40.255 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe04:e91a/64 scope link
       valid_lft forever preferred_lft forever
Mount the file system on the client:
[[email protected] ~]# /usr/local/mfs/bin/mfsmount /mfstest -H 192.168.40.100 -p
MFS Password:
mfsmaster accepted connection with parameters: read-write,restricted_ip,map_all ; root mapped to mfs:mfs ; users mapped to mfs:mfs
Check the mount on the client:
[[email protected] ~]# df -h
文件系统                 容量  已用  可用 已用% 挂载点
/dev/mapper/cl-root       18G  2.6G   16G   14% /
devtmpfs                 226M     0  226M    0% /dev
tmpfs                    237M     0  237M    0% /dev/shm
tmpfs                    237M  4.6M  232M    2% /run
tmpfs                    237M     0  237M    0% /sys/fs/cgroup
/dev/sda1               1014M  139M  876M   14% /boot
tmpfs                     48M     0   48M    0% /run/user/0
192.168.40.100:9421       35G  9.2G   26G   27% /mfstest
Test writing a file:
[[email protected] ~]# cd /mfstest/
[[email protected] mfstest]# ls
[[email protected] mfstest]# echo "111" > 1.txt
[[email protected] mfstest]# ls
1.txt
[[email protected] mfstest]# cat 1.txt
111
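As an extra check not shown in the original post, the MooseFS client tools can report which chunk servers hold copies of the file:

/usr/local/mfs/bin/mfsfileinfo /mfstest/1.txt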
Put the primary node into standby (the master fails over to the other node):
crm(live)# node standby
crm(live)# status
Stack: corosync
Current DC: centosb (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Sun Oct 29 22:40:33 2017
Last change: Sun Oct 29 22:40:30 2017 by root via crm_attribute on centosa

2 nodes configured
5 resources configured

Node centosa: standby
Online: [ centosb ]

Full list of resources:

 Master/Slave Set: ms_mfs_drbd [mfs_drbd]
     Masters: [ centosb ]
     Stopped: [ centosa ]
 mystore   (ocf::heartbeat:Filesystem):   Started centosb
 mfs       (systemd:mfsmaster):           Started centosb
 vip       (ocf::heartbeat:IPaddr):       Started centosb
Check the mount on the other master:
[[email protected] crmsh-2.3.2]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cl-root   18G  3.1G   15G  17% /
devtmpfs             226M     0  226M   0% /dev
tmpfs                237M   54M  183M  23% /dev/shm
tmpfs                237M  8.6M  228M   4% /run
tmpfs                237M     0  237M   0% /sys/fs/cgroup
/dev/sda1           1014M  168M  847M  17% /boot
tmpfs                 48M     0   48M   0% /run/user/0
/dev/drbd1           2.0G   40M  2.0G   2% /usr/local/mfs
The client still sees the existing data and can keep writing:
[[email protected] mfstest]# echo "222" > 2.txt
[[email protected] mfstest]# ls
1.txt  2.txt
[[email protected] mfstest]# cat 1.txt
111
[[email protected] mfstest]# cat 2.txt
222
Pay attention to the mfsmaster start and stop timeouts: both operations take quite a while, and until I lengthened the timeouts the resource kept failing to get past start or stop.
This article is from the "运维小记" blog; please keep this attribution when reposting: http://lsfandlinux.blog.51cto.com/13405754/1977263