Heartbeat+DRBD+NFS High Availability Case Study
9.4 DRBD Deployment Requirements
9.4.1 Business Requirements
Assume two servers, Rserver-1 and Lserver-1, with real IPs 192.168.236.143 (Rserver-1) and 192.168.236.192 (Lserver-1).
Configuration goal: once DRBD is configured on both servers, data written to the /dev/sdb partition on Rserver-1 is replicated to Lserver-1 in real time. If Rserver-1 crashes or its disk fails and its data becomes unusable, Lserver-1 holds a complete, continuously synchronized copy of that data and can take over immediately, so data availability is maintained with no impact on the business.
9.4.2 DRBD Deployment Topology
1. The DRBD service synchronizes data between the two nodes in real time over a direct link or Ethernet.
2. The two storage servers back each other up; under normal conditions each node exports one primary partition for NFS use.
3. Between the storage servers, and between the storage servers and the switch, dual gigabit NICs are bonded (bonding); a configuration sketch follows this list.
4. Application servers access the storage over NFS.
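The lab environment later in this chapter assigns eth0/eth1/eth2 individually, so NIC bonding is not actually configured in the walkthrough. For reference only, a rough sketch of what the dual-gigabit bonding mentioned in item 3 could look like with CentOS 6 network scripts; the slave interface names, the active-backup mode, and the IP address are illustrative assumptions, not taken from the original text:
# /etc/sysconfig/network-scripts/ifcfg-bond0 (hypothetical bonded LAN interface)
DEVICE=bond0
TYPE=Bond
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.16.1.1
NETMASK=255.255.255.0
BONDING_OPTS="mode=1 miimon=100"   # mode=1 is active-backup; choose the mode your switch supports
# /etc/sysconfig/network-scripts/ifcfg-eth3 (first slave; repeat for the second slave, e.g. eth4)
DEVICE=eth3
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes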
9.4.3 Host and Resource Planning
Name | Interface | IP | Purpose |
Master (Rserver-1) | eth0 | 192.168.236.143 | Public management IP, used for WAN traffic forwarding |
 | eth1 | 172.16.1.1 | Internal management IP, used for LAN traffic forwarding |
 | eth2 | 192.168.1.1 | Heartbeat link between the servers (direct cable) |
 | VIP | 192.168.236.10 | VIP on which application A mounts the service |
BACKUP (Lserver-1) | eth0 | 192.168.236.192 | Public management IP, used for WAN traffic forwarding |
 | eth1 | 172.16.1.2 | Internal management IP, used for LAN traffic forwarding |
 | eth2 | 192.168.1.2 | Heartbeat link between the servers (direct cable) |
 | VIP | 192.168.236.20 | VIP on which application A mounts the service |
Note: the hands-on configuration later in this chapter actually uses 172.16.1.10 as the floating NFS VIP (see the haresources file in 9.7.3), not the 192.168.236.x VIPs planned here.
9.4.5 DRBD Environment Configuration
Configure the hosts file on both machines. Note that the hostnames themselves must also be set to match these entries (for example: hostname Rserver-1 on the master and hostname Lserver-1 on the backup); if this step is skipped, the DRBD service will report errors when it starts.
echo '172.16.1.1 Rserver-1' >> /etc/hosts
echo '172.16.1.2 Lserver-1' >> /etc/hosts
# tail -2 /etc/hosts
172.16.1.1 Rserver-1
172.16.1.2 Lserver-1
Configuring the heartbeat link between the servers:
The NICs holding 192.168.1.1 and 192.168.1.2 are connected directly with an ordinary network cable (no switch in between); the two NICs are joined back to back and used for heartbeat detection.
Master:
ifconfig eth2 192.168.1.1 netmask 255.255.255.0
Backup:
ifconfig eth2 192.168.1.2 netmask 255.255.255.0
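Addresses set with ifconfig are lost at reboot. A minimal sketch of a persistent configuration for the heartbeat NIC, assuming CentOS 6 style network scripts (on Lserver-1 use IPADDR=192.168.1.2 in the same file):
# /etc/sysconfig/network-scripts/ifcfg-eth2 (heartbeat NIC on Rserver-1)
DEVICE=eth2
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.1
NETMASK=255.255.255.0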
Add the following host route on Rserver-1:
route add -host 192.168.1.2 dev eth2
#### This command means: traffic from Rserver-1 to 192.168.1.2 goes out through eth2, which serves as the heartbeat link
echo 'route add -host 192.168.1.2 dev eth2' >> /etc/rc.local
## → Added to the boot-time configuration so the route is loaded automatically after the next restart.
route -n
Add the following host route on Lserver-1:
route add -host 192.168.1.1 dev eth2
#### This command means: traffic from Lserver-1 to 192.168.1.1 goes out through eth2, which serves as the heartbeat link
echo 'route add -host 192.168.1.1 dev eth2' >> /etc/rc.local
## → Added to the boot-time configuration so the route is loaded automatically after the next restart.
9.5 Deployment
9.5.1 Partitioning the Disks
First, partition the disk with fdisk and prepare the filesystem with mkfs.ext4 and tune2fs; the partition layout is shown in the table in 9.5.2.
Note: in a production environment, if a single disk or RAID volume is larger than 2 TB, fdisk cannot handle it (a GPT-capable tool such as parted is required).
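Not part of the original procedure: for a disk larger than 2 TB you would create a GPT label instead of an MS-DOS one. A hedged example using parted (the device name and the single full-disk partition are illustrative):
# parted -s /dev/sdb mklabel gpt
# parted -s /dev/sdb mkpart primary ext4 1MiB 100%
# parted -s /dev/sdb print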
Add a second disk (/dev/sdb) to each of the two virtual machines, then verify that it is visible:
Check on Rserver-1:
[root@Rserver-1 ~]# fdisk -l
Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000486f5
Device Boot Start End Blocks Id System
/dev/sda1 * 1 64 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 64 2611 20458496 8e Linux LVM
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
Check on Lserver-1:
[root@Lserver-1 ~]# fdisk -l
Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00087dae
Device Boot Start End Blocks Id System
/dev/sda1 * 1 64 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 64 2611 20458496 8e Linux LVM
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
9.5.2 Partition the disk on the master and the backup (note: identical on both servers)
We therefore only need to partition /dev/sdb; the required partitions are listed in the table below.
Device | Mount point | Size | Purpose |
/dev/sdb1 | /data | 500M | Stores the image data |
/dev/sdb2 | Meta data partition | 200M | Stores DRBD synchronization state information |
# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x95767900.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p #### create a new primary partition
Partition number (1-4): 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-2610, default 2610): +500M #### size 500M
Command (m for help): p
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x95767900
Device Boot Start End Blocks Id System
/dev/sdb1 1 65 522081 83 Linux
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (66-2610, default 66):
Using default value 66
Last cylinder, +cylinders or +size{K,M,G} (66-2610, default 2610): +200M #### create a 200M partition
Command (m for help): p
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x95767900
Device Boot Start End Blocks Id System
/dev/sdb1 1 65 522081 83 Linux
/dev/sdb2 66 91 208845 83 Linux
Command (m for help): w ###### write the partition table to disk and exit
If fdisk prints
the kernel still uses the old table
The new table will be used at next reboot
it means the kernel does not yet know about the new partitions and would normally only pick them up after a reboot; you can make the kernel re-read the partition table immediately with:
partprobe
Now check the partitioning result:
# fdisk -l
Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00087dae
Device Boot Start End Blocks Id System
/dev/sda1 * 1 64 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 64 2611 20458496 8e Linux LVM
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x95767900
Device Boot Start End Blocks Id System
/dev/sdb1 1 65 522081 83 Linux
/dev/sdb2 66 91 208845 83 Linux
Now format the data partition:
[root@Rserver-1 ~]# mkfs.ext4 /dev/sdb1
[root@Lserver-1 ~]# mkfs.ext4 /dev/sdb1
[root@Rserver-1 ~]# tune2fs -c -1 /dev/sdb1
tune2fs 1.41.12 (17-May-2010)
Setting maximal mount count to -1 #### sets the maximal mount count to -1 (disables fsck based on mount count)
[root@Lserver-1 ~]# tune2fs -c -1 /dev/sdb1
tune2fs 1.41.12 (17-May-2010)
Setting maximal mount count to -1 #### sets the maximal mount count to -1 (disables fsck based on mount count)
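Not part of the original steps, but a common companion to -c -1: the time-based fsck interval can be disabled as well, so a scheduled check never delays a takeover at boot.
# tune2fs -i 0 /dev/sdb1 #### run on both servers; -i 0 disables the interval-based filesystem check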
9.6 Pre-installation Preparation (Rserver-1, Lserver-1)
1. Stop iptables and disable SELinux to avoid errors during installation:
# service iptables stop
# chkconfig iptables off
# setenforce 0
# vi /etc/selinux/config
---------------
SELINUX=disabled
---------------
9.6.1 Time synchronization:
ntpdate -u asia.pool.ntp.org
9.6.2 Installing the DRBD build dependencies:
# yum install gcc gcc-c++ make glibc flex kernel-devel kernel-headers
The kernel-devel and kernel-headers packages must match the kernel version reported by uname -r exactly; otherwise the drbd module cannot be loaded into the kernel later. They can be installed from a local yum repository.
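A quick way to confirm the versions match before compiling; this check is not in the original text, but it only uses standard commands:
# uname -r
# rpm -q kernel-devel kernel-headers
#### the versions printed by rpm must equal the running kernel version printed by uname -r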
9.6.3 Installing DRBD (Rserver-1 primary, Lserver-1 secondary):
# wget http://oss.linbit.com/drbd/8.4/drbd-8.4.3.tar.gz
# tar zxvf drbd-8.4.3.tar.gz
# cd drbd-8.4.3
# ./configure --prefix=/usr/local/drbd --with-km --with-heartbeat --sysconfdir=/etc/
# make KDIR=/usr/src/kernels/2.6.32-504.16.2.el6.x86_64/ #### KDIR must point to the kernel source directory matching uname -r
# make install
# mkdir -p /usr/local/drbd/var/run/drbd
# chkconfig --add drbd
# chkconfig drbd on
2. Load the DRBD kernel module (Rserver-1 primary, Lserver-1 secondary):
# modprobe drbd
Check that the DRBD module has been loaded into the kernel:
# lsmod |grep drbd
drbd 310172 4
libcrc32c 1246 1 drbd
3. Parameter configuration (Rserver-1 primary, Lserver-1 secondary):
vi /etc/drbd.conf
Clear the file and add the following configuration:
resource r0 {
protocol C;
startup { wfc-timeout 0; degr-wfc-timeout 120; }
disk { on-io-error detach; }
net {
timeout 60;
connect-int 10;
ping-int 10;
max-buffers 2048;
max-epoch-size 2048;
}
syncer { rate 200M; }
on Rserver-1 { ####### "on" is followed by the hostname
device /dev/drbd0; ##### the DRBD block device
disk /dev/sdb1; ##### the local disk, i.e. the partition created above
address 172.16.1.1:7788; ###### internal IP and port
meta-disk internal;
}
on Lserver-1 {
device /dev/drbd0;
disk /dev/sdb1;
address 172.16.1.2:7788;
meta-disk internal;
}
}
Note: change the hostnames, IPs, and disk in the configuration above to match your own environment.
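Not in the original walkthrough: drbdadm can echo back the configuration as it parsed it, which is a cheap way to catch typos before creating any metadata.
# drbdadm dump r0 #### prints the r0 resource as DRBD parsed it; an error here means the config needs fixing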
4. Create the DRBD device and activate the r0 resource (Rserver-1 primary, Lserver-1 secondary):
# mknod /dev/drbd0 b 147 0
# drbdadm create-md r0
Wait a moment; "success" means the DRBD metadata block was created successfully.
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
--== Creating metadata ==--
As with nodes, we count the total number of devices mirrored by DRBD
at http://usage.drbd.org.
The counter works anonymously. It creates a random number to identify
the device and sends that random number, along with the kernel and
DRBD version, to usage.drbd.org.
http://usage.drbd.org/cgi-bin/insert_usage.pl?
nu=716310175600466686&ru=15741444353112217792&rs=1085704704
* If you wish to opt out entirely, simply enter 'no'.
* To continue, just press [RETURN]
success
Enter the command again:
# drbdadm create-md r0
r0 is activated successfully:
[need to type 'yes' to confirm] yes
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
5. Start the DRBD service (Rserver-1 primary, Lserver-1 secondary):
service drbd start
Note: DRBD must be started on both the primary and the secondary before it takes effect.
6. Check the status (Rserver-1 primary, Lserver-1 secondary):
# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@Rserver-1, 2015-05-12 21:05:41
m:res cs ro ds p mounted fstype
0:r0 Connected Secondary/Secondary Inconsistent/Inconsistent C
Here ro: Secondary/Secondary means both hosts are currently in the secondary role; ds is the disk state, shown as "Inconsistent" because DRBD cannot yet decide which side is the primary, i.e. whose disk data should be taken as the reference copy.
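Besides service drbd status, the kernel module exposes the same information in /proc/drbd, which is convenient for watching the resync progress in the following steps:
# cat /proc/drbd
# watch -n1 cat /proc/drbd #### refresh every second while a sync is running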
7. Configure Rserver-1 as the primary node:
# drbdsetup /dev/drbd0 primary --force
# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@Rserver-1, 2017-05-18 13:40:26
m:res cs ro ds p mounted fstype
0:r0 Connected Primary/Secondary UpToDate/UpToDate C
(Lserver-1) secondary:
# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@Lserver-1, 2017-05-18 13:38:57
m:res cs ro ds p mounted fstype
0:r0 Connected Secondary/Primary UpToDate/UpToDate C
ro shows Primary/Secondary on the primary and Secondary/Primary on the secondary,
ds shows UpToDate/UpToDate,
which means the primary/secondary configuration succeeded.
8. Mount the DRBD device (Rserver-1, primary)
The status above still shows the mounted and fstype fields empty, so in this step we create a filesystem on the DRBD device and mount it at /data:
# mkfs.ext4 /dev/drbd0
# mkdir /data
# mount /dev/drbd0 /data
Note: no operations may be performed on the DRBD device on the Secondary node, not even mounting it. All reads and writes take place on the Primary node; only when the Primary fails can the Secondary be promoted to Primary, mount the DRBD device, and take over the work.
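Heartbeat automates this promotion in section 9.7; for reference, a manual role switch without heartbeat would look roughly like this (standard drbdadm usage, not taken from the original text):
On the current primary (Rserver-1):
# umount /data
# drbdadm secondary r0
On the node being promoted (Lserver-1):
# drbdadm primary r0
# mount /dev/drbd0 /data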
DRBD status after a successful mount (Rserver-1, primary):
[root@Rserver-1 ~]# service drbd status
drbd driver loaded OK; device status:
version: 8.4.2 (api:1/proto:86-101)
GIT-hash: 7ad5f850d711223713d6dcadc3dd48860321070c build by root@Rserver-1, 2017-05-18 13:40:26
m:res cs ro ds p mounted fstype
0:r0 Connected Primary/Secondary UpToDate/UpToDate C /data ext4
9.7 Configuring the Heartbeat Service
yum install heartbeat -y
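If yum reports that no heartbeat package is available, note that on CentOS 6 heartbeat normally comes from the EPEL repository rather than the base repositories; enabling EPEL first usually resolves it (this step is an assumption about your repository setup, not part of the original text):
# yum install -y epel-release
# yum install -y heartbeat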
9.7.1 Configuring ha.cf
cd /usr/share/doc/heartbeat-3.0.4
ll | egrep 'ha.cf|authkeys|haresources' ## the three sample configuration files live in this directory
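The following sections edit these files as if they were already in /etc/ha.d/, so copy the samples there first (paths assume the heartbeat-3.0.4 documentation directory shown above):
# cp /usr/share/doc/heartbeat-3.0.4/{ha.cf,authkeys,haresources} /etc/ha.d/
# cd /etc/ha.d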
Edit the ha.cf file:
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
#### → The three lines above configure logging and normally do not need to be changed.
keepalive 2
deadtime 30
warntime 10
initdead 120
### → The four lines above are basic timing parameters and normally do not need to be changed.
# serial serialportname ...
mcast eth2 225.0.0.219 694 1 0
## → This line uses multicast for the heartbeat; the only thing to change is eth2, which must be the NIC carrying your heartbeat link.
auto_failback on
node Rserver-1 ## → hostname of the first storage server
node Lserver-1 ## → hostname of the second storage server
crm no
9.7.2 Configuring authkeys
auth 3
#1 crc
#2 sha1 HI!
3 md5 Hello!
The authkeys file must have mode 600. The file itself states this requirement:
# Authentication file. Must be mode 600
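The text states the 600 requirement but never shows the command; setting the mode explicitly avoids heartbeat refusing to start:
# chmod 600 /etc/ha.d/authkeys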
9.7.3 Configuring haresources
Add the following single line:
Rserver-1 IPaddr::172.16.1.10/24/eth1 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext4 killnfsd
Note: the IPaddr, Filesystem, and other scripts referenced in this file live in /etc/ha.d/resource.d/. You can also place your own service start scripts there (for example mysql or www) and add the script name to /etc/ha.d/haresources, so that the script is started along with heartbeat.
IPaddr::172.16.1.10/24/eth1: uses the IPaddr script to configure the floating virtual IP that serves clients
drbddisk::r0: uses the drbddisk script to promote and demote the DRBD resource r0 on the primary and secondary nodes
Filesystem::/dev/drbd0::/data::ext4: uses the Filesystem script to mount and unmount the DRBD device
killnfsd: the script that controls starting NFS (created in the next step)
9.7.4 Create the killnfsd script used to restart the NFS service (Rserver-1, Lserver-1):
# vi /etc/ha.d/resource.d/killnfsd
killall -9 nfsd; /etc/init.d/nfs restart; exit 0
Give it execute permission (755):
# chmod 755 /etc/ha.d/resource.d/killnfsd
9.7.5 Start the Heartbeat service
Start heartbeat on both nodes, starting with the primary (Rserver-1): (Rserver-1, Lserver-1)
# service heartbeat start
# chkconfig heartbeat on
If the virtual IP 172.16.1.10 can now be pinged from other machines, the configuration succeeded.
9.7.6 Configure NFS (Rserver-1, Lserver-1)
Edit the exports configuration file and add the following line:
# vi /etc/exports
/data *(rw,no_root_squash)
9.7.7 Restart the NFS services:
# service rpcbind restart
# service nfs restart
# chkconfig rpcbind on
# chkconfig nfs off
Note: NFS is deliberately not set to start at boot, because the /etc/ha.d/resource.d/killnfsd script controls starting NFS.
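A quick sanity check before moving on to the failover tests; these verification commands are standard NFS tooling rather than part of the original text:
# exportfs -v #### run on the active node: lists what is actually being exported
# showmount -e 172.16.1.10 #### run from a client: lists the exports reachable through the VIP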
9.8 Testing High Availability
9.8.1 Normal hot-standby switchover
Mount the NFS share on a client:
# mount -t nfs 172.16.1.10:/data /tmp
Simulate a failure by stopping the heartbeat service on the primary node Rserver-1; the standby node Lserver-1 should take over immediately and seamlessly.
Verify that reads and writes on the NFS share mounted on the client still work.
Then look at the DRBD status on the standby (Lserver-1):
If the role on the standby has become Primary, the switchover succeeded. An example test procedure follows.
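A concrete way to run this test with ordinary service and drbd commands (the exact output is omitted here rather than guessed):
On Rserver-1 (current primary):
# service heartbeat stop
On Lserver-1 (standby), a few seconds later:
# service drbd status #### the ro field should now begin with Primary
# ip addr show eth1 #### the VIP 172.16.1.10 should have moved to this node
# df | grep drbd0 #### /dev/drbd0 should now be mounted on /data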
9.8.2 Failover on an unexpected outage
First switch the services and the VIP back to the primary, then cut the primary's power directly.
[root@Rserver-1 ~]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO: Resource is stopped
Done.
[root@Rserver-1 ~]# ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:20:dc:da brd ff:ff:ff:ff:ff:ff
    inet 192.168.236.143/24 brd 192.168.236.255 scope global eth0
    inet 192.168.236.10/24 brd 192.168.236.255 scope global secondary eth0
    inet6 fe80::20c:29ff:fe20:dcda/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:20:dc:e4 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.1/24 brd 172.16.1.255 scope global eth1
    inet 172.16.1.10/24 brd 172.16.1.255 scope global secondary eth1
    inet6 fe80::20c:29ff:fe20:dce4/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:20:dc:ee brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 brd 192.168.1.255 scope global eth2
    inet6 fe80::20c:29ff:fe20:dcee/64 scope link
       valid_lft forever preferred_lft forever
5: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 6e:5d:75:f7:48:77 brd ff:ff:ff:ff:ff:ff
[root@Rserver-1 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_rserver1-lv_root 18650424 4093320 13609700 24% /
tmpfs 372156 76 372080 1% /dev/shm
/dev/sda1 495844 34853 435391 8% /boot
/dev/sr0 4363088 4363088 0 100% /media/CentOS_6.5_Final
/dev/drbd0 505552 10521 468930 3% /data
The switchback to the primary succeeded. Now test whether failover also works when the primary is simply powered off.
The primary's power has been cut; check the situation on the standby:
[root@Lserver-1 ~]# ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:4d:f6:92 brd ff:ff:ff:ff:ff:ff
    inet 192.168.236.192/24 brd 192.168.236.255 scope global eth0
    inet6 fe80::20c:29ff:fe4d:f692/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:4d:f6:9c brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.2/24 brd 172.16.1.255 scope global eth1
    inet 172.16.1.10/24 brd 172.16.1.255 scope global secondary eth1
    inet6 fe80::20c:29ff:fe4d:f69c/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:4d:f6:a6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.2/24 brd 192.168.1.255 scope global eth2
    inet6 fe80::20c:29ff:fe4d:f6a6/64 scope link
       valid_lft forever preferred_lft forever
5: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 92:be:67:20:6e:b6 brd ff:ff:ff:ff:ff:ff
[root@Lserver-1 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_lserver1-lv_root 18650424 3966516 13736504 23% /
tmpfs 372156 224 371932 1% /dev/shm
/dev/sda1 495844 34856 435388 8% /boot
/dev/sr0 4363088 4363088 0 100% /media/CentOS_6.5_Final
/dev/drbd0 505552 10521 468930 3% /data
Check on the client:
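The original shows the client check as a screenshot that is not reproduced here; an equivalent command-line check (the mount point is whatever was used in 9.8.1, /tmp in the example above):
# df -h | grep 172.16.1.10 #### the NFS mount should still be listed
# touch /tmp/failover-test && ls -l /tmp/failover-test #### writes through the VIP should still succeed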
As the client-side check shows, the Heartbeat+DRBD+NFS stack has been built successfully.
This article originally appeared on the "小梁" blog; please keep this attribution: http://9861015.blog.51cto.com/9851015/1939521