Linux clustering with heartbeat and DRBD


Heartbeat and DRBD services for clusters

The cluster systems we use fall mainly into two types:

High Availability (HA) clusters, implemented here with Heartbeat; also called "hot-standby pairs", "mutual-standby pairs", or simply "two-node clusters".
Load Balance Clusters, implemented with Linux Virtual Server (LVS).

How heartbeat (Linux-HA) works: heartbeat consists of two core parts, heartbeat monitoring and resource takeover. Heartbeat monitoring can run over network links and serial ports, with support for redundant links. The nodes send packets to each other to report their current state; if no packet arrives from the peer within the configured time, the peer is considered dead and the resource-takeover module is started to take over the resources or services that were running on the peer host.
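To make the idea concrete, here is a minimal, purely illustrative bash sketch of that monitor/takeover loop. The peer address, virtual IP and service are placeholders taken from the lab below, and real heartbeat does far more (authenticated heartbeats over several links, split-brain handling, and so on):

#!/bin/bash
# Illustrative sketch only -- not heartbeat's actual implementation.
PEER=172.25.50.20          # peer node (placeholder)
VIP=172.25.50.100/24       # virtual IP served by the cluster (placeholder)

while true; do
    if ping -c 1 -W 30 "$PEER" >/dev/null 2>&1; then   # wait up to 30 s for the peer (cf. deadtime 30)
        sleep 2                                        # peer alive, keep monitoring (cf. keepalive 2)
    else
        ip addr add "$VIP" dev eth0                    # peer presumed dead: take over the virtual IP
        /etc/init.d/httpd start                        # and start the service locally
        break
    fi
done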

 

Packages to install:

heartbeat-3.0.4-2.el6.x86_64.rpm        

heartbeat-libs-3.0.4-2.el6.x86_64.rpm

heartbeat-devel-3.0.4-2.el6.x86_64.rpm  

ldirectord-3.9.5-3.1.x86_64.rpm

 

Procedure:

1. Place the rpm packages in a directory on both server1 and server2.

On server1, change into the directory that holds the rpm packages and install them all:

yum install * -y        # install every rpm in the directory

[root@server1 heartbeat]# cd /etc/ha.d/

[root@server1 ha.d]# cp /usr/share/doc/heartbeat-3.0.4/{authkeys,ha.cf,haresources} .

[root@server1 ha.d]# ls

authkeys  ha.cf  harc  haresources  rc.d  README.config  resource.d  shellfuncs

[root@server1 ha.d]# vim ha.cf

48 keepalive 2                  # interval in seconds between heartbeat packets
56 deadtime 30                  # declare the peer dead after 30 s without a heartbeat
71 initdead 60                  # extra grace period at first start-up (at least twice deadtime)
76 udpport 12345                # UDP port used for the heartbeat traffic
91 bcast   eth0                 # Linux: broadcast heartbeats on eth0
113 #mcast eth0 225.0.0.1 694 1 0
121 #ucast eth0 1
157 auto_failback on            # resources move back to their preferred node when it returns
211 node    server12-10.example.com     # node names must match `uname -n`
212 node    server12-20.example.com
220 ping 172.25.50.250          # ping node used to judge network connectivity
253 respawn hacluster /usr/lib64/heartbeat/ipfail    # run ipfail as user hacluster, restart it if it dies
259 apiauth ipfail gid=haclient uid=hacluster        # allow ipfail to talk to the heartbeat API
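After editing, it can help to look at only the active (non-comment) settings; a plain grep is enough, assuming the standard file location:

grep -Ev '^[[:space:]]*(#|$)' /etc/ha.d/ha.cf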

 

[root@server1 ha.d]# vim authkeys

 23 auth 1          # use authentication method 1 (defined below)
 24 1 crc           # crc: no cryptographic authentication, acceptable on a trusted network
 25 #2 sha1 HI!
 26 #3 md5 Hello!

 

[root@server1 ha.d]# vim haresources

150 server1.example.com IPaddr::172.25.50.100/24/eth0 httpd
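The haresources format is: the preferred node, followed by the resources it owns, acquired left to right and released right to left. Here server1.example.com is the preferred owner of the virtual IP 172.25.50.100/24 on eth0 plus the httpd init script. The IPaddr agent shipped in /etc/ha.d/resource.d/ can be exercised by hand to check the address syntax (illustrative only; heartbeat normally calls it itself):

/etc/ha.d/resource.d/IPaddr 172.25.50.100/24/eth0 start
ip addr show eth0                                        # the .100 address should appear as a secondary address
/etc/ha.d/resource.d/IPaddr 172.25.50.100/24/eth0 stop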

 

[root@server1 ha.d]# chmod 600 authkeys

[root@server1 ha.d]# ll -d authkeys

-rw------- 1 root root 643 Feb 17 15:06 authkeys

[root@server1 ha.d]# scp ha.cf haresources authkeys 172.25.50.20:/etc/ha.d/

root@172.25.50.20's password:

ha.cf                                                100%   10KB  10.3KB/s   00:00

haresources                                          100% 5961     5.8KB/s   00:00

authkeys                                             100%  643     0.6KB/s   00:00

[root@server1 ha.d]# /etc/init.d/heartbeat start

Starting High-Availability services: INFO:  Resource is stopped

Done.

 

Start the heartbeat service on server2 as well:

[root@server2 ha.d]# /etc/init.d/heartbeat start

Starting High-Availability services: INFO:  Resource is stopped

Done.
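With heartbeat running on both nodes, its view of the cluster can be checked with the cl_status tool that ships with heartbeat (sub-commands as I recall them; adjust to your build if they differ):

cl_status hbstatus                             # is the local heartbeat daemon up?
cl_status listnodes                            # nodes known to the cluster
cl_status nodestatus server2.example.com       # state of a given node (hostname assumed)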

 

Test:

On server1, in the /var/www/html/ directory:

[root@server1 html]# vim index.html

www.server1.example

server2

[[email protected] ha.d]# cd /var/www/html/

[[email protected] html]# ls

[[email protected] html]# vim index.html

www.server2.example.com

 

server1

[[email protected] html]# ip addr show

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 52:54:00:06:13:fa brd ff:ff:ff:ff:ff:ff

    inet 172.25.50.10/24 brd 172.25.50.255 scope global eth0

    inet 172.25.50.100/24 brd 172.25.50.255 scope global secondary eth0

    inet6 fe80::5054:ff:fe06:13fa/64 scope link

       valid_lft forever preferred_lft forever

This shows that the virtual IP is held by server1, where the httpd service is running.

On the physical host:

[root@foundation Desktop]# curl 172.25.50.100

www.server1.example

## the page served comes from the default web root of whichever host currently holds the .100 virtual IP

 

On server1: stop the heartbeat service

 

[root@server1 html]# /etc/init.d/heartbeat stop

Stopping High-Availability services: Done.

At this point the .100 virtual IP moves over to server2.

 

On the physical host:

[root@foundation Desktop]# curl 172.25.50.100

www.server2.example.com

Once the heartbeat service on server1 is started again, the .100 virtual IP fails back to server1 (auto_failback is on).

 

 

Test 2: stop the httpd service

(with the heartbeat service running on both nodes)

[root@server1 ha.d]# /etc/init.d/httpd stop

Stopping httpd:                                            [  OK  ]

 

[root@server1 ha.d]# ip addr show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 52:54:00:51:aa:19 brd ff:ff:ff:ff:ff:ff

    inet 172.25.50.10/24 brd 172.25.12.255 scope global eth0

    inet 172.25.50.100/24 brd 172.25.12.255 scope global secondary eth0

    inet6 fe80::5054:ff:fe51:aa19/64 scope link

       valid_lft forever preferred_lft forever

[root@foundation Desktop]# curl 172.25.50.100

curl: (7) Failed connect to 172.25.50.100:80; Connection refused

[root@foundation Desktop]# arp -an | grep 172.25.50.100

? (172.25.50.100) at 52:54:00:51:aa:19 [ether] on br0

The connection is refused because httpd is down, yet the ARP entry shows the virtual IP still answering from server1: heartbeat v1 with haresources only monitors node liveness, not the health of individual services, so stopping httpd alone does not trigger a failover.

 

############## DRBD service ####################

First, build the rpm packages.

On server1:

Unpack the drbd-8.4.2.tar.gz tarball that was copied to server1 beforehand:

[root@server1 mnt]# tar zxf drbd-8.4.2.tar.gz

[root@server1 mnt]# cd drbd-8.4.2

[root@server1 drbd-8.4.2]# yum install gcc -y

[root@server1 drbd-8.4.2]# yum install flex -y

[root@server1 drbd-8.4.2]# yum install rpm-build -y

[root@server1 drbd-8.4.2]# yum install kernel-devel -y

[root@server1 drbd-8.4.2]# ./configure --enable-spec

This generates the drbd.spec file.

[root@server1 drbd-8.4.2]# ./configure --enable-spec --with-km

This generates the drbd-km.spec file.

[root@server1 drbd-8.4.2]# cp /mnt/drbd-8.4.2.tar.gz /root/rpmbuild/SOURCES/

[root@server1 drbd-8.4.2]# rpmbuild -bb drbd.spec        # build the userland rpms from drbd.spec

[root@server1 drbd-8.4.2]# rpmbuild -bb drbd-km.spec     # build the kernel-module rpm from drbd-km.spec

[root@server1 drbd-8.4.2]# cd /root/rpmbuild/RPMS/x86_64/

[root@server1 x86_64]# ls

drbd-8.4.2-2.el6.x86_64.rpm

drbd-bash-completion-8.4.2-2.el6.x86_64.rpm

drbd-heartbeat-8.4.2-2.el6.x86_64.rpm

drbd-km-2.6.32_431.el6.x86_64-8.4.2-2.el6.x86_64.rpm

drbd-pacemaker-8.4.2-2.el6.x86_64.rpm

drbd-udev-8.4.2-2.el6.x86_64.rpm

drbd-utils-8.4.2-2.el6.x86_64.rpm

drbd-xen-8.4.2-2.el6.x86_64.rpm

 

[root@server1 x86_64]# rpm -ivh *                # install all the freshly built rpms on server1

[root@server1 x86_64]# scp * 172.25.50.20:       # copy them to server2, then on server2 run: cd ; rpm -ivh *
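For reference, the whole build can also be done in one go; a sketch using the same paths and versions as above (the rpmbuild tree is assumed to live under /root/rpmbuild):

yum install -y gcc flex rpm-build kernel-devel
cd /mnt && tar zxf drbd-8.4.2.tar.gz && cd drbd-8.4.2
./configure --enable-spec                              # generates drbd.spec
./configure --enable-spec --with-km                    # generates drbd-km.spec
mkdir -p /root/rpmbuild/SOURCES
cp /mnt/drbd-8.4.2.tar.gz /root/rpmbuild/SOURCES/
rpmbuild -bb drbd.spec                                 # userland rpms
rpmbuild -bb drbd-km.spec                              # kernel-module rpm
ls /root/rpmbuild/RPMS/x86_64/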

With these packages installed on both nodes, we can move on to the next part of the lab.

 

############ Storage #####

Add a 4 GB virtual disk to both server1 and server2.

Run fdisk -l on server1 and server2 to find the device path of the newly added disk; here it is /dev/vda.

[root@server1 drbd.d]# fdisk -l

Disk /dev/vda: 4294 MB, 4294967296 bytes

16 heads, 63 sectors/track, 8322 cylinders

Units = cylinders of 1008 * 512 = 516096 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

Change into the DRBD configuration directory: cd /etc/drbd.d/

[root@server1 drbd.d]# ls

global_common.conf

[root@server1 drbd.d]# vim lyitx.res

resource lyitx {
        meta-disk internal;                   # keep the DRBD metadata on the backing device itself
        device /dev/drbd1;                    # block device DRBD presents to the system
        syncer {
                verify-alg sha1;              # checksum algorithm used for online verification
        }
        on server1.example.com {
                disk /dev/vda;                # backing disk on server1
                address 172.25.50.10:7789;    # replication address and port
        }
        on server2.example.com {
                disk /dev/vda;                # backing disk on server2
                address 172.25.50.20:7789;
        }
}
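Before creating the metadata it is worth letting drbdadm parse the file; drbdadm dump prints the resource exactly as DRBD understands it and complains about syntax errors:

drbdadm dump lyitx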

[root@server1 drbd.d]# scp lyitx.res 172.25.50.20:/etc/drbd.d/    # copy the resource file to the same directory on server2

[root@server1 drbd.d]# drbdadm create-md lyitx                    # run this on both server1 and server2

Writing meta data...

initializing activity log

NOT initializing bitmap

New drbd meta data block successfully created.

[root@server1 drbd.d]# /etc/init.d/drbd start

Starting DRBD resources: [

     create res: lyitx

   prepare disk: lyitx

    adjust disk: lyitx

     adjust net: lyitx

]

..........

***************************************************************

 DRBD's startup script waits for the peer node(s) to appear.

 - In case this node was already a degraded cluster before the

   reboot the timeout is 0 seconds. [degr-wfc-timeout]

 - If the peer was available before the reboot the timeout will

   expire after 0 seconds. [wfc-timeout]

   (These values are for resource 'lyitx'; 0 sec -> wait forever)

 To abort waiting enter 'yes' [  14]:

[root@server1 drbd.d]# cat /proc/drbd        # check the resource state (connection, roles, sync status)

version: 8.4.2 (api:1/proto:86-101)

GIT-hash: 7ad5f850d711223713d6dcadc3dd48860321070c build by root@server1.example.com, 2017-02-17 16:28:52

 

 1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----

ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:4194140

[root@server1 drbd.d]# drbdadm primary lyitx --force      # force this node to become Primary for the first full sync

[root@server1 drbd.d]# cat /proc/drbd

 cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n-
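The initial full sync takes a while; it can be followed until both sides report UpToDate, for example with:

watch -n1 cat /proc/drbd        # refresh every second until ds: shows UpToDate/UpToDate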

[root@server1 drbd.d]# mkfs.ext4 /dev/drbd1               # create an ext4 filesystem, otherwise the device cannot be mounted

[root@server1 drbd.d]# cd /mnt/

[root@server1 mnt]# mount /dev/drbd1 /mnt                 # mount the DRBD device

[root@server1 /]# cd /mnt

[root@server1 mnt]# ls

lost+found

[root@server1 mnt]# vim index.html                        # write a test page

[root@server1 mnt]# ls

index.html  lost+found

[root@server1 ~]# cd

[root@server1 ~]# umount /mnt                             # unmount the device

[root@server1 ~]# drbdadm secondary lyitx                 # demote this node to Secondary; only then can the peer be promoted to Primary
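Switching the active side always follows the same pattern, so the hand-off done manually above can be captured in a small helper script (hypothetical, with the resource name and mount point of this lab hard-coded):

#!/bin/bash
# release.sh -- give up the DRBD resource on this node so the peer can take over
set -e
RES=lyitx          # DRBD resource name used in this lab
MNT=/mnt           # where the device is mounted on this node

umount "$MNT"                  # the filesystem must be unmounted first
drbdadm secondary "$RES"       # demote; two Primaries are not allowed in this setup

# the mirror-image steps on the peer would be:
#   drbdadm primary lyitx
#   mount /dev/drbd1 /mnt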

 

servre2

[[email protected] drbd.d]# drbdadm primary lyitx#设置成primary模式

[[email protected] drbd.d]# cat /proc/drbd

1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r---

[[email protected] drbd.d]# mount /dev/drbd1 /mnt#挂载,这里就无需格式化,

[[email protected] drbd.d]# cd /mnt

[[email protected] mnt]# ls

index.html  lost+found

[[email protected] mnt]# cat index.html

server1.example.com

Seeing server1's test page on server2 confirms that the replication works.
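Because verify-alg sha1 was configured in lyitx.res, an online consistency check of the two replicas can also be run from the current Primary; progress and any out-of-sync blocks are reported in /proc/drbd and the kernel log:

drbdadm verify lyitx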

 

 

 

####### heartbeat + mysql ###################

Now combine the heartbeat service with the mysql service: heartbeat provides the hot-standby failover, while the DRBD device holds the database files.

Procedure:

First stop the heartbeat service on both virtual machines.

[root@server1 mnt]# yum install mysql-server -y

[root@server1 mnt]# drbdadm primary lyitx

[root@server1 mnt]# mount /dev/drbd1 /mnt

[root@server1 mnt]# cd /var/lib/mysql/

[root@server1 mysql]# /etc/init.d/mysqld start            # start mysql once so it initializes /var/lib/mysql

[root@server1 mysql]# /etc/init.d/mysqld stop             # stop it again before copying the data

[root@server1 mysql]# cp -r * /mnt/                       # copy the initialized mysql data onto the DRBD device

[root@server1 mysql]# cd /mnt/

[root@server1 mnt]# ls

ibdata1  ib_logfile0  ib_logfile1  index.html  lost+found  mysql  mysql.sock  test

[root@server1 mnt]# rm -fr mysql.sock                     # mysql.sock is a runtime socket created when mysqld starts; delete the copied one (it is recreated on the next start)

 

[root@server1 mnt]# cd

[root@server1 ~]# umount /mnt/                            ## mysqld must be stopped here, otherwise the filesystem cannot be unmounted

[root@server1 ~]# mount /dev/drbd1 /var/lib/mysql/        # remount the DRBD device directly on the mysql data directory

[root@server1 ~]# chown mysql.mysql /var/lib/mysql/ -R    # mysql must own its data directory

[root@server1 mysql]# /etc/init.d/mysqld start

Starting mysqld:                                           [  OK  ]

[root@server1 mysql]# cd

[root@server1 ~]# /etc/init.d/mysqld stop

Stopping mysqld:                                           [  OK  ]

[root@server1 ~]# umount /var/lib/mysql/

[root@server1 ~]# drbdadm secondary lyitx

 

server2

[[email protected] /]# yum install mysql-server -y

[[email protected] /]# drbdadm primary lyitx

[[email protected] /]# mount /dev/drbd1 /var/lib/mysql/

[[email protected] /]# df##这时能够看到挂载的mysql

Filesystem                   1K-blocks    Used Available Use% Mounted on

/dev/mapper/VolGroup-lv_root  19134332 1106972  17055380   7% /

tmpfs                           510200       0    510200   0% /dev/shm

/dev/sda1                       495844   33458    436786   8% /boot

/dev/drbd1                     4128284   95192   3823388   3% /var/lib/mysql

[root@server2 /]# /etc/init.d/mysqld start

[root@server2 /]# /etc/init.d/mysqld stop

[root@server2 /]# umount /var/lib/mysql/

[root@server2 /]# drbdadm secondary lyitx

 

[root@server1 ~]# vim /etc/ha.d/haresources

Change the last line to read:

server1.example.com IPaddr::172.25.50.100/24/eth0 drbddisk::lyitx Filesystem::/dev/drbd1::/var/lib/mysql::ext4 mysqld
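When server1 acquires this resource group, heartbeat works through the entries left to right and releases them in reverse order; roughly the manual equivalent is the sequence below (only an illustration of the ordering, not the exact commands heartbeat runs):

# acquire, left to right
/etc/ha.d/resource.d/IPaddr 172.25.50.100/24/eth0 start   # 1. bring up the virtual IP
drbdadm primary lyitx                                     # 2. drbddisk promotes the DRBD resource
mount -t ext4 /dev/drbd1 /var/lib/mysql                   # 3. Filesystem mounts the device
/etc/init.d/mysqld start                                  # 4. finally start mysqld

# release, right to left
/etc/init.d/mysqld stop
umount /var/lib/mysql
drbdadm secondary lyitx
/etc/ha.d/resource.d/IPaddr 172.25.50.100/24/eth0 stop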

[root@server1 ~]# scp /etc/ha.d/haresources 172.25.50.20:/etc/ha.d/

root@172.25.50.20's password:

haresources                                                       100% 6023     5.9KB/s   00:00    

[root@server1 ~]# /etc/init.d/heartbeat start

Starting High-Availability services: INFO:  Resource is stopped

Done.

[root@server2 ~]# /etc/init.d/heartbeat start

Starting High-Availability services: INFO:  Resource is stopped

Done.

 

Test: run /etc/init.d/heartbeat stop on server1, then check with df on server2; if /dev/drbd1 shows up mounted on /var/lib/mysql there, the resource group has failed over successfully.
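A quick check on server2 after stopping heartbeat on server1 might look like this (hostnames as used throughout the lab):

[root@server2 ~]# df | grep drbd              # /dev/drbd1 should now be mounted on /var/lib/mysql
[root@server2 ~]# cat /proc/drbd              # ro: should show Primary/Secondary from server2's point of view
[root@server2 ~]# ip addr show eth0           # the 172.25.50.100 virtual IP should have moved here as well
[root@server2 ~]# /etc/init.d/mysqld status   # mysqld should have been started by heartbeat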

