34J-2 Corosync Cluster Basics


Environment

Node1:192.168.1.131 CentOS7.2

Node2:192.168.1.132 CentOS7.2


Preparation

[root@node1 ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.131   node1

192.168.1.132   node2

[root@node1 ~]# ssh-keygen -t rsa -P ''

[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub node1

[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub node2

[root@node1 ~]# yum -y install ansible

[root@node1 ~]# vim /etc/ansible/hosts

[ha]

192.168.1.131

192.168.1.132
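Before using the `[ha]` group, it is worth confirming that ansible can actually reach both nodes over the freshly distributed SSH keys. A quick check (output abbreviated; assumes the inventory above):

```
[root@node1 ~]# ansible ha -m ping
192.168.1.131 | SUCCESS => { "changed": false, "ping": "pong" }
192.168.1.132 | SUCCESS => { "changed": false, "ping": "pong" }
```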


On both Node1 and Node2:

# yum -y install pcs


[root@node1 ~]# ansible ha -m service -a 'name=pcsd state=started enabled=yes'

[root@node1 ~]# ansible ha -m shell -a 'echo "mageedu" | passwd --stdin hacluster'

[root@node1 ~]# pcs cluster auth node1 node2 -u hacluster

Password: 

node1: Already authorized

node2: Already authorized


[root@node1 ~]# pcs cluster setup --name mycluster node1 node2

Shutting down pacemaker/corosync services...

Redirecting to /bin/systemctl stop  pacemaker.service

Redirecting to /bin/systemctl stop  corosync.service

Killing any remaining services...

Removing all cluster configuration files...

node1: Succeeded

node2: Succeeded

Synchronizing pcsd certificates on nodes node1, node2...

node1: Success

node2: Success


Restarting pcsd on the nodes in order to reload the certificates...

node1: Success

node2: Success
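`pcs cluster setup` writes `/etc/corosync/corosync.conf` on every node. On CentOS 7.2 the generated file looks roughly like the sketch below (exact contents may differ in your environment):

```
totem {
    version: 2
    secauth: off
    cluster_name: mycluster
    transport: udpu
}

nodelist {
    node {
        ring0_addr: node1
        nodeid: 1
    }
    node {
        ring0_addr: node2
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}
```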


[root@node1 ~]# cd /etc/corosync/

[root@node1 corosync]# vim corosync.conf

Change the logging section so that corosync logs to a file:

logging {

    to_logfile: yes

    logfile: /var/log/cluster/corosync.log

}   

[root@node1 corosync]# scp corosync.conf node2:/etc/corosync/


Start the cluster:

[root@node1 corosync]# pcs cluster start --all

node2: Starting Cluster...

node1: Starting Cluster...


Check inter-node communication status (a status of "no faults" means the ring is healthy):

[root@node1 corosync]# corosync-cfgtool -s

Printing ring status.

Local node ID 1

RING ID 0

        id      = 192.168.1.131

        status  = ring 0 active with no faults
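When scripting health checks, the same test can be automated by searching the `corosync-cfgtool` output for the fault marker. A minimal sketch; here it parses the captured output above, while on a live node you would substitute `$(corosync-cfgtool -s)`:

```shell
# Captured ring status from the node above; on a live cluster use:
#   status_output=$(corosync-cfgtool -s)
status_output='RING ID 0
        id      = 192.168.1.131
        status  = ring 0 active with no faults'

# Report a fault (and exit non-zero) if any ring lacks the "no faults" marker.
if echo "$status_output" | grep -q 'no faults'; then
    echo "ring OK"          # prints "ring OK" for the sample above
else
    echo "ring FAULTY" >&2
    exit 1
fi
```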


Check cluster membership and the quorum API:

[root@node1 corosync]# corosync-cmapctl | grep members

runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0

runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(192.168.1.131) 

runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1

runtime.totem.pg.mrp.srp.members.1.status (str) = joined

runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0

runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(192.168.1.132) 

runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 1

runtime.totem.pg.mrp.srp.members.2.status (str) = joined
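The member list can also be checked programmatically, for example to alert when fewer nodes are joined than expected. A minimal sketch, parsing the captured output above; on a live node, substitute `$(corosync-cmapctl | grep members)`:

```shell
# Captured membership data; on a live cluster use:
#   members=$(corosync-cmapctl | grep members)
members='runtime.totem.pg.mrp.srp.members.1.status (str) = joined
runtime.totem.pg.mrp.srp.members.2.status (str) = joined'

# Count members whose status is "joined" and compare against the expected size.
expected=2
joined=$(echo "$members" | grep -c 'status (str) = joined')
echo "joined members: $joined"   # prints "joined members: 2"

if [ "$joined" -ne "$expected" ]; then
    echo "WARNING: expected $expected members, found $joined" >&2
fi
```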


View the cluster status:

[root@node1 corosync]# pcs status


Check the cluster configuration for errors:

[root@node1 corosync]# crm_verify -L -V

   error: unpack_resources:     Resource start-up disabled since no STONITH resources have been defined

   error: unpack_resources:     Either configure some or disable STONITH with the stonith-enabled option

   error: unpack_resources:     NOTE: Clusters with shared data need STONITH to ensure data integrity

Errors found during check: config not valid


Fix: disable STONITH. (As the warning itself notes, clusters with shared data need STONITH for data integrity, so this is only acceptable in a test environment without shared storage.)

[root@node1 corosync]# pcs property set stonith-enabled=false

[root@node1 corosync]# crm_verify -L -V

[root@node1 corosync]# pcs property list

Cluster Properties:

 cluster-infrastructure: corosync

 cluster-name: mycluster

 dc-version: 1.1.13-10.el7_2.4-44eb2dd

 have-watchdog: false

 stonith-enabled: false

 

Install crmsh and its pssh dependencies from local RPM packages (downloaded separately, since crmsh is not in the base CentOS repositories):

[root@node1 ~]# ls

anaconda-ks.cfg             Pictures

crmsh-2.1.4-1.1.x86_64.rpm  pssh-2.3.1-4.2.x86_64.rpm

Desktop                     Public

Documents                   python-pssh-2.3.1-4.2.x86_64.rpm

Downloads                   Templates

Music                       Videos

[root@node1 ~]# yum install *rpm

[root@node1 ~]# scp *rpm node2:/root

[root@node2 ~]# yum install *rpm -y


Display the cluster status with crmsh:

[root@node1 ~]# crm status

Last updated: Wed Sep 21 17:31:26 2016          Last change: Wed Sep 21 17:16:32 2016 by root via cibadmin on node1

Stack: corosync

Current DC: node1 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum

2 nodes and 0 resources configured


Online: [ node1 node2 ]


[root@node1 ~]# yum -y install httpd

[root@node1 ~]# echo "<h1>Node1.magedu.com</h1>" > /var/www/html/index.html

[root@node1 ~]# systemctl start httpd.service

[root@node1 ~]# systemctl enable httpd.service

[root@node2 ~]# yum -y install httpd

[root@node2 ~]# echo "<h1>Node2.magedu.com</h1>" > /var/www/html/index.html

[root@node2 ~]# systemctl start httpd.service

[root@node2 ~]# systemctl enable httpd.service
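Before handing httpd over to the cluster, it is worth confirming that each node serves its own page (run from either node; the IPs are the ones used above):

```
[root@node1 ~]# curl http://192.168.1.131
<h1>Node1.magedu.com</h1>
[root@node1 ~]# curl http://192.168.1.132
<h1>Node2.magedu.com</h1>
```

Note that once pacemaker manages the service, the `systemctl enable httpd` done above is usually reverted (`systemctl disable httpd`), so that the cluster, not systemd, decides where httpd runs.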


[root@node1 ~]# crm ra

crm(live)ra# cd

crm(live)# configure

crm(live)configure# primitive webip ocf:heartbeat:IPaddr params ip=192.168.1.80

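The article stops here, with only the VIP defined. A typical continuation — a hedged sketch assuming standard crmsh syntax, with the resource names `webserver` and `webservice` chosen for illustration — would add the httpd resource, group it with the VIP so they fail over together, and commit:

```
crm(live)configure# primitive webserver systemd:httpd op monitor interval=30s
crm(live)configure# group webservice webip webserver
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
```

After `commit`, `crm status` should show both resources started on the same node, and the page should be reachable via the VIP 192.168.1.80.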

This article is from the "追梦" blog; please retain the source when reposting: http://sihua.blog.51cto.com/377227/1855281
