ZooKeeper-Based High-Availability Cluster

1. Prepare the ZooKeeper servers

#node1,node2,node3
#for installation steps, see http://suyanzhu.blog.51cto.com/8050189/1946580
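
The ZooKeeper installation itself is only linked above; as a rough reference, a minimal zoo.cfg for a three-node ensemble on node1, node2 and node3 could look like the sketch below (the dataDir path is an assumption, not taken from the original setup):

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper/data
clientPort=2181
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888

Each node also needs a myid file containing its own server number, e.g. on node1:

echo 1 > /opt/zookeeper/data/myid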


2. Prepare the NameNode nodes

#node1,node4


3. Prepare the JournalNode nodes

#node2,node3,node4


4. Prepare the DataNode nodes

#node2,node3,node4
#start a DataNode with: hadoop-daemon.sh start datanode
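
If a DataNode has been started this way, a quick sanity check (not in the original) is to look for its process with jps:

jps
#the output should include a line like: 12345 DataNode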


5. Edit Hadoop's hdfs-site.xml configuration file

<configuration>
        <property>
                <name>dfs.nameservices</name>
                <value>yunshuocluster</value>
        </property>
        <property>
                <name>dfs.ha.namenodes.yunshuocluster</name>
                <value>nn1,nn2</value>
        </property>
        <property>
                <name>dfs.namenode.rpc-address.yunshuocluster.nn1</name>
                <value>node1:8020</value>
        </property>
        <property>
                <name>dfs.namenode.rpc-address.yunshuocluster.nn2</name>
                <value>node4:8020</value>
        </property>
        <property>
                <name>dfs.namenode.http-address.yunshuocluster.nn1</name>
                <value>node1:50070</value>
        </property>
        <property>
                <name>dfs.namenode.http-address.yunshuocluster.nn2</name>
                <value>node4:50070</value>
        </property>
        <property>
                <name>dfs.namenode.shared.edits.dir</name>
                <value>qjournal://node2:8485;node3:8485;node4:8485/yunshuocluster</value>
        </property>
        <property>
                <name>dfs.client.failover.proxy.provider.yunshuocluster</name>
                <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        </property>
        <property>
                <name>dfs.ha.fencing.methods</name>
                <value>sshfence</value>
        </property>
        <property>
                <name>dfs.ha.fencing.ssh.private-key-files</name>
                <value>/root/.ssh/id_dsa</value>
        </property>
        <property>
                <name>dfs.journalnode.edits.dir</name>
                <value>/opt/journalnode/</value>
        </property>
        <property>
                <name>dfs.ha.automatic-failover.enabled</name>
                <value>true</value>
        </property>
</configuration>
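
Note that the sshfence method configured above relies on each NameNode being able to SSH to the other as root without a password, using the key at /root/.ssh/id_dsa. A rough sketch of setting that up on node1 and node4 (the key type and paths simply mirror the config; adjust to your environment):

ssh-keygen -t dsa -f /root/.ssh/id_dsa -N ""
ssh-copy-id -i /root/.ssh/id_dsa.pub root@node1
ssh-copy-id -i /root/.ssh/id_dsa.pub root@node4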


6. Edit Hadoop's core-site.xml configuration file

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://yunshuocluster</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop-2.5</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>node1:2181,node2:2181,node3:2181</value>
    </property>
</configuration>
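
As a quick sanity check (not part of the original steps), hdfs getconf can confirm these values are being picked up:

hdfs getconf -confKey fs.defaultFS
hdfs getconf -confKey ha.zookeeper.quorum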


7. Configure the slaves file

node2
node3
node4


8. Start ZooKeeper (node1, node2, node3)

zkServer.sh start
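
To confirm the ensemble formed correctly, check each node's role; with three nodes, one should report leader and the other two follower:

zkServer.sh status
#Mode: leader   (one node)
#Mode: follower (the other two)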


9. Start the JournalNodes (run the command below on each of node2, node3 and node4)

#start command below; to stop, use: hadoop-daemon.sh stop journalnode
hadoop-daemon.sh start journalnode
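
On each of node2, node3 and node4, jps should now show a JournalNode process:

jps
#the output should include a line like: 2345 JournalNode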


10. Check the JournalNodes by inspecting their logs

cd /home/hadoop-2.5.1/logs
ls
tail -200 hadoop-root-journalnode-node2.log


11. Format the NameNode (only one of the two; here the NameNode on node4 is formatted)

hdfs namenode -format

cd /opt/hadoop-2.5
#copy the metadata so the two NameNodes are in sync
scp -r /opt/hadoop-2.5/* [email protected]:/opt/hadoop-2.5/
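
Copying hadoop.tmp.dir with scp is one way to seed the second NameNode. An alternative (assuming the freshly formatted NameNode has already been started) is to bootstrap the standby directly on the other node:

#run on the NameNode that was NOT formatted
hdfs namenode -bootstrapStandby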


12. Initialize ZKFC

hdfs zkfc -formatZK
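
If the format succeeded, ZooKeeper now holds a znode for the nameservice under the standard /hadoop-ha path; you can verify it with the ZooKeeper CLI:

zkCli.sh -server node1:2181
ls /hadoop-ha
#expected output: [yunshuocluster]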


13. Start the services

start-dfs.sh
#stop-dfs.sh stops the services
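
After start-dfs.sh, you can check which NameNode became active and which is standby (nn1 and nn2 are the IDs defined in hdfs-site.xml above):

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
#one should report "active", the other "standby"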


This article is from the “素颜” blog; please retain this source when reposting: http://suyanzhu.blog.51cto.com/8050189/1946843
