hadoop+zookeeper+yarn+spark high-availability (active/standby) startup steps
Posted by 风起时的悟
Environment setup follows https://www.cnblogs.com/zimo-jing/p/8892697.html
1. Start ZooKeeper on every node, one node at a time
# zkServer.sh start
>>
7915 QuorumPeerMain
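To confirm the quorum formed correctly, each node's role can also be checked; exactly one node should report leader and the rest follower (output abbreviated to the relevant line):
# zkServer.sh status
>>
Mode: follower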
2. Start the JournalNode on every node, one node at a time, so the NameNode edit logs stay in sync (this step can be skipped; step 3 starts it as well)
# hadoop-daemon.sh start journalnode
>>
7915 QuorumPeerMain
8109 JournalNode
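For reference, the NameNodes share their edit log through this JournalNode quorum, which is pointed to in hdfs-site.xml; a minimal sketch, where the nameservice name mycluster, the JournalNode host list, and the local storage path are placeholders rather than values taken from this cluster:
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://master:8485;slave1:8485/mycluster</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/data/journalnode</value>
</property>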
3. Start HDFS on the master node
# start-dfs.sh
>>
8279 NameNode
7915 QuorumPeerMain
8604 DFSZKFailoverController
8109 JournalNode
Web UI:
master http://master:50070
slave1 http://slave1:50070
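One NameNode should come up active and the other standby; this can be verified with hdfs haadmin (nn1 and nn2 below stand for the NameNode IDs defined by dfs.ha.namenodes.<nameservice> in hdfs-site.xml and may be named differently in your setup):
# hdfs haadmin -getServiceState nn1
>>
active
# hdfs haadmin -getServiceState nn2
>>
standby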
4. Start YARN on the master node (also run this on slave1)
# start-yarn.sh
-----------master-------------
>>
8279 NameNode
7915 QuorumPeerMain
8604 DFSZKFailoverController
8732 ResourceManager
8109 JournalNode
-----------slave1-------------
>>
8192 DataNode
8113 NameNode
8481 NodeManager
8006 JournalNode
8348 DFSZKFailoverController
7935 QuorumPeerMain
Web UI:
master:8088
slave1:8088
(Note: slave1's web UI redirects to master; the standby ResourceManager forwards web requests to the active one)
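The ResourceManager HA state can be verified the same way (rm1 and rm2 stand for the IDs set by yarn.resourcemanager.ha.rm-ids and may differ in your yarn-site.xml):
# yarn rmadmin -getServiceState rm1
>>
active
# yarn rmadmin -getServiceState rm2
>>
standby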
5. Start Spark
-----------master-------------
# start-master.sh
# start-slaves.sh
>>
8279 NameNode
7915 QuorumPeerMain
8604 DFSZKFailoverController
8732 ResourceManager
8109 JournalNode
9150 Master
9374 Worker
-----------slave1-------------
# start-master.sh
>>
8192 DataNode
8113 NameNode
8481 NodeManager
8786 Master
8006 JournalNode
8348 DFSZKFailoverController
9164 Worker
7935 QuorumPeerMain
Web UI: active master  http://master:8081/
        standby master http://slave1:8082/
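For the standby Master on slave1 to take over, Spark standalone must be running in ZooKeeper recovery mode; a minimal spark-env.sh sketch (the ZooKeeper address list and the znode directory are placeholders for whatever your cluster uses):
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
  -Dspark.deploy.zookeeper.url=master:2181,slave1:2181 \
  -Dspark.deploy.zookeeper.dir=/spark-ha"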
6. Submit a job
YARN, client mode:
spark-submit --master yarn --conf spark.pyspark.python=/usr/python3/python --deploy-mode client hello.py
Spark standalone, client mode:
spark-submit --master spark://master:7077 --conf spark.pyspark.python=python3 --deploy-mode client hello.py
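hello.py itself is not shown in the original post; a minimal illustrative sketch of such a script (any small PySpark job would do):
from pyspark.sql import SparkSession

# The master URL and deploy mode come from spark-submit, so nothing cluster-specific is hard-coded here.
spark = SparkSession.builder.appName("hello").getOrCreate()

# A trivial job: distribute a small range across the executors and sum it.
total = spark.sparkContext.parallelize(range(100)).sum()
print("sum =", total)

spark.stop()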
Done!