HA Distributed Cluster Configuration (Part 3): Spark Cluster Setup
(1) Configuring Spark under HA
1. Spark version: spark-2.1.0-bin-hadoop2.7
2. Unpack and set environment variables
tar -zxvf spark-2.1.0-bin-hadoop2.7.tgz
mv spark-2.1.0-bin-hadoop2.7 /usr/spark-2.1.0
vim /etc/profile
export JAVA_HOME=/usr/java
export SCALA_HOME=/usr/scala
export HADOOP_HOME=/usr/hadoop-2.7.3
export ZK_HOME=/usr/zookeeper-3.4.8
export MYSQL_HOME=/usr/local/mysql
export HIVE_HOME=/usr/hive-2.1.1
export SPARK_HOME=/usr/spark-2.1.0
export PATH=$SPARK_HOME/bin:$HIVE_HOME/bin:$MYSQL_HOME/bin:$ZK_HOME/bin:$JAVA_HOME/bin:$SCALA_HOME/bin:$HADOOP_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
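These exports only take effect in a new shell unless the profile is re-sourced. A quick sanity check (assuming the paths above match your installation):

source /etc/profile
echo $SPARK_HOME          # expect /usr/spark-2.1.0
spark-submit --version    # expect a Spark 2.1.0 banner if PATH is set correctly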
3. Edit the spark-env.sh file
cd $SPARK_HOME/conf
vim spark-env.sh
# add:
export JAVA_HOME=/usr/java
export SCALA_HOME=/usr/scala
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=ha1:2181,ha2:2181,ha3:2181 -Dspark.deploy.zookeeper.dir=/spark"
export HADOOP_CONF_DIR=/usr/hadoop-2.7.3/etc/hadoop
export SPARK_MASTER_PORT=7077
export SPARK_EXECUTOR_INSTANCES=1
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_CORES=1
export SPARK_WORKER_MEMORY=1024M
export SPARK_MASTER_WEBUI_PORT=8080
export SPARK_CONF_DIR=/usr/spark-2.1.0/conf
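Setting spark.deploy.recoveryMode=ZOOKEEPER makes the masters elect a leader through the ZooKeeper quorum and persist recovery state under the znode named by spark.deploy.zookeeper.dir, so a standby master can take over with registered workers and running applications intact. A minimal way to confirm this later, once the masters are running (a sketch; the /json endpoint is the standalone master web UI's built-in status page):

# list the election/state znodes the Spark masters create
$ZK_HOME/bin/zkCli.sh -server ha1:2181 ls /spark
# each master's web UI also reports its status (ALIVE or STANDBY) as JSON
curl http://ha1:8080/json/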
4. Edit the slaves file
vim slaves
# add:
ha2
ha3
ha4
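start-all.sh reaches every host listed in slaves over SSH, so passwordless SSH from ha1 to the workers must already be in place (set up earlier in this series). A quick check:

# each command should print the worker's hostname without a password prompt
for h in ha2 ha3 ha4; do ssh $h hostname; done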
5. Distribute and start
cd /usr
scp -r spark-2.1.0 root@ha2:/usr
scp -r spark-2.1.0 root@ha3:/usr
scp -r spark-2.1.0 root@ha4:/usr
# on ha1
$SPARK_HOME/sbin/start-all.sh
# on ha2 and ha3
$SPARK_HOME/sbin/start-master.sh
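Once the masters and workers are up, the cluster can be smoke-tested with the bundled SparkPi example; listing all three masters in the URL lets the driver locate whichever one is currently the leader. A sketch, assuming the examples jar that ships with spark-2.1.0-bin-hadoop2.7:

spark-submit \
  --master spark://ha1:7077,ha2:7077,ha3:7077 \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/examples/jars/spark-examples_2.11-2.1.0.jar 100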
jps output on each node:
[root@ha1 spark-2.1.0]# jps
2464 NameNode
2880 ResourceManager
2771 DFSZKFailoverController
3699 Jps
2309 QuorumPeerMain
3622 Master
[root@ha2 zookeeper-3.4.8]# jps
2706 NodeManager
3236 Jps
2485 JournalNode
3189 Worker
2375 DataNode
2586 DFSZKFailoverController
2236 QuorumPeerMain
2303 NameNode
3622 Master
[root@ha3 zookeeper-3.4.8]# jps
2258 DataNode
2466 NodeManager
2197 QuorumPeerMain
2920 Jps
2873 Worker
2331 JournalNode
3622 Master
[root@ha4 ~]# jps
2896 Jps
2849 Worker
2307 JournalNode
2443 NodeManager
2237 DataNode
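With one ALIVE master on ha1 and standbys on ha2/ha3, failover can be exercised by hand. A sketch of the test:

# on ha1: stop the leading master
$SPARK_HOME/sbin/stop-master.sh
# on ha2 (or ha3): after the ZooKeeper session times out, one standby
# should report ALIVE instead of STANDBY
curl http://ha2:8080/json/ | grep -i status
# bring ha1 back; it rejoins as a standby
$SPARK_HOME/sbin/start-master.sh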
6. Shut down and take a VM snapshot (named sparkok)
# cluster startup order
# on ha1, ha2, ha3
cd $ZK_HOME
./bin/zkServer.sh start
# on ha1
cd $HADOOP_HOME
./sbin/start-all.sh
cd $SPARK_HOME
./sbin/start-all.sh
# on ha2 and ha3
$SPARK_HOME/sbin/start-master.sh
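For convenience, the whole sequence can be driven from ha1 over SSH. A hypothetical helper script, assuming identical paths on every node and passwordless SSH:

#!/bin/bash
# start-cluster.sh - hypothetical helper, run on ha1
for h in ha1 ha2 ha3; do
  # source /etc/profile so java is on PATH in the non-login ssh shell
  ssh $h "source /etc/profile && \$ZK_HOME/bin/zkServer.sh start"
done
$HADOOP_HOME/sbin/start-all.sh      # HDFS + YARN (HA)
$SPARK_HOME/sbin/start-all.sh       # Spark master on ha1 plus all workers
for h in ha2 ha3; do
  ssh $h "source /etc/profile && \$SPARK_HOME/sbin/start-master.sh"  # standby masters
done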