Big Data High-Availability Cluster Environment Installation and Configuration (09): Installing a Spark High-Availability Cluster

Posted by AllEmpty


1. Get the Spark download link

Visit the official download page at http://spark.apache.org/downloads.html and choose the version you want to download.

2. Run the following commands to download and install Spark

cd /usr/local/src/
wget http://mirrors.tuna.tsinghua.edu.cn/apache/spark/spark-2.4.4/spark-2.4.4-bin-hadoop2.7.tgz
tar -zxvf spark-2.4.4-bin-hadoop2.7.tgz
mv spark-2.4.4-bin-hadoop2.7 /usr/local/spark
cd /usr/local/spark/conf
mv spark-env.sh.template spark-env.sh

 

3. Edit the spark-env.sh configuration

vi spark-env.sh

Append the following settings to the end of the file, binding Spark to Hadoop's configuration directory:

export JAVA_HOME=/usr/local/java/jdk
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SPARK_HOME=/usr/local/spark

export SPARK_MASTER_PORT=7077
# Non-HA (single-master) cluster configuration
# export SPARK_MASTER_IP=master
# HA cluster configuration
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=master:2181,master-backup:2181 -Dspark.deploy.zookeeper.dir=/spark"
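
The ZooKeeper recovery mode above only works if the ZooKeeper ensemble listed in spark.deploy.zookeeper.url is up and reachable. A quick sanity check before continuing (a sketch; the ZooKeeper install path below is an assumption, adjust it to wherever you installed ZooKeeper in the earlier part of this series):

# Check that each ZooKeeper node reports a leader/follower status
/usr/local/zookeeper/bin/zkServer.sh status

# Or connect with the CLI and list the root znodes
/usr/local/zookeeper/bin/zkCli.sh -server master:2181 ls /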

 

4. Add the log4j.properties configuration

vi log4j.properties

Add the following configuration (if you want to suppress the verbose log output printed to the console, change the INFO below to WARN):

# Set everything to be logged to the console
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# rootLogger takes precedence over rootCategory when both are present, so list both appenders here
log4j.rootLogger=INFO, console, file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.append=true
log4j.appender.file.file=${spark.yarn.app.container.log.dir}/spark.log
log4j.appender.file.MaxFileSize=100MB
log4j.appender.file.MaxBackupIndex=10
log4j.logger.org.apache.spark=INFO
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %p [%t] %c{1}:%L - %m%n

# Set the default spark-shell log level to WARN. When running the spark-shell, the
# log level for this class is used to overwrite the root logger's log level, so that
# the user can have different defaults for the shell and regular Spark apps.
log4j.logger.org.apache.spark.repl.Main=WARN

# Settings to quiet third party logs that are too verbose
log4j.logger.org.spark_project.jetty=WARN
log4j.logger.org.spark_project.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
log4j.logger.org.apache.parquet=ERROR
log4j.logger.parquet=ERROR

# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs in SparkSQL with Hive support
log4j.logger.org.apache.hadoop.hive.metastore.RetryingHMSHandler=FATAL
log4j.logger.org.apache.hadoop.hive.ql.exec.FunctionRegistry=ERROR

 

5. Edit the slaves configuration

mv slaves.template slaves

vi slaves

Delete the localhost entry and add the following:

node1
node2
node3

 

6. Specify the Spark master nodes

mv spark-defaults.conf.template spark-defaults.conf
vi spark-defaults.conf

Add the following configuration:

spark.master spark://master:7077,master-backup:7077
spark.yarn.jars hdfs://master:9000/spark/jars/*,hdfs://master-backup:9000/spark/jars/*
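
Note that spark.yarn.jars points at jars stored on HDFS, so the Spark jars have to be uploaded there before submitting jobs to YARN; otherwise Spark falls back to packaging and uploading the local jars for every application. A minimal sketch, assuming the /spark/jars path used above and that HDFS is already running:

hdfs dfs -mkdir -p /spark/jars
hdfs dfs -put /usr/local/spark/jars/* /spark/jars/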

 

7. Update the server system environment variables

This configuration change must be made on every server.

vi /etc/profile

Append the following at the end of the file:

export SPARK_HOME=/usr/local/spark/
export PATH=$PATH:$SPARK_HOME/bin
# Adjust this to your needs: if your programs run on Python 2 this line is not required; for Python 3 the related environment is installed in a later step
export PYSPARK_PYTHON=/usr/local/bin/python3

After saving and exiting, run the following command so the configuration takes effect immediately:

source /etc/profile
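
You can verify that the variables are in effect with a quick check (a sketch; spark-submit is only available on nodes where Spark is already installed, which at this point is the master):

echo $SPARK_HOME
spark-submit --version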

 

8. Install plugins and configure PySpark access to HBase

Copy the jars Spark needs in order to access HBase into Spark's jars directory:

cp /usr/local/hbase/lib/hbase-*.jar /usr/local/spark/jars/

Configure Spark access to Phoenix:

# Copy the Phoenix client jar into Spark's jars directory
cp phoenix-5.0.0-HBase-2.0-client.jar /usr/local/spark/jars/

 

9. Download commonly used Spark plugins

# Plugin for Spark to read HBase
# https://mvnrepository.com/artifact/org.apache.spark/spark-examples_2.11/1.6.0-typesafe-001
wget https://repo.typesafe.com/typesafe/maven-releases/org/apache/spark/spark-examples_2.11/1.6.0-typesafe-001/spark-examples_2.11-1.6.0-typesafe-001.jar
cp spark-examples_2.11-1.6.0-typesafe-001.jar /usr/local/spark/jars/

# Plugin for Spark Structured Streaming to read data from Kafka
wget https://repo1.maven.org/maven2/org/apache/spark/spark-sql-kafka-0-10_2.11/2.4.4/spark-sql-kafka-0-10_2.11-2.4.4.jar
cp spark-sql-kafka-0-10_2.11-2.4.4.jar /usr/local/spark/jars/

# Plugin for Spark Streaming to read data from Kafka
wget https://repo1.maven.org/maven2/org/apache/spark/spark-streaming-kafka-0-10_2.11/2.4.4/spark-streaming-kafka-0-10_2.11-2.4.4.jar
cp spark-streaming-kafka-0-10_2.11-2.4.4.jar /usr/local/spark/jars/

# MongoDB database connection driver
wget https://repo1.maven.org/maven2/org/mongodb/mongo-java-driver/3.10.2/mongo-java-driver-3.10.2.jar
cp mongo-java-driver-3.10.2.jar /usr/local/spark/jars/

# Plugin for Spark to read from and write to MongoDB
wget https://repo1.maven.org/maven2/org/mongodb/spark/mongo-spark-connector_2.11/2.4.1/mongo-spark-connector_2.11-2.4.1.jar
cp mongo-spark-connector_2.11-2.4.1.jar /usr/local/spark/jars/
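
Copying the jars into /usr/local/spark/jars/ makes them available to every application on the cluster. For the Maven-hosted connectors above there is also a per-job alternative: let Spark resolve them at submit time with --packages instead of installing them globally. A sketch, using the Kafka and MongoDB connectors as examples (your_app.py is a placeholder for your own application):

# Spark downloads the listed artifacts and their dependencies from Maven Central at submit time
spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.4,org.mongodb.spark:mongo-spark-connector_2.11:2.4.1 your_app.py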

 

10. Sync Spark to the other servers

rsync -avz /usr/local/spark/ master-backup:/usr/local/spark/
rsync -avz /usr/local/spark/ node1:/usr/local/spark/
rsync -avz /usr/local/spark/ node2:/usr/local/spark/
rsync -avz /usr/local/spark/ node3:/usr/local/spark/
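
A quick way to confirm that every node received the same files after the sync (a sketch; it assumes the passwordless SSH access already set up for this cluster):

# Compare the jar count on each node; all of them should report the same number
for host in master-backup node1 node2 node3; do
  echo -n "$host: "
  ssh $host "ls /usr/local/spark/jars | wc -l"
done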

 

11. Start Spark

Restart the HBase service:

/usr/local/hbase/bin/stop-hbase.sh
/usr/local/hbase/bin/start-hbase.sh

Start the Spark services on the master server:

/usr/local/spark/sbin/start-all.sh

On the master-backup server, start the second Master:

/usr/local/spark/sbin/start-master.sh

Run jps on both the master and master-backup servers; the Master process should appear on each:

31681 Master

Run jps on the other servers:

28660 Worker

Once everything is started, the Spark web console addresses become available. Open them in a browser: the Spark master on the master node shows Status ALIVE, while the one on the master-backup node shows Status STANDBY.

http://192.168.10.90:8080/

http://192.168.10.91:8080/
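
Because recoveryMode is set to ZOOKEEPER, the masters also register their state under the /spark znode configured in step 3. You can confirm this from the ZooKeeper CLI (a sketch, assuming the same ZooKeeper install path as before):

/usr/local/zookeeper/bin/zkCli.sh -server master:2181 ls /spark
# Typically shows child nodes such as leader_election and master_status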

 

12. Test master failover

First open the pages http://192.168.10.90:8080/ and http://192.168.10.91:8080/.

On the master server, run jps and note the PID of the Master process:

16073 Master

Then kill the Master process:

kill -9 16073

Start a spark-shell session (if nothing is running, refreshing the pages will not show the failover taking effect):

spark-shell --master spark://master:7077,master-backup:7077

 

Next, refresh the two open pages in the browser and check whether the Workers have switched over to the other master.
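
Once the standby has taken over (its status changes to ALIVE and the Workers re-register with it), you will usually want to bring the killed Master back so the cluster has a standby again; a minimal sketch:

# On the original master server; the restarted Master rejoins the cluster as STANDBY
/usr/local/spark/sbin/start-master.sh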

 

Copyright notice: this article was originally published on cnblogs by AllEmpty. You are welcome to repost it, but unless the author agrees otherwise you must keep this notice and provide a clearly visible link to the original on the article page; otherwise it will be regarded as infringement.

Author's blog: http://www.cnblogs.com/EmptyFS/
