Spark 2.4 Standalone Deployment

Posted by zhance


1 Installing Spark

Download Spark with the following command (the URL is a mirror of the official Spark download page):

wget http://mirrors.hust.edu.cn/apache/spark/spark-2.4.0/spark-2.4.0-bin-hadoop2.7.tgz

Extract the tgz archive:

tar zxvf spark-2.4.0-bin-hadoop2.7.tgz

Add the environment variables to ~/.bashrc and reload it:

vim ~/.bashrc

export SPARK_HOME=$HOME/spark-2.4.0-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin

source ~/.bashrc
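To confirm the variables took effect in the current shell, a quick sanity check (assuming the install layout above; requires Spark to already be extracted):

```shell
# SPARK_HOME should point at the extracted directory
echo $SPARK_HOME

# spark-submit is now on the PATH; print its version banner
spark-submit --version
```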

Open the configuration file:

cd $SPARK_HOME/conf
cp spark-env.sh.template spark-env.sh
vim spark-env.sh

Add the following variables:

export JAVA_HOME=/usr/java/jdk1.8.0_191-amd64
export SCALA_HOME=/usr/java/scala-2.11.8
export SPARK_HOME=/root/spark-2.4.0-bin-hadoop2.7
export SPARK_MASTER_IP=** your ip **
export SPARK_EXECUTOR_MEMORY=1G
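Note that in Spark 2.x, `SPARK_MASTER_IP` is deprecated in favor of `SPARK_MASTER_HOST`, although the old name is still honored. A few other commonly tuned standalone settings can go in the same spark-env.sh (the values below are illustrative, not required):

```shell
# Preferred name for the master bind address in Spark 2.x
export SPARK_MASTER_HOST=** your ip **
# Master RPC port (default 7077) and master web UI port (default 8080)
export SPARK_MASTER_PORT=7077
export SPARK_MASTER_WEBUI_PORT=8080
# Cores and memory each worker may offer to applications
export SPARK_WORKER_CORES=2
export SPARK_WORKER_MEMORY=2G
```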

2 Deploying Spark in Standalone Mode

Start Spark:

cd $SPARK_HOME
./sbin/start-all.sh
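`start-all.sh` launches a Master and one Worker on this host, so both JVMs should appear in `jps`, and the master's web UI should answer on its port (commands below assume the default port 8080 and a running cluster):

```shell
# Both Master and Worker JVMs should be listed
jps | grep -E 'Master|Worker'

# The standalone master's web UI defaults to port 8080
curl -s http://localhost:8080 | head -n 5
```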

Check that the startup succeeded by launching the Spark shell:

./bin/spark-shell

Output like the following indicates success:

2018-12-26 11:20:33 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://spark-2:4040
Spark context available as 'sc' (master = local[*], app id = local-1545841253565).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.0
      /_/
         
Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_191)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 
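Note the `master = local[*]` in the log above: invoked without arguments, `spark-shell` runs in local mode and does not actually exercise the standalone cluster. To test the cluster itself, pass the master URL explicitly (substitute your master's address):

```shell
# Connect the shell to the standalone master instead of local mode
./bin/spark-shell --master spark://** your ip **:7077
```

Once connected, a small job such as `sc.parallelize(1 to 100).sum` at the `scala>` prompt should run on the cluster, and the application should appear under "Running Applications" in the master's web UI.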
