Running Spark, PySpark for the first time
Posted: 2018-08-19 22:58:22

I bought a book to try to learn Spark. After downloading Spark and following the steps it lays out, I am having trouble getting spark-shell and pyspark to launch. I was wondering if someone could point out what I need to do to get spark-shell or pyspark running.
Here is what I did.

I created the folder C:\spark and put all the files from the Spark tar into that folder.

I also created c:\hadoop\bin and put winutils.exe into that folder.

Then I ran the following:
> set SPARK_HOME=c:\spark
> set HADOOP_HOME=c:\hadoop
> set PATH=%SPARK_HOME%\bin;%PATH%
> set PATH=%HADOOP_HOME%\bin;%PATH%
> set PYTHONPATH=C:\Users\AppData\Local\Continuum\anaconda3
I created C:\tmp\hive and ran:
> cd c:\hadoop\bin
> winutils.exe chmod -R 777 C:\tmp\hive
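(As a side check, the same winutils binary can list the directory to confirm the permissions took effect; a sketch, assuming this winutils build supports the ls subcommand:)

> winutils.exe ls C:\tmp\hive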
I also ran the following:
> set PYSPARK_PYTHON=C:\Users\AppData\Local\Continuum\anaconda3\python
> set PYSPARK_DRIVER_PYTHON=C:\Users\AppData\Local\Continuum\anaconda3\ipython
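(Note: as the comment further down points out, ipython lives under anaconda3\Scripts rather than directly under anaconda3, and both variables are normally pointed at an actual executable rather than a folder. A hypothetical sketch of what that would look like; <username> is only a placeholder for the account segment omitted from the paths above:)

> rem <username> below is a placeholder, not the actual account name
> set PYSPARK_PYTHON=C:\Users\<username>\AppData\Local\Continuum\anaconda3\python.exe
> set PYSPARK_DRIVER_PYTHON=C:\Users\<username>\AppData\Local\Continuum\anaconda3\Scripts\ipython.exe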
Also, a quick question: I tried to check and confirm that I had set the SPARK_HOME environment variable by doing the following (at least I think that is how it is done; is this the right way to check whether an environment variable is set correctly?)
>echo %SPARK_HOME%
and all I got back was the literal %SPARK_HOME%.
I also ran:
>echo %PATH%
and I did not see %SPARK_HOME%\bin or %HADOOP_HOME%\bin anywhere in the directories printed in CMD.
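(Side note: set only applies to the cmd window it is run in, so if echo prints the literal %SPARK_HOME% back, the variable is simply not defined in that particular window. A minimal sketch of setting and verifying everything in one session, using the paths above, before launching pyspark from that same window:)

> set SPARK_HOME=c:\spark
> set HADOOP_HOME=c:\hadoop
> set PATH=%SPARK_HOME%\bin;%HADOOP_HOME%\bin;%PATH%
> echo %SPARK_HOME%
c:\spark
> echo %HADOOP_HOME%
c:\hadoop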
When I finally tried to run pyspark:
C:\spark\bin>pyspark
I got the following error message:
Missing Python executable 'C:\Users\AppData\Local\Continuum\anaconda3\python', defaulting to 'C:\spark\bin\..' for SPARK_HOME environment variable. Please install Python or specify the correct Python executable in PYSPARK_DRIVER_PYTHON or PYSPARK_PYTHON environment variable to detect SPARK_HOME safely.
'C:\Users\AppData\Local\Continuum\anaconda3\ipython' is not recognized as an internal or external command, operable program or batch file.
And when I tried to run spark-shell:
C:\spark\bin>spark-shell
I got the following error message:
Missing Python executable 'C:\Users\AppData\Local\Continuum\anaconda3\python', defaulting to 'C:\spark\bin\..' for SPARK_HOME environment variable. Please install Python or specify the correct Python executable in PYSPARK_DRIVER_PYTHON or PYSPARK_PYTHON environment variable to detect SPARK_HOME safely.
'C:\Users\AppData\Local\Continuum\anaconda3\ipython' is not recognized as an internal or external command, operable program or batch file.
C:\spark\bin>spark-shell
Missing Python executable 'C:\Users\AppData\Local\Continuum\anaconda3\python', defaulting to 'C:\spark\bin\..' for SPARK_HOME environment variable. Please install Python or specify the correct Python executable in PYSPARK_DRIVER_PYTHON or PYSPARK_PYTHON environment variable to detect SPARK_HOME safely.
2018-08-19 18:29:01 ERROR Shell:397 - Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:379)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:394)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:387)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:791)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:761)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:634)
at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2467)
at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2467)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2467)
at org.apache.spark.SecurityManager.<init>(SecurityManager.scala:220)
at org.apache.spark.deploy.SparkSubmit$.secMgr$lzycompute$1(SparkSubmit.scala:408)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$secMgr$1(SparkSubmit.scala:408)
at org.apache.spark.deploy.SparkSubmit$$anonfun$doPrepareSubmitEnvironment$7.apply(SparkSubmit.scala:416)
at org.apache.spark.deploy.SparkSubmit$$anonfun$doPrepareSubmitEnvironment$7.apply(SparkSubmit.scala:416)
at scala.Option.map(Option.scala:146)
at org.apache.spark.deploy.SparkSubmit$.doPrepareSubmitEnvironment(SparkSubmit.scala:415)
at org.apache.spark.deploy.SparkSubmit$.prepareSubmitEnvironment(SparkSubmit.scala:250)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:171)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2018-08-19 18:29:01 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2018-08-19 18:29:08 WARN Utils:66 - Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
Spark context Web UI available at http://NJ1-BCTR-10504.usa.fxcorp.prv:4041
Spark context available as 'sc' (master = local[*], app id = local-1534717748215).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.1
      /_/
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_181)
Type in expressions to have them evaluated.
Type :help for more information.
scala>
Comments:

You will find ipython under the anaconda3/Scripts folder, not directly under anaconda3.

Answer 1:

Here is what I see missing from your setup:
1.
Apache Spark needs Java 1.6 or later. Make sure you have a JDK installed (a recent version) and that its bin directory is on your PATH environment variable, for example:
C:\Program Files\Java\jdk1.8.0_172\bin
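(A sketch of the corresponding variables, assuming the JDK is installed at the path above:)

> set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_172
> set PATH=%JAVA_HOME%\bin;%PATH%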
Try running the simple Java command below at a cmd prompt to verify that Java is correctly installed on your machine:
java -version
Once Java is installed successfully, set the SPARK_HOME environment variable to your Spark folder:
C:\Spark
Since you are running Spark on your local system, there is no need to set HADOOP_HOME; Spark can run with its own standalone resource manager.
2.
To get pyspark working, you may need to install the pyspark Python package:
pip install pyspark
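(Once it is installed, a quick sanity check that the package is importable, run from any cmd prompt; this assumes the python on your PATH is the same Anaconda interpreter the package was installed into:)

> python -c "import pyspark; print(pyspark.__version__)"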
Logging setup (nice to have):
I see that your logs are quite verbose; you can suppress the INFO messages via the 'log4j.properties' file under the spark/conf folder.
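(A sketch of the usual way to do that: Spark ships a template in the conf folder; copy it and lower the root log level. The path below assumes Spark was extracted to c:\spark:)

> cd c:\spark\conf
> copy log4j.properties.template log4j.properties

Then open log4j.properties and change the line "log4j.rootCategory=INFO, console" to "log4j.rootCategory=WARN, console".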
Answer 2:

I would download a VM with everything already configured (a fully configured node): https://mapr.com/products/mapr-sandbox-hadoop/ . There you can use Spark together with HDFS, Hive, and any other tools.