Several errors while trying to run Spark with python 3
【Posted】: 2018-06-21 11:08:05
【Question】: I'm trying to run spark with pyspark on python 3 and ubuntu 18.04, but I'm running into a bunch of different errors that I don't know how to handle. I'm using the Java 10 jdk and my JAVA_HOME variable is already set.
This is the code I'm trying to run in python:
import sys
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
sc = SparkContext(appName="PysparkStreaming")
ssc = StreamingContext(sc, 3)  # streaming batches will execute every 3 seconds
lines = ssc.textFileStream('/home/mabarberan/Escritorio/prueba spark/')  # the argument is the directory to watch
counts = lines.flatMap(lambda line: line.split(" ")) \
    .map(lambda x: (x, 1)) \
    .reduceByKey(lambda a, b: a + b)
counts.pprint()
ssc.start()
ssc.awaitTermination()
These are the errors I get:
/home/mabarberan/anaconda3/bin/python /home/mabarberan/Descargas/carpeta.py
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/home/mabarberan/anaconda3/lib/python3.6/site-packages/pyspark/jars/hadoop-auth-2.7.3.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2018-06-21 12:53:07 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
[Stage 1:> (0 + 1) / 1]2018-06-21 12:53:13 ERROR PythonRunner:91 - Python worker exited unexpectedly (crashed)
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/home/mabarberan/anaconda3/lib/python3.6/site-packages/pyspark/python/lib/pyspark.zip/pyspark/worker.py", line 176, in main
("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 2.7 than that in driver 3.6, PySpark cannot run with different minor versions.Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.
I can't paste all of the error output here, but this is what comes next:
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/home/mabarberan/anaconda3/lib/python3.6/site-packages/pyspark/python/lib/pyspark.zip/pyspark/worker.py", line 176, in main
("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 2.7 than that in driver 3.6, PySpark cannot run with different minor versions.Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.
2018-06-21 12:53:13 ERROR TaskSetManager:70 - Task 0 in stage 1.0 failed 1 times; aborting job
2018-06-21 12:53:13 ERROR JobScheduler:91 - Error running job streaming job 1529578392000 ms.0
org.apache.spark.SparkException: An exception was raised by Python:
Traceback (most recent call last):
File "/home/mabarberan/anaconda3/lib/python3.6/site-packages/pyspark/streaming/util.py", line 65, in call
r = self.func(t, *rdds)
File "/home/mabarberan/anaconda3/lib/python3.6/site-packages/pyspark/streaming/dstream.py", line 171, in takeAndPrint
taken = rdd.take(num + 1)
File "/home/mabarberan/anaconda3/lib/python3.6/site-packages/pyspark/rdd.py", line 1375, in take
res = self.context.runJob(self, takeUpToNumLeft, p)
File "/home/mabarberan/anaconda3/lib/python3.6/site-packages/pyspark/context.py", line 1013, in runJob
sock_info = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
File "/home/mabarberan/anaconda3/lib/python3.6/site-packages/py4j/java_gateway.py", line 1257, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/home/mabarberan/anaconda3/lib/python3.6/site-packages/py4j/protocol.py", line 328, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 0, localhost, executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/home/mabarberan/anaconda3/lib/python3.6/site-packages/pyspark/python/lib/pyspark.zip/pyspark/worker.py", line 176, in main
("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 2.7 than that in driver 3.6, PySpark cannot run with different minor versions.Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.
Process finished with exit code 1
I googled these errors and saw that they happen to other people individually, but it seems I have all of them at once. I tried some of the fixes I found online, but they don't seem to work, so I'm stuck. Any help would be appreciated.
Thanks in advance
【Question comments】:
【Answer 1】: You are starting the master with anaconda's python while the workers are using the default python. You can either remove the anaconda path from .bashrc, or set the PYSPARK_PYTHON and SPARK_DRIVER_PYTHON variables in spark-env to the python path you want to use. For example, add the lines below to $SPARK_HOME/conf/spark-env.sh:
export PYSPARK_PYTHON=/usr/bin/python3
export SPARK_DRIVER_PYTHON=/usr/bin/python3
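If editing spark-env.sh is not convenient, a minimal sketch of the same idea (not part of the original answer) is to set the interpreter paths from the driver script itself, before the SparkContext is created; the /usr/bin/python3 path is an assumption, so point it at whichever python 3 the workers should also use:
import os

# Assumed interpreter path: driver and workers must resolve to the same python 3
os.environ["PYSPARK_PYTHON"] = "/usr/bin/python3"
os.environ["PYSPARK_DRIVER_PYTHON"] = "/usr/bin/python3"

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="PysparkStreaming")
ssc = StreamingContext(sc, 3)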
【Comments】:
Hey @hamza tuna, thanks for your answer. I already got that fix from the internet and tried it, but I still have the same problem. This is my environment. Variables in .bashrc: #spark export SPARK_HOME=/usr/local/spark-2.3.1-bin-hadoop2.7 export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.8.2.1-src.zip:$PYTHONPATH export IPYTHON=1 export PYSPARK_DRIVER_PYTHON=ipython3 export PYSPARK_DRIVER_PYTHON_OPTS="notebook" export PYSPARK_PYTHON=/usr/bin/python3 export SPARK_DRIVER_PYTHON=/usr/bin/python3
I don't know how much anaconda mattered, but I removed the anaconda path lines from .bashrc and that solved it.
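As a quick sanity check (a sketch assuming a working local PySpark install, not code from the thread), you can have a worker task report its own interpreter version and compare it with the driver's; the two tuples should match once the environment is set consistently:
import sys
from pyspark import SparkContext

sc = SparkContext(appName="VersionCheck")

def worker_python_version(_):
    # Executed on a worker: report the interpreter version the worker runs under
    import sys
    return sys.version_info[:2]

print("driver:", sys.version_info[:2])
print("workers:", sc.parallelize([0], 1).map(worker_python_version).collect())
sc.stop()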