Python worker failed to connect back

【Title】Python worker failed to connect back 【Posted】2018-11-11 19:06:36 【Question】:

I am new to Spark and am trying to complete a Spark tutorial: link to tutorial

After installing it on my local machine (Win10 64-bit, Python 3, Spark 2.4.0) and setting all the environment variables (HADOOP_HOME, SPARK_HOME, etc.), I am trying to run a simple Spark job via the file WordCount.py:

from pyspark import SparkContext, SparkConf

if __name__ == "__main__":
    conf = SparkConf().setAppName("word count").setMaster("local[2]")
    sc = SparkContext(conf = conf)

    lines = sc.textFile("C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/in/word_count.text")
    words = lines.flatMap(lambda line: line.split(" "))
    wordCounts = words.countByValue()

    for word, count in wordCounts.items():
        print(" : ".format(word, count))

After running it from the terminal:

spark-submit WordCount.py

I get the error below. By commenting out the code line by line, I found that it crashes at:

wordCounts = words.countByValue()

Any idea what I should check to get this working?

Traceback (most recent call last):
  File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 25, in <module>
ModuleNotFoundError: No module named 'resource'
18/11/10 23:16:58 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.SparkException: Python worker failed to connect back.
        at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
        at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
        at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
        at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
        at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:121)
        at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
        at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
        at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
        at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
        at java.net.PlainSocketImpl.accept(Unknown Source)
        at java.net.ServerSocket.implAccept(Unknown Source)
        at java.net.ServerSocket.accept(Unknown Source)
        at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
        ... 14 more
18/11/10 23:16:58 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
Traceback (most recent call last):
  File "C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/rdd/WordCount.py", line 19, in <module>
    wordCounts = words.countByValue()
  File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 1261, in countByValue
  File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 844, in reduce
  File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 816, in collect
  File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py", line 1257, in __call__
  File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure:
Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): org.apache.spark.SparkException: Python worker failed to connect back.
        at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
        at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
        at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
        at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
        at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:121)
        at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
        at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
        at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
        at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
        at java.net.PlainSocketImpl.accept(Unknown Source)
        at java.net.ServerSocket.implAccept(Unknown Source)
        at java.net.ServerSocket.accept(Unknown Source)
        at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
        ... 14 more

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
        at scala.Option.foreach(Option.scala:257)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
        at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:166)
        at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:282)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Python worker failed to connect back.
        at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
        at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
        at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
        at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
        at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:121)
        at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        ... 1 more
Caused by: java.net.SocketTimeoutException: Accept timed out
        at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
        at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
        at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
        at java.net.PlainSocketImpl.accept(Unknown Source)
        at java.net.ServerSocket.implAccept(Unknown Source)
        at java.net.ServerSocket.accept(Unknown Source)
        at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
        ... 14 more

Following theplatypus's suggestion, I checked whether the 'resource' module can be imported directly from the terminal. Apparently it cannot:

>>> import resource
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'resource'

As for the installation, I followed the instructions from this tutorial:

- Downloaded spark-2.4.0-bin-hadoop2.7.tgz from the Apache Spark website
- Unzipped it to my C: drive
- Already had Python 3 (Anaconda distribution) and Java installed
- Created a local 'C:\hadoop\bin' folder to store winutils.exe
- Created the 'C:\tmp\hive' folder and gave Spark access to it
- Added the environment variables (SPARK_HOME, HADOOP_HOME, etc.)

Is there anything additional I should install?

【Comments】:

【Answer 1】:

I ran into the same error. I fixed it by installing an earlier version of Spark (2.3 instead of 2.4). Now it works perfectly; it is probably an issue with the latest version of pyspark.

【Comments】:

Yes, it seems to be a version issue. I used 2.2.0 and it is working now. Thanks.

【Answer 2】:

The core of the problem is the connection between pyspark and Python, and it was solved by redefining the environment variables.

I just changed the value of the environment variable PYSPARK_DRIVER_PYTHON from ipython to jupyter, and PYSPARK_PYTHON from python3 to python.

Now I am using Jupyter Notebook, Python 3.7, Java JDK 11.0.6, and Spark 2.4.2.
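
For reference, a minimal sketch of one way to apply this fix from the script itself, assuming a plain python interpreter on PATH rather than the Jupyter setup above (adjust the value to your own installation):

import os
from pyspark import SparkConf, SparkContext

# Point the worker processes at the interpreter you want them to use.
# "python" is an assumption here; use the full path to your interpreter
# (for example the Anaconda python.exe) if it is not on PATH.
# PYSPARK_DRIVER_PYTHON is best set at the OS level instead, since the
# driver process is already running by the time this script executes.
os.environ["PYSPARK_PYTHON"] = "python"

conf = SparkConf().setAppName("word count").setMaster("local[2]")
sc = SparkContext(conf=conf)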

【Comments】:

Setting PYSPARK_PYTHON to python worked for me. Thanks a lot.

【Answer 3】:

Setting the environment variable PYSPARK_PYTHON=python fixes it.

【Comments】:

Using pipenv on Windows and then 'set PYSPARK_PYTHON=python' solved the problem.

【Answer 4】:

I ran into the same problem. I had set all my environment variables correctly, but still could not get it to work.

In my case, adding

import findspark
findspark.init()

before creating the SparkSession helped.

I am using Visual Studio Code on Windows 10; the Spark version is 3.2.0 and the Python version is 3.9.

Note: first check that the paths for HADOOP_HOME, SPARK_HOME and PYSPARK_PYTHON are set correctly. A sketch of the order of calls is shown below.
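
A minimal sketch of the order of calls this answer describes; the master and app name are taken from the question and are otherwise assumptions:

import findspark
# findspark.init() locates the Spark installation (via SPARK_HOME if it is set)
# and adds pyspark to sys.path before anything from pyspark is imported.
findspark.init()

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .master("local[2]") \
    .appName("word count") \
    .getOrCreate()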

【Comments】:

Worked for me! The PYSPARK_PYTHON path was missing. Adding the following worked for me: import findspark; findspark.init()

【Answer 5】:

Downgrading Spark from 2.4.0 back to 2.3.2 was not enough for me. I do not know why, but in my case I had to create the SparkContext from the SparkSession, like

sc = spark.sparkContext

and then the same error disappeared.
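
A minimal sketch of that approach, reusing the master and app name from the question (assumptions for illustration):

from pyspark.sql import SparkSession

# Build the SparkSession first, then take the SparkContext from it
# instead of constructing SparkContext(conf=conf) directly.
spark = SparkSession.builder \
    .master("local[2]") \
    .appName("word count") \
    .getOrCreate()
sc = spark.sparkContext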

【Comments】:

【Answer 6】:

Looking at the source of the error (worker.py#L25), it seems that the Python interpreter used to instantiate the pyspark worker has no access to the resource module, a built-in module listed in Python's docs under "Unix Specific Services".

Are you sure you can run pyspark on Windows at all (at least without extra software such as GOW or MinGW), and that you did not skip any Windows-specific installation steps?

Could you open a Python console (the one pyspark uses) and check whether you can run >>> import resource without getting the same ModuleNotFoundError? If you cannot, could you share the resources you used to install it on W10?

【Comments】:

Hi, I just edited the original question and added the information you asked for. It seems the person in the tutorial had git installed beforehand, which on Windows may imply he also installed some Unix-compatible packages (MinGW). Maybe you could also try installing git? Otherwise, looking at this tuto, it seems you can work around this on Windows with GNU on Windows (GOW). So I removed Spark and reinstalled it following the link you suggested. Even with GOW installed, the same problem occurs (on the 'spark-submit' and 'import resource' lines). Did you also install git (with the bash tools)?

【Answer 7】:

When you run the Python installer, make sure the "Add python.exe to Path" option is selected in the Customize Python section. If this option is not selected, some PySpark utilities such as pyspark and spark-submit may not work. This worked for me! Happy sharing :)

【Comments】:

【Answer 8】:

There seem to be many reasons why this error can occur. Even though my environment variables were set correctly, I still ran into the same problem.

In my case, adding this

import findspark
findspark.init()

solved the problem.

I am using Jupyter Notebook on Windows 10 with spark-3.1.2 and Python 3.6.

【Comments】:
