Dataframe.rdd.map().collect Does not work in PySpark [duplicate]

Posted: 2018-03-09 01:55:35

[Problem description]:

I am very new to Python, and I am using Python 2.7.

I am trying to run this simple piece of code.

I am creating the DataFrame from a CSV file.
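
For reference, a minimal sketch of the load. Only the read call itself is quoted verbatim in the comments further down; the SparkSession setup around it is an assumption:

from pyspark.sql import SparkSession

# Local session for testing; master and appName are assumptions, not from the post.
spark = SparkSession.builder.master("local[*]").appName("csv-test").getOrCreate()

# The read call as quoted in the comments below.
fullDf = spark.read.csv('C:\\data.csv', header=True, inferSchema=True)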

The DataFrame has only 2 columns. I tried the code snippets below, but every attempt fails:

newDf = fullDf.rdd.map(lambda x: str(x[1])).collect()   # FAILS
newDf = fullDf.rdd.map(lambda x: x.split(",")[1]).collect()   # FAILS
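
A side note on the two attempts (a sketch, independent of the crash below): fullDf.rdd yields pyspark.sql.Row objects rather than comma-joined strings, so the first form is the right pattern, while the second would raise an AttributeError even with a working interpreter.

# Sketch of the working pattern once the interpreter problem is fixed:
# positional indexing into the Row picks out the second column as a string.
secondColumn = fullDf.rdd.map(lambda row: str(row[1])).collect()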

What is going wrong here? The same code works in Scala-Spark.

My Spark version is 2.1.0 and my Python version is 2.7.

I can't make sense of this error:

18/03/08 17:47:10 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/03/08 17:47:17 ERROR Executor: Exception in task 1.0 in stage 3.0 (TID 5)
java.io.IOException: Cannot run program "python": CreateProcess error=2, The system cannot find the file specified
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
    at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:120)
    at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:67)
    at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:116)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:128)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
    at java.lang.ProcessImpl.create(Native Method)
    at java.lang.ProcessImpl.<init>(ProcessImpl.java:386)
    at java.lang.ProcessImpl.start(ProcessImpl.java:137)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
    ... 13 more

18/03/08 17:47:17 ERROR TaskSetManager: Task 1 in stage 3.0 failed 1 times; aborting job
Traceback (most recent call last):
  File "C:/Test.py", line 25, in <module>
    newDf = fullDf.rdd.map(lambda x: x.split(",")[1]).collect()
  File "C:\spark-2.1.0\python\lib\pyspark.zip\pyspark\rdd.py", line 809, in collect
  File "C:\spark-2.1.0\python\lib\py4j-0.10.4-src.zip\py4j\java_gateway.py", line 1133, in __call__
  File "C:\spark-2.1.0\python\lib\pyspark.zip\pyspark\sql\utils.py", line 63, in deco
  File "C:\spark-2.1.0\python\lib\py4j-0.10.4-src.zip\py4j\protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 3.0 failed 1 times, most recent failure: Lost task 1.0 in stage 3.0 (TID 5, localhost, executor driver): java.io.IOException: Cannot run program "python": CreateProcess error=2, The system cannot find the file specified
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
    at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:120)
    at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:67)
    at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:116)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:128)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
    at java.lang.ProcessImpl.create(Native Method)
    at java.lang.ProcessImpl.<init>(ProcessImpl.java:386)
    at java.lang.ProcessImpl.start(ProcessImpl.java:137)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
    ... 13 more

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:935)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:934)
    at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:453)
    at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Cannot run program "python": CreateProcess error=2, The system cannot find the file specified
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
    at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:120)
    at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:67)
    at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:116)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:128)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    ... 1 more
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified

[Comments]:

Yes, the DataFrame is being created. FYI: fullDf = spark.read.csv('C:\\data.csv', header=True, inferSchema=True), then print 'FullDF', type(fullDf) and print fullDf.count(). The output is: FullDF 173753. That did not solve my problem. I set the SPARK_PYTHON variable in my Windows environment variables pointing to my Anaconda Python 2.7 (not sure whether I need to restart; I am on Windows 10), and I also set SPARK_HOME in both places (the Windows environment and PyCharm). It still gives the same error.

You need to set the PYTHONPATH variable to point to the Python installation as well. SPARK_PYTHON should not point to the Anaconda Python but to the python libraries inside SPARK_HOME. Hope this helps.

For SPARK_PYTHON, do you mean this --> C:\spark-2.1.0\python, or this --> C:\spark-2.1.0\python\lib, or this --> C:\spark-2.1.0\python\pyspark ???

PYTHONPATH --> C:\spark-2.1.0\python\pyspark. That is what is needed. Give it a try.

[Answer 1]:

I am using Anaconda Python (2.7), and during installation I did not tick the option to add python to the Windows PATH. That is why Spark could not find python.
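
A quick way to confirm that diagnosis (a sketch, not part of the original answer): ask Python to resolve "python" on PATH the same way the executors do.

import distutils.spawn

# Prints None when no 'python' executable is resolvable on PATH, which is
# exactly the lookup the Spark executors are failing at.
print(distutils.spawn.find_executable("python"))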

To fix this, I had to use the SETX command to put Python and Conda on the PATH:

SETX PATH "%PATH%;C:\anaconda2\Scripts;C:\anaconda2;C:\spark-2.1.0\python\pyspark"

Reference -> Setting Anaconda Python
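
A caveat and an alternative, neither of which is from the original answer: SETX writes the expanded value into the user PATH and silently truncates anything beyond 1024 characters, so it is worth checking the result in a fresh shell. A sketch that sidesteps PATH entirely is to point PySpark at the interpreter and its bundled libraries from inside the driver script, before the SparkSession is created (the C:\spark-2.1.0 location is taken from the traceback above):

import os
import sys

# Make the workers use the same interpreter that runs this script. Both
# variables must be set before the SparkContext/SparkSession is created.
os.environ["PYSPARK_PYTHON"] = sys.executable
os.environ["PYSPARK_DRIVER_PYTHON"] = sys.executable

# What the PYTHONPATH advice in the comments boils down to: Spark's python
# directory plus the bundled py4j zip, using the version shipped with 2.1.0.
SPARK_HOME = "C:\\spark-2.1.0"
sys.path.insert(0, os.path.join(SPARK_HOME, "python"))
sys.path.insert(0, os.path.join(SPARK_HOME, "python", "lib", "py4j-0.10.4-src.zip"))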

