Using Spark on my local machine
【Posted】2016-05-11 23:23:56
【Question】I downloaded Spark and it appears to work. Now I want to try it on a text file, for example hamlet.txt. As I understand it, to work in Spark I need to open spark-1.6.1/bin/pyspark, so I put hamlet.txt in spark-1.6.1/bin/ and typed:

raw_hamlet = sc.textFile("hamlet.txt")
raw_hamlet.take(5)
But the output is:
Traceback (most recent call last):
File "", line 1, in
File "/Applications/spark-1.6.1/python/pyspark/rdd.py", line 1267, in take
totalParts = self.getNumPartitions()
File "/Applications/spark-1.6.1/python/pyspark/rdd.py", line 356, in getNumPartitions
return self._jrdd.partitions().size()
File "/Applications/spark-1.6.1/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in call
File "/Applications/spark-1.6.1/python/pyspark/sql/utils.py", line 45, in deco
return f(*a, **kw)
File "/Applications/spark-1.6.1/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o50.partitions.
: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/Users/kate/hamlet.txt
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:251)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:270)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.scala:64)
at org.apache.spark.api.java.AbstractJavaRDDLike.partitions(JavaRDDLike.scala:46)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:745)
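
The key line is the InvalidInputException: the relative path "hamlet.txt" was resolved against the shell's current working directory (file:/Users/kate/), not against spark-1.6.1/bin/ where the file was placed. As a minimal sketch, assuming the file really is in spark-1.6.1/bin/ and Spark is installed under /Applications, an absolute file:// URI sidesteps the working-directory issue:

raw_hamlet = sc.textFile("file:///Applications/spark-1.6.1/bin/hamlet.txt")  # absolute path; install location assumed
raw_hamlet.take(5)  # returns the first five lines as a list of strings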
【Comments】:
【Answer 1】:
1 - Add your "spark-1.6.1/bin/" to your PATH in .bashrc
2 - source .bashrc
3 - cd into the directory that contains the dataset
4 - Run pyspark or spark-submit from there (a shell sketch of these steps follows below).
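
A minimal shell sketch of those steps, assuming Spark was unpacked to /Applications/spark-1.6.1 (the paths here are placeholders; adjust them to your machine):

# in ~/.bashrc
export SPARK_HOME=/Applications/spark-1.6.1
export PATH="$SPARK_HOME/bin:$PATH"

# then in a terminal:
source ~/.bashrc
cd /path/to/dataset        # the directory containing hamlet.txt
pyspark                    # relative paths like "hamlet.txt" now resolve here

With this, sc.textFile("hamlet.txt") finds the file because pyspark was launched from the directory that holds it.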
【Discussion】: