Load data from HDFS - Spark Scala [duplicate]
【Posted】: 2016-12-23 15:55:19
【Question】: I have a self-contained SBT application and I want to load my data from HDFS. I used this call:
val loadfiles1 = sc.textFile("hdfs:///tmp/MySimpleProject/file1.dat")
But I get an error like this:
[error] (run-main-0) java.io.IOException: Incomplete HDFS URI, no host: hdfs:/tmp/MyProjectSpark/file1.dat
java.io.IOException: Incomplete HDFS URI, no host: hdfs:/tmp/MyProjectSpark/file1.dat
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:133)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2433)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:287)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:221)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:270)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1930)
at org.apache.spark.rdd.RDD.count(RDD.scala:1134)
at app$.main(App.scala:33)
at app.main(App.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
[trace] Stack trace suppressed: run last compile:run for the full output.
16/12/23 15:19:16 ERROR ContextCleaner: Error in cleaning thread
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:175)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1249)
at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:172)
at org.apache.spark.ContextCleaner$$anon$1.run(ContextCleaner.scala:67)
16/12/23 15:19:16 ERROR Utils: uncaught error in thread SparkListenerBus, stopping SparkContext
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:996)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303)
at java.util.concurrent.Semaphore.acquire(Semaphore.java:317)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(LiveListenerBus.scala:80)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(LiveListenerBus.scala:78)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1249)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1.run(LiveListenerBus.scala:77)
16/12/23 15:19:16 INFO SparkUI: Stopped Spark web UI at http://10.0.2.15:4040
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Nonzero exit code: 1
[error] Total time: 10 s, completed Dec 23, 2016 3:19:17 PM
16/12/23 15:19:17 INFO DiskBlockManager: Shutdown hook called
16/12/23 15:19:17 INFO ShutdownHookManager: Shutdown hook called
16/12/23 15:19:17 INFO ShutdownHookManager: Deleting directory /tmp/spark-515b242b-7450-4215-9831-8e6976cb41ba
16/12/23 15:19:17 INFO ShutdownHookManager: Deleting directory /tmp/spark-515b242b-7450-4215-9831-8e6976cb41ba/userFiles-ee18e822-55c7-4613-b3f7-03e5a4c896e1
Why all these errors, when all I want is to load a file from HDFS? The Spark context is configured as follows:
val conf = new SparkConf().setAppName("My first project hadoop spark").setMaster("local[4]")
val sc = new SparkContext(conf)
And the HDFS configuration in core-site.xml is as follows:
<property>
<name>fs.defaultFS</name>
<value>hdfs://sandbox.hortonworks.com:8020</value>
<final>true</final>
</property>
Thanks.
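Put together, a minimal sketch of the driver that produces the trace above would look roughly like this (the object name app and the count() call are taken from app$.main(App.scala:33) in the trace; the path is the one reported in the error message):
import org.apache.spark.{SparkConf, SparkContext}

object app {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("My first project hadoop spark").setMaster("local[4]")
    val sc = new SparkContext(conf)

    // hdfs:/// carries no namenode host, so Hadoop cannot tell which cluster
    // to talk to unless fs.defaultFS is already on Spark's Hadoop configuration
    val loadfiles1 = sc.textFile("hdfs:///tmp/MyProjectSpark/file1.dat")
    println(loadfiles1.count()) // fails here with "Incomplete HDFS URI, no host"

    sc.stop()
  }
}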
【Comments】:
Do you need 3 slashes (///) at the beginning, or just 2?
【Answer 1】:
The stack trace says it quite clearly:
Incomplete HDFS URI, no host: hdfs:/tmp/MyProjectSpark/file1.dat
Specify the HDFS namenode host and, optionally, the port (the default is 8020; set it explicitly if yours differs).
Something like this (assuming localhost is your namenode):
hdfs://localhost:8020/tmp/MyProjectSpark/file1.dat
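For example, taking the namenode from the core-site.xml quoted in the question (sandbox.hortonworks.com:8020 — adjust it if your namenode host or port differ), either of the following should resolve the host; sc is the SparkContext already created in the question:
// Fully qualified URI: host and port taken from fs.defaultFS in core-site.xml
val loadfiles1 = sc.textFile("hdfs://sandbox.hortonworks.com:8020/tmp/MyProjectSpark/file1.dat")

// Alternative (assumed): put the default filesystem on the context's Hadoop
// configuration so the original hdfs:///... form also resolves to that namenode
sc.hadoopConfiguration.set("fs.defaultFS", "hdfs://sandbox.hortonworks.com:8020")
val loadfiles2 = sc.textFile("hdfs:///tmp/MyProjectSpark/file1.dat")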
【Discussion】:
This was already answered in the comment linked above; this question is a duplicate.
Thanks. Could you also tell me how to find out how many servers are running my application, and how I can access or view them, since I go through http://