Spark-Hive error, how can I resolve it?

Posted: 2017-02-25 13:03:57

【Question】

I tried to write a simple piece of code that uses Spark SQL to access a Hive table:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder()
                                 .appName("Java Spark Hive Example")
                                 .master("local[*]")
                                 .config("hive.metastore.uris", "thrift://localhost:9083")
                                 .enableHiveSupport()
                                 .getOrCreate();

try {
    Dataset<Row> df = spark.sql("select survey_response_value from health");
    df.show();
} catch (Exception e) {
    // e.g. AnalysisException when the table does not exist
    System.out.print("\nTable is not found\n");
}
I have run this exact program on my system many times and it worked fine. But suddenly it stopped working and started throwing errors. The full error and stack trace are here: https://justpaste.it/13w2r. I am using IntelliJ.

I haven't changed the dependencies or the code, so I don't understand why it stopped working. How can I get rid of this error? Please help. Here is the problem:

17:22:50.442 [main] INFO  org.apache.spark.SparkContext - Created broadcast 0 from show at hivespark.java:29
Exception in thread "main" java.lang.ExceptionInInitializerError
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
    at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:225)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:308)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
    at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2371)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
    at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2765)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2370)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2377)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2113)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2112)
    at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2795)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:2112)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2327)
    at org.apache.spark.sql.Dataset.showString(Dataset.scala:248)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:636)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:595)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:604)
    at sparky.hivespark.main(hivespark.java:29)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Caused by: com.fasterxml.jackson.databind.JsonMappingException: Jackson version is too old 2.5.1
    at com.fasterxml.jackson.module.scala.JacksonModule$class.setupModule(JacksonModule.scala:56)
    at com.fasterxml.jackson.module.scala.DefaultScalaModule.setupModule(DefaultScalaModule.scala:19)
    at com.fasterxml.jackson.databind.ObjectMapper.registerModule(ObjectMapper.java:651)
    at org.apache.spark.rdd.RDDOperationScope$.<init>(RDDOperationScope.scala:82)
    at org.apache.spark.rdd.RDDOperationScope$.<clinit>(RDDOperationScope.scala)
    ... 25 more
17:22:50.612 [Thread-2] INFO  org.apache.spark.SparkContext - Invoking stop() from shutdown hook

【Comments】

Can you share your build file, specifically the fasterxml dependencies?
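One way to see which dependency is pulling in the old Jackson 2.5.1 (a hedged suggestion, assuming a Maven build like the one shown in the answer below) is to print the dependency tree filtered to the Jackson group:

mvn dependency:tree -Dincludes=com.fasterxml.jackson.core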

【Answer 1】

If you are using SBT, add the following.

Here I've specified 2.8.x; any version above 2.5 that is compatible with your environment will also work.

// https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-core
libraryDependencies += "com.fasterxml.jackson.core" % "jackson-core" % "2.8.7"

// https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-databind
libraryDependencies += "com.fasterxml.jackson.core" % "jackson-databind" % "2.8.7"
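If something else on the classpath still drags in the old Jackson transitively, you can also force the version across the whole dependency graph — a minimal sketch, assuming SBT's standard dependencyOverrides setting:

// Force every transitive Jackson artifact to the chosen version
dependencyOverrides += "com.fasterxml.jackson.core" % "jackson-core" % "2.8.7"
dependencyOverrides += "com.fasterxml.jackson.core" % "jackson-databind" % "2.8.7"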

If you are using Maven:

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-core</artifactId>
    <version>2.8.7</version>
</dependency>

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.8.7</version>
</dependency>
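The "Jackson version is too old" exception in the stack trace is raised by jackson-module-scala when Spark registers it, so that module's version should line up with jackson-core and jackson-databind as well — a sketch, assuming Scala 2.11 (the default for Spark 2.x; adjust the suffix and version to your environment):

<dependency>
    <groupId>com.fasterxml.jackson.module</groupId>
    <artifactId>jackson-module-scala_2.11</artifactId>
    <version>2.8.7</version>
</dependency>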

【Discussion】

Are you still facing any issues after this? Please reply.
