Spark 2.0.0 SPARK-SQL returns NPE Error
On Spark 2.0.0, a query submitted through spark-sql failed with the following NullPointerException:
com.esotericsoftware.kryo.KryoException: java.lang.NullPointerException
Serialization trace:
underlying (org.apache.spark.util.BoundedPriorityQueue)
at com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:144)
at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:793)
at com.twitter.chill.SomeSerializer.read(SomeSerializer.scala:25)
at com.twitter.chill.SomeSerializer.read(SomeSerializer.scala:19)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:793)
at org.apache.spark.serializer.KryoSerializerInstance.deserialize(KryoSerializer.scala:312)
at org.apache.spark.scheduler.DirectTaskResult.value(TaskResult.scala:87)
at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:66)
at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:57)
at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:57)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1793)
at org.apache.spark.scheduler.TaskResultGetter$$anon$2.run(TaskResultGetter.scala:56)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.apache.spark.sql.catalyst.expressions.codegen.LazilyGeneratedOrdering.compare(GenerateOrdering.scala:157)
at org.apache.spark.sql.catalyst.expressions.codegen.LazilyGeneratedOrdering.compare(GenerateOrdering.scala:148)
at scala.math.Ordering$$anon$4.compare(Ordering.scala:111)
at java.util.PriorityQueue.siftUpUsingComparator(PriorityQueue.java:669)
at java.util.PriorityQueue.siftUp(PriorityQueue.java:645)
at java.util.PriorityQueue.offer(PriorityQueue.java:344)
at java.util.PriorityQueue.add(PriorityQueue.java:321)
at com.twitter.chill.java.PriorityQueueSerializer.read(PriorityQueueSerializer.java:78)
at com.twitter.chill.java.PriorityQueueSerializer.read(PriorityQueueSerializer.java:31)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:711)
at com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
... 15 more
16/05/24 09:42:53 ERROR SparkSQLDriver: Failed in [ select
dt.d_year
,item.i_brand_id brand_id
,item.i_brand brand
,sum(ss_ext_sales_price) sum_agg
from date_dim dt
,store_sales
,item
where dt.d_date_sk = store_sales.ss_sold_date_sk
and store_sales.ss_item_sk = item.i_item_sk
and item.i_manufact_id = 436
and dt.d_moy=12
group by dt.d_year
,item.i_brand
,item.i_brand_id
order by dt.d_year
,sum_agg desc
,brand_id
limit 100]
The NullPointerException seemed to come out of nowhere. The trace suggests what is going wrong: Kryo rebuilds the BoundedPriorityQueue by re-adding its elements one by one, and each add calls the query's generated ordering (LazilyGeneratedOrdering.compare) before that ordering has been fully deserialized, hence the NPE. Searching online turned up others hitting the same issue:
When Kryo serialization is used, the query fails when ORDER BY and LIMIT are combined. After removing either the ORDER BY or the LIMIT clause, the query runs fine.
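
For context, a minimal reproduction sketch of that claim (the table name t and the trivial schema are made up for illustration; the assumption is that any ORDER BY + LIMIT query under the Kryo serializer triggers the bug):

import org.apache.spark.sql.SparkSession

// Force the Kryo serializer, which is what triggers the bug on Spark 2.0.0.
val spark = SparkSession.builder()
  .appName("kryo-orderby-limit-repro")
  .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .getOrCreate()

// Register a trivial table; the data itself does not matter.
spark.range(0, 100).toDF("id").createOrReplaceTempView("t")

// ORDER BY + LIMIT together plan a top-N step backed by a
// BoundedPriorityQueue; on Spark 2.0.0 with Kryo this is expected to
// fail with the NPE above, while dropping either clause lets it run.
spark.sql("SELECT id FROM t ORDER BY id DESC LIMIT 10").show()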
Some digging showed that Spark 2.0.0 has a bug in its Kryo serialization dependency. The workaround is to change the serializer in SPARK_HOME/conf/spark-defaults.conf.
The line as shipped:
# spark.serializer                 org.apache.spark.serializer.KryoSerializer
Change it to:
spark.serializer                 org.apache.spark.serializer.JavaSerializer
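
Editing spark-defaults.conf changes the serializer for every application on the cluster. If the override should apply to a single job instead, the same setting can be passed per application; a sketch, assuming a SparkSession-based job (the app name is made up):

import org.apache.spark.sql.SparkSession

// Override the serializer for this application only. Note that
// spark.serializer must be set before the SparkContext starts;
// it cannot be changed on a running session.
val spark = SparkSession.builder()
  .appName("orderby-limit-workaround")
  .config("spark.serializer", "org.apache.spark.serializer.JavaSerializer")
  .getOrCreate()

For the spark-sql shell used above, passing --conf spark.serializer=org.apache.spark.serializer.JavaSerializer on the command line achieves the same effect without touching the global config.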