Runtime exception java.lang.NoSuchMethodError: com.google.common.base.Optional.toJavaUtil()L with the Spark-BigQuery connector
Posted: 2021-03-29 19:04:00

【Question】:

I am currently trying to connect to BigQuery from Spark. I built a fat jar using the sbt assembly plugin and am trying to launch the job in local mode with spark-submit. As soon as the Spark job starts, I hit a java.lang.NoSuchMethodError: com.google.common.base.Optional.toJavaUtil()Ljava/util/Optional; exception.

Below is the exception trace:
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Optional.toJavaUtil()Ljava/util/Optional;
at com.google.cloud.spark.bigquery.SparkBigQueryConfig.getOption(SparkBigQueryConfig.java:265)
at com.google.cloud.spark.bigquery.SparkBigQueryConfig.getOption(SparkBigQueryConfig.java:256)
at com.google.cloud.spark.bigquery.SparkBigQueryConfig.lambda$getOptionFromMultipleParams$7(SparkBigQueryConfig.java:273)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.Spliterators$IteratorSpliterator.tryAdvance(Spliterators.java:1812)
at java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:126)
at java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:499)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:486)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
at java.util.stream.FindOps$FindOp.evaluateSequential(FindOps.java:152)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.findFirst(ReferencePipeline.java:464)
at com.google.cloud.spark.bigquery.SparkBigQueryConfig.getOptionFromMultipleParams(SparkBigQueryConfig.java:275)
at com.google.cloud.spark.bigquery.SparkBigQueryConfig.from(SparkBigQueryConfig.java:119)
at com.google.cloud.spark.bigquery.BigQueryRelationProvider.createSparkBigQueryConfig(BigQueryRelationProvider.scala:133)
at com.google.cloud.spark.bigquery.BigQueryRelationProvider.createRelationInternal(BigQueryRelationProvider.scala:71)
at com.google.cloud.spark.bigquery.BigQueryRelationProvider.createRelation(BigQueryRelationProvider.scala:45)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:340)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:174)
at com.bigquery.OwnDataSetReader$.delayedEndpoint$com$$bigquery$OwnDataSetReader$1(OwnDataSetReader.scala:18)
at com.bigquery.OwnDataSetReader$delayedInit$body.apply(OwnDataSetReader.scala:6)
at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
at scala.App$class.main(App.scala:76)
at com..bigquery.OwnDataSetReader$.main(OwnDataSetReader.scala:6)
at com..bigquery.OwnDataSetReader.main(OwnDataSetReader.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
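A note on what this error usually signals: Optional.toJavaUtil() was only added around Guava 21.0, so the JVM is resolving com.google.common.base.Optional from an older Guava somewhere on the runtime classpath. A quick way to see which jar actually supplies the class at runtime is to ask the classloader — a minimal diagnostic sketch, assuming local mode so the driver and the failing call share a JVM (these lines could be dropped into the driver before the read):

// Diagnostic sketch: print the jar that com.google.common.base.Optional
// was actually loaded from. A null CodeSource means it came from a
// bootstrap/parent classloader rather than an application jar.
val guavaSource = classOf[com.google.common.base.Optional[_]].getProtectionDomain.getCodeSource
println(if (guavaSource != null) guavaSource.getLocation else "bootstrap/parent classloader")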
After researching this exception, I found that it can occur when multiple versions of the guava library end up on the classpath. I made sure there was no such conflict in the final assembled jar, and I verified this by decompiling the jar file. No conflict was observed, yet the problem persisted :(. Below is the build.sbt snippet:
name := "bigquer-connector"

version := "0.1"

scalaVersion := "2.11.8"

test in assembly := {}

assemblyJarName in assembly := "BigQueryConnector.jar"

assemblyMergeStrategy in assembly := {
  case x if x.startsWith("META-INF") => MergeStrategy.discard
  case x =>
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(x)
}

libraryDependencies += ("com.google.cloud.spark" %% "spark-bigquery" % "0.18.0")
  .exclude("com.google.guava", "guava")
  .exclude("org.glassfish.jersey.bundles.repackaged", "jersey-guava")

libraryDependencies += "com.google.guava" % "guava" % "30.0-jre"

libraryDependencies += ("org.apache.spark" % "spark-core_2.11" % "2.3.1")
  .exclude("com.google.guava", "guava")
  .exclude("org.glassfish.jersey.bundles.repackaged", "jersey-guava")

libraryDependencies += ("org.apache.spark" % "spark-sql_2.11" % "2.3.1")
  .exclude("com.google.guava", "guava")
  .exclude("org.glassfish.jersey.bundles.repackaged", "jersey-guava")
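As an aside: instead of excluding Guava from every dependency, sbt-assembly can also shade Guava inside the fat jar, which removes any possibility of clashing with the Guava that Spark itself ships. A minimal sketch, assuming an sbt-assembly version with shading support (the target package name here is an arbitrary choice):

assemblyShadeRules in assembly := Seq(
  // Relocate Guava classes inside the fat jar so they cannot collide
  // with the Guava on Spark's own runtime classpath.
  ShadeRule.rename("com.google.common.**" -> "repackaged.com.google.common.@1").inAll
)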
Below is the main class:
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

object OwnDataSetReader extends App {
  val session = SparkSession.builder()
    .appName("big-query-connector")
    .config(getConf)
    .getOrCreate()

  session.read
    .format("com.google.cloud.spark.bigquery")
    .option("viewsEnabled", true)
    .option("parentProject", "my_gcp_project")
    .option("credentialsFile", "<path to private json file>")
    .load("my_gcp_data_set.my_gcp_view")
    .show(2)

  private def getConf: SparkConf = {
    val sparkConf = new SparkConf
    sparkConf.setAppName("biq-query-connector")
    sparkConf.setMaster("local[*]")
    sparkConf
  }
}
Command used to launch Spark from my local terminal: spark-submit --deploy-mode client --class com.bigquery.OwnDataSetReader BigQueryConnector.jar. I am using Spark version 2.3.x on my local machine.
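One more data point worth knowing: Spark 2.3 itself bundles an old Guava (14.x) on its runtime classpath, which can shadow the Guava 30 inside a fat jar even when the jar itself contains no duplicates. One way to test that theory is Spark's experimental userClassPathFirst settings — a sketch of two lines that could be added to getConf above, purely as a diagnostic (these flags are experimental, and they were not part of the fix that ultimately worked; see the answer below):

// Prefer classes from the user jar over Spark's own distribution (experimental).
sparkConf.set("spark.driver.userClassPathFirst", "true")
sparkConf.set("spark.executor.userClassPathFirst", "true")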
【Comments】:
What is line 18 in OwnDataSetReader?
Why are you compiling the connector yourself? There are prebuilt jars both in Maven Central and at gs://spark-lib/bigquery/
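(For example, with the prebuilt artifact the connector need not be bundled at all: passing --packages com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.18.0 to spark-submit pulls it at submit time, assuming Scala 2.11; the with-dependencies variant also shades its own Guava, which is meant to avoid exactly this kind of clash.)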
@DavidRabinowitz, we are building the fat jar that we launch on our Spark cluster. In any case, I was able to resolve the issue. The problem was that our build.sbt was discarding the META-INF folder; the spark-bigquery connector relies on configuration files inside its META-INF folder during library bootstrapping.
Great! Please post this as an answer.
【Answer 1】:
I was able to resolve the issue. The following merge strategy was being used in my build.sbt file:
assemblyMergeStrategy in assembly := {
  case x if x.startsWith("META-INF") => MergeStrategy.discard
  case x =>
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(x)
}
This discarded every file under the META-INF folder, but the spark-bigquery connector relies on configuration files inside its META-INF folder during library bootstrapping. So instead of discarding them wholesale, switching to the strategy below worked for me:
case PathList("META-INF", xs @ _*) =>
  (xs map { _.toLowerCase }) match {
    case ("manifest.mf" :: Nil) | ("index.list" :: Nil) | ("dependencies" :: Nil) | ("license" :: Nil) | ("licence.txt" :: Nil) | ("notice.txt" :: Nil) | ("notice" :: Nil) =>
      MergeStrategy.discard
    case ps @ (x :: xs) if ps.last.endsWith(".sf") || ps.last.endsWith(".dsa") || ps.contains("license") || ps.contains("notice") =>
      MergeStrategy.discard
    case "plexus" :: xs =>
      MergeStrategy.discard
    case "services" :: xs =>
      MergeStrategy.filterDistinctLines
    case _ => MergeStrategy.last
  }
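For completeness, a condensed sketch of how that case slots into the full setting alongside the original fallback (the license/notice cases are dropped here for brevity — an assumption for illustration, not the exact strategy above):

assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) =>
    (xs map { _.toLowerCase }) match {
      case ("manifest.mf" :: Nil) | ("index.list" :: Nil) | ("dependencies" :: Nil) =>
        MergeStrategy.discard
      case ps @ (x :: _) if ps.last.endsWith(".sf") || ps.last.endsWith(".dsa") =>
        MergeStrategy.discard
      case "services" :: _ =>
        // ServiceLoader registration files: merge distinct lines from all jars.
        // Keeping these is what lets the connector's bootstrap configuration
        // survive assembly instead of being discarded.
        MergeStrategy.filterDistinctLines
      case _ => MergeStrategy.last
    }
  case x =>
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(x)
}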