Error when Spark 2.2.0 standalone mode writes a DataFrame to local single-node Kafka

Posted: 2018-03-09 07:17:59

The data comes from the Databricks notebook demo Five Spark SQL Helper Utility Functions to Extract and Explore Complex Data Types.

But when I try the same code on my own laptop, I always get an error.

First, load the JSON data as a DataFrame:
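(The load step itself isn't shown in my transcript; a minimal sketch of one way to produce such a DataFrame in the spark-shell, where the file path is a hypothetical placeholder and the source is assumed to be line-delimited JSON with battery_level and c02_level fields:)

// Hypothetical reconstruction; the question does not show how res2 was created.
val res2 = spark.read
  .json("/tmp/iot_devices.json")  // assumed example path, not from the original
  .select($"battery_level", $"c02_level")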

res2: org.apache.spark.sql.DataFrame = [battery_level: string, c02_level: string]

scala> res2.show
+-------------+---------+
|battery_level|c02_level|
+-------------+---------+
|            7|      886|
|            5|     1378|
|            8|      917|
|            8|     1504|
|            8|      831|
|            9|     1304|
|            8|     1574|
|            9|     1208|
+-------------+---------+

Second, write the data to Kafka:

res2.write 
  .format("kafka") 
  .option("kafka.bootstrap.servers", "localhost:9092") 
  .option("topic", "test") 
  .save()

All of this follows the notebook demo above and the official steps.

But the error shows:

scala> res2.write 
         .format("kafka") 
         .option("kafka.bootstrap.servers", "localhost:9092") 
         .option("topic", "iot-devices") 
         .save()
org.apache.spark.sql.AnalysisException: Required attribute 'value' not found;
  at org.apache.spark.sql.kafka010.KafkaWriter$$anonfun$6.apply(KafkaWriter.scala:72)
  at org.apache.spark.sql.kafka010.KafkaWriter$$anonfun$6.apply(KafkaWriter.scala:72)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.kafka010.KafkaWriter$.validateQuery(KafkaWriter.scala:71)
  at org.apache.spark.sql.kafka010.KafkaWriter$.write(KafkaWriter.scala:87)
  at org.apache.spark.sql.kafka010.KafkaSourceProvider.createRelation(KafkaSourceProvider.scala:165)
  at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:472)
  at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:48)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
  at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
  at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
  at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:610)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:233)
  ... 52 elided

I assumed it might be a Kafka problem, so I tested a DataFrame read from Kafka to make sure the connection works:

scala> val kaDF = spark.read
         .format("kafka") 
         .option("kafka.bootstrap.servers", "localhost:9092") 
         .option("subscribe", "iot-devices") 
         .load()
kaDF: org.apache.spark.sql.DataFrame = [key: binary, value: binary ... 5 more fields]

scala> kaDF.show
+----+--------------------+-----------+---------+------+--------------------+-------------+
| key|               value|      topic|partition|offset|           timestamp|timestampType|
+----+--------------------+-----------+---------+------+--------------------+-------------+
|null|    [73 73 73 73 73]|iot-devices|        0|     0|2017-09-27 11:11:...|            0|
|null|[64 69 63 6B 20 3...|iot-devices|        0|     1|2017-09-27 11:29:...|            0|
|null|       [78 69 78 69]|iot-devices|        0|     2|2017-09-27 11:29:...|            0|
|null|[31 20 32 20 33 2...|iot-devices|        0|     3|2017-09-27 11:30:...|            0|
+----+--------------------+-----------+---------+------+--------------------+-------------+

So the result shows that reading data from topic "iot-devices" on Kafka bootstrap.servers localhost:9092 does work.
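(Incidentally, the key and value columns come back from the Kafka source as raw binary; a quick cast, assuming the payloads are UTF-8 text, makes them human-readable:)

// Cast the binary key/value columns to strings for inspection
kaDF.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)").show(false)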

I have searched a lot online, but still have not solved it.

Can anyone with Spark SQL experience tell me what is wrong with my command?

Thanks!


Answer 1:

The error message clearly indicates the source of the problem:

org.apache.spark.sql.AnalysisException: Required attribute 'value' not found;

The Dataset to be written has to have at least a value column (and optionally key and topic), while res2 has only battery_level and c02_level.

For example, you can:

import org.apache.spark.sql.functions._

// Pack the existing columns into a single JSON string exposed as the required "value" column
res2.select(to_json(struct($"battery_level", $"c02_level")).alias("value"))
  .write
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("topic", "iot-devices")
  .save()
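
To sanity-check the round trip, you could read the topic back and parse the JSON value. This is just a sketch, with the schema assumed from res2's two string columns:

import org.apache.spark.sql.types._

// Schema assumed from the question's res2 (both columns were shown as strings)
val deviceSchema = StructType(Seq(
  StructField("battery_level", StringType),
  StructField("c02_level", StringType)))

spark.read
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "iot-devices")
  .load()
  .select(from_json($"value".cast("string"), deviceSchema).alias("data"))
  .select("data.*")
  .show()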

