Create a Spark dataframe by reading a Scala sequence having different datatypes
Posted: 2019-07-27 06:15:13

[Question]: I want to create a Spark dataframe by reading a Seq in Scala. The Seq holds values of String, DataFrame, Long, and Date types.
I tried the approach below, but I got some errors; it is probably not the right way to handle this.
val Total_Record_Count = TotalRecordDF.count // getting the total record count from a dataframe
val Rejected_Record_Count = rejectDF.count // getting the rejected record count from a dataframe
val Batch_Run_ID = spark.range(1).select(unix_timestamp as "current_timestamp")
case class JobRunDetails(Job_Name: String, Batch_Run_ID: DataFrame, Source_Entity_Name: String, Total_Record_Count: Long, Rejected_Record_Count: Long, Reject_Record_File_Path: String,Load_Date: String)
val inputSeq = Seq(JobRunDetails("HIT", Batch_Run_ID, "HIT", Total_Record_Count, Rejected_Record_Count, "blob.core.windows.net/feedlayer", Load_Date))
I tried
val df = sc.parallelize(inputSeq).toDF()
but it throws the error "java.lang.UnsupportedOperationException: No Encoder found for org.apache.spark.sql.DataFrame".
I just want to create a dataframe by reading the sequence. Any help will be appreciated. Note: I am using Spark 2.3 on Databricks.
[Answer 1]: Usually we create case classes with Java/Scala primitive types. I have not seen anyone create a case class with a DataFrame as one of its member elements.
If I have understood your requirement correctly, this is what you are looking for -
case class JobRunDetails(Job_Name: String, Batch_Run_ID: Int, Source_Entity_Name: String, Total_Record_Count: Long, Rejected_Record_Count: Long, Reject_Record_File_Path: String, Load_Date: String)
//defined class JobRunDetails
import spark.implicits._
import org.apache.spark.sql.functions._ // for unix_timestamp
import org.apache.spark.rdd.RDD
val Total_Record_Count = 100 //TotalRecordDF.count // getting the total record count from a dataframe
val Rejected_Record_Count = 200 //rejectDF.count // getting the rejected record count from a dataframe
val Batch_Run_ID = spark.range(1).select(unix_timestamp as "current_timestamp").take(1).head.get(0).toString().toInt
val Load_Date = "2019-27-07"
val inputRDD: RDD[JobRunDetails] = spark.sparkContext.parallelize(Seq(JobRunDetails("HIT", Batch_Run_ID, "HIT", Total_Record_Count, Rejected_Record_Count, "blob.core.windows.net/feedlayer", Load_Date)))
inputRDD.toDF().show
/**
import spark.implicits._
Total_Record_Count: Int = 100
Rejected_Record_Count: Int = 200
Batch_Run_ID: Int = 1564224156
Load_Date: String = 2019-27-07
inputRDD: org.apache.spark.rdd.RDD[JobRunDetails] = ParallelCollectionRDD[3] at parallelize at command-330223868839989:6
*/
+--------+------------+------------------+------------------+---------------------+-----------------------+----------+
|Job_Name|Batch_Run_ID|Source_Entity_Name|Total_Record_Count|Rejected_Record_Count|Reject_Record_File_Path| Load_Date|
+--------+------------+------------------+------------------+---------------------+-----------------------+----------+
| HIT| 1564224156| HIT| 100| 200| blob.core.windows...|2019-27-07|
+--------+------------+------------------+------------------+---------------------+-----------------------+----------+
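As a simpler sketch, assuming Spark 2.x and the same JobRunDetails case class and values as above: with spark.implicits._ in scope, a local Seq of case-class instances can be converted with toDF directly, so the intermediate RDD is not needed at all.

import spark.implicits._
// Int arguments widen to Long automatically, so the Int counts above
// fit the Long fields of the case class.
val df = Seq(
  JobRunDetails("HIT", Batch_Run_ID, "HIT", Total_Record_Count,
    Rejected_Record_Count, "blob.core.windows.net/feedlayer", Load_Date)
).toDF()
df.show(false)

Either way, the key change from the question's code is that Batch_Run_ID is collected to the driver as a plain scalar before building the Seq; keeping it as a DataFrame field is what triggers the "No Encoder found" error, since Spark can only derive encoders for primitive and Product types.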
[Comments]:
Thanks a lot ValaravausBlack :)