Spark: How do I explode data and also add the column name in pyspark or scala spark?

Posted: 2018-02-12 14:28:02

Spark: I want to explode multiple columns and merge them into a single column, with each value's source column name carried along as a separate column.

Input data: 
    +-----------+-----------+-----------+
    |   ASMT_ID |   WORKER  |   LABOR   |
    +-----------+-----------+-----------+
    |   1       |   A1,A2,A3|   B1,B2   |
    +-----------+-----------+-----------+
    |   2       |   A1,A4   |   B1      |
    +-----------+-----------+-----------+

Expected Output:


+-----------+-----------+-----------+
|   ASMT_ID |WRK_CODE   |WRK_DETL   |
+-----------+-----------+-----------+
|   1       |   A1      |   WORKER  |
+-----------+-----------+-----------+
|   1       |   A2      |   WORKER  |
+-----------+-----------+-----------+
|   1       |   A3      |   WORKER  |
+-----------+-----------+-----------+
|   1       |   B1      |   LABOR   |
+-----------+-----------+-----------+
|   1       |   B2      |   LABOR   |
+-----------+-----------+-----------+
|   2       |   A1      |   WORKER  |
+-----------+-----------+-----------+
|   2       |   A4      |   WORKER  |
+-----------+-----------+-----------+
|   2       |   B1      |   LABOR   |
+-----------+-----------+-----------+


Comments:

Post the code you have tried, at least something like loading the data into Spark.

Answer 1:

Probably not the prettiest solution, but all you need is a couple of explodes and a unionAll:

import org.apache.spark.sql.functions._
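
// To build the sample DataFrame from the question (assumed: an active
// SparkSession named `spark`; its implicits enable the $"col" syntax below)
import spark.implicits._

val df1 = Seq((1, "A1,A2,A3", "B1,B2"),
              (2, "A1,A4", "B1")).toDF("ASMT_ID", "WORKER", "LABOR")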

df1.show
+-------+--------+-----+
|ASMT_ID|  WORKER|LABOR|
+-------+--------+-----+
|      1|A1,A2,A3|B1,B2|
|      2|   A1,A4|   B1|
+-------+--------+-----+

df1.cache  // df1 is scanned twice below (workers and labors), so cache it

val workers = df1.drop("LABOR")
                 .withColumn("WRK_CODE" , explode(split($"WORKER" , ",") ) )
                 .withColumn("WRK_DETL", lit("WORKER"))
                 .drop("WORKER")

val labors = df1.drop("WORKER")
                .withColumn("WRK_CODE" , explode(split($"LABOR", ",") ) )
                .withColumn("WRK_DETL", lit("LABOR") )
                .drop("LABOR")

// unionAll was deprecated in Spark 2.0; on 2.x and later prefer union
workers.unionAll(labors).orderBy($"ASMT_ID".asc , $"WRK_CODE".asc).show

+-------+--------+--------+
|ASMT_ID|WRK_CODE|WRK_DETL|
+-------+--------+--------+
|      1|      A1|  WORKER|
|      1|      A2|  WORKER|
|      1|      A3|  WORKER|
|      1|      B1|   LABOR|
|      1|      B2|   LABOR|
|      2|      A1|  WORKER|
|      2|      A4|  WORKER|
|      2|      B1|   LABOR|
+-------+--------+--------+
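
Since the question also asks about pyspark, here is a minimal PySpark sketch of the same approach (the session setup and sample data are assumed to mirror the input above):

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, explode, split, lit

spark = SparkSession.builder.getOrCreate()

# Sample data mirroring the question's input
df1 = spark.createDataFrame(
    [(1, "A1,A2,A3", "B1,B2"), (2, "A1,A4", "B1")],
    ["ASMT_ID", "WORKER", "LABOR"])

workers = (df1.drop("LABOR")
              .withColumn("WRK_CODE", explode(split(col("WORKER"), ",")))
              .withColumn("WRK_DETL", lit("WORKER"))
              .drop("WORKER"))

labors = (df1.drop("WORKER")
             .withColumn("WRK_CODE", explode(split(col("LABOR"), ",")))
             .withColumn("WRK_DETL", lit("LABOR"))
             .drop("LABOR"))

workers.union(labors).orderBy("ASMT_ID", "WRK_CODE").show()

Note that union (like unionAll) matches columns by position, not by name, so both halves must list ASMT_ID, WRK_CODE, WRK_DETL in the same order, as they do here.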

