Split an array column into rows in PySpark
Posted: 2018-01-17 10:30:55

Question: I have a DataFrame similar to the following:
new_df = spark.createDataFrame([
([['hello', 'productcode'], ['red','color']], 7),
([['hi', 'productcode'], ['blue', 'color']], 8),
([['hoi', 'productcode'], ['black','color']], 7)
], ["items", "frequency"])
new_df.show(3, False)
# +------------------------------------------------------------+---------+
# |items |frequency|
# +------------------------------------------------------------+---------+
# |[WrappedArray(hello, productcode), WrappedArray(red, color)]|7 |
# |[WrappedArray(hi, productcode), WrappedArray(blue, color)] |8 |
# |[WrappedArray(hoi, productcode), WrappedArray(black, color)]|7 |
# +------------------------------------------------------------+---------+
I need to generate a new DataFrame similar to the following:
# +-----------+-----+---------+
# |productcode|color|frequency|
# +-----------+-----+---------+
# |hello      |red  |7        |
# |hi         |blue |8        |
# |hoi        |black|7        |
# +-----------+-----+---------+
Comments:

new_df.select(col("items").getItem(0).getItem(0).alias('productcode'),col("items").getItem(1).getItem(0).alias('color'),col("frequency")).show()
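A minimal runnable version of that comment's approach, assuming the nested pairs always arrive in a fixed order (value at index 0, label at index 1, with the productcode pair before the color pair):

from pyspark.sql.functions import col

# positional access: items[0] is the productcode pair, items[1] the color pair
result = new_df.select(
    col("items").getItem(0).getItem(0).alias("productcode"),
    col("items").getItem(1).getItem(0).alias("color"),
    col("frequency"),
)
result.show()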
Answer 1:
You can convert items to a map:
from pyspark.sql.functions import col, udf

@udf("map<string, string>")
def as_map(vks):
    # each element of vks is a [value, key] pair, e.g. ['hello', 'productcode']
    return {k: v for v, k in vks}
remapped = new_df.select("frequency", as_map("items").alias("items"))
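As a quick sanity check, remapped should now hold one map per row; its schema looks roughly like this (a sketch, exact nullability flags may differ):

remapped.printSchema()
# root
#  |-- frequency: long (nullable = true)
#  |-- items: map (nullable = true)
#  |    |-- key: string
#  |    |-- value: string (valueContainsNull = true)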
Collect the keys:
keys = remapped.select("items").rdd \
    .flatMap(lambda x: x[0].keys()).distinct().collect()
# e.g. ['color', 'productcode'] (the order of distinct() is not guaranteed)
Then select:
remapped.select([col("items")[key] for key in keys] + ["frequency"]).show()
+------------+------------------+---------+
|items[color]|items[productcode]|frequency|
+------------+------------------+---------+
| red| hello| 7|
| blue| hi| 8|
| black| hoi| 7|
+------------+------------------+---------+
Comments:
Thanks for the reply, but my DataFrame has 3 elements and the expected result is different; I don't really need the column names repeated in the rows.