pyspark create dictionary data from pyspark sql dataframe
Posted: 2018-06-02 07:58:10

Question:

I have a pyspark.sql.dataframe.DataFrame with the following structure, which continues like this for every month in each of the countries shown below:
+----------+-------+------------------+
|DATE |COUNTRY|AVG_TEMPS |
+----------+-------+------------------+
|2007-01-01|Åland |0.5939999999999999|
|2007-02-01|Åland |-4.042 |
|2007-03-01|Åland |2.443 |
|2007-04-01|Åland |4.621 |
|2007-05-01|Åland |8.411 |
|2007-06-01|Åland |13.722999999999999|
|2007-07-01|Åland |15.749 |
+----------+-------+------------------+
The expected output is a Python dictionary, as in the question linked below:
pyspark - create DataFrame Grouping columns in map type structure
-----------------------------------------
|   DATE   |        COUNTRY_TEMP        |
-----------------------------------------
|2007-01-01| Åland: 0.593, Alfredo: 2.44|
|2007-01-02| Åland: 0.57,  Alfredo: 2.14|
-----------------------------------------
When I tried to follow it, I got an error:
df_converted = newres.groupBy('DATE').\
agg(collect_list(create_map(col("COUNTRY"))))
Error:
AnalysisException: u"cannot resolve 'map(`COUNTRY`)' due to data type mismatch: map expects a positive even number of arguments.;;
'Aggregate [DATE#179], [DATE#179, collect_list(map(COUNTRY#180), 0, 0) AS collect_list(map(COUNTRY))#189]
+- Project [DATE#146 AS DATE#179, COUNTRY#85 AS COUNTRY#180, AVG_TEMPS#147 AS AVG_TEMPS#181]
   +- Project [dt#82 AS DATE#146, COUNTRY#85, AverageTemperature#83 AS AVG_TEMPS#147]
      +- SubqueryAlias global_temps_by_cntry
         +- Relation[dt#82,AverageTemperature#83,AverageTemperatureUncertainty#84,Country#85] csv"
Can someone help?
Comments:
create_map needs a key column and a value column, just like the example in the link you added.

Answer 1:

As @user3689574 mentioned, try passing the value column to create_map as well:
df = spark.createDataFrame([('2007-01-01', 'Aland', 0.593), ('2007-01-01', 'Alfredo', 2.44),('2007-01-02', 'Aland', 2.57), ('2007-01-02', 'Alfredo', 2.14)], ['DATE', 'COUNTRY', 'AVG_TEMPS'])
df.show()
+----------+-------+---------+
|      DATE|COUNTRY|AVG_TEMPS|
+----------+-------+---------+
|2007-01-01| Aland| 0.593|
|2007-01-01|Alfredo| 2.44|
|2007-01-02| Aland| 2.57|
|2007-01-02|Alfredo| 2.14|
+----------+-------+---------+
from pyspark.sql.functions import collect_list, col, create_map
df2 = df.groupBy("DATE").agg(collect_list(create_map(col("COUNTRY"), col("AVG_TEMPS"))).alias("COUNTRY_TEMP"))
df2.show(4, False)
+----------+-------------------------------------+
|DATE |COUNTRY_TEMP |
+----------+-------------------------------------+
|2007-01-01|[[Aland -> 0.593], [Alfredo -> 2.44]]|
|2007-01-02|[[Aland -> 2.57], [Alfredo -> 2.14]] |
+----------+-------------------------------------+
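Note that this gives each date an *array of single-entry maps*, not yet the plain Python dictionary the question asked for. One way to finish the job is to collect the rows to the driver and merge the maps in ordinary Python. A minimal sketch, where the `rows` literal simply mimics what `df2.collect()` would hand back (the helper name `rows_to_dict` is illustrative, not from the original post):

```python
# Mimic the driver-side shape of df2.collect(): one record per DATE,
# each carrying a list of single-entry {country: temperature} maps.
rows = [
    {"DATE": "2007-01-01", "COUNTRY_TEMP": [{"Aland": 0.593}, {"Alfredo": 2.44}]},
    {"DATE": "2007-01-02", "COUNTRY_TEMP": [{"Aland": 2.57}, {"Alfredo": 2.14}]},
]

def rows_to_dict(rows):
    """Merge each row's list of one-entry maps into a single dict, keyed by DATE."""
    result = {}
    for row in rows:
        merged = {}
        for entry in row["COUNTRY_TEMP"]:
            merged.update(entry)
        result[row["DATE"]] = merged
    return result

country_temps = rows_to_dict(rows)
print(country_temps["2007-01-01"])  # {'Aland': 0.593, 'Alfredo': 2.44}
```

This keeps the heavy grouping in Spark and only reshapes the small, already-aggregated result on the driver.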