Extract all keys from JSON object from Hadoop table using Python Spark
Posted: 2020-01-30 16:29:14

I have a Hadoop table named table_with_json_string.
For example:
+-----------------------------------+-----------------------------------+
|           creation_date           |         json_string_colum         |
+-----------------------------------+-----------------------------------+
|            2020-01-29             | {"keys" : {"1" : "a", "2" : "b"}} |
+-----------------------------------+-----------------------------------+
Desired output:
+-----------------------------------+-----------------------------------+----------+
|           creation_date           |         json_string_colum         |   keys   |
+-----------------------------------+-----------------------------------+----------+
|            2020-01-29             | {"keys" : {"1" : "a", "2" : "b"}} |    1     |
|            2020-01-29             | {"keys" : {"1" : "a", "2" : "b"}} |    2     |
+-----------------------------------+-----------------------------------+----------+
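For reproducibility, here is a minimal stand-in for the Hadoop table (a sketch: it assumes an active SparkSession and takes the JSON payload from the sample above):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical replacement for the Hadoop table, holding the sample row above.
df = spark.createDataFrame(
    [("2020-01-29", '{"keys" : {"1" : "a", "2" : "b"}}')],
    ["creation_date", "json_string_colum"],
)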
What I tried:
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, MapType

# The JSON string holds one top-level field "keys" whose value is a map.
schema = StructType([StructField("keys", MapType(StringType(), StringType()), True)])

df = spark.table('table_with_json_string').select(col("creation_date"), col("json_string_colum"))
df = df.withColumn("map_json_column", from_json("json_string_colum", schema))
df.show(1, False)
+-------------+---------------------------------+----------------------+
|creation_date|json_string_colum                |map_json_column       |
+-------------+---------------------------------+----------------------+
|2020-01-29   |{"keys" : {"1" : "a", "2" : "b"}}|[Map(1 ->'a',2 ->'b')]|
+-------------+---------------------------------+----------------------+
1 - How can I extract the keys from this MapType object? I understand that I need to use the explode function to get to the desired table format, but I still don't know how to extract the keys of the JSON object into an array format.

I'm open to other approaches if they make the goal easier to reach.
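One possible fallback (a rough sketch, assuming the JSON string is always well formed) would be to parse the string with Python's json module inside a UDF and explode the resulting array; extract_keys below is a hypothetical helper, not an existing API:

import json

from pyspark.sql import functions as f
from pyspark.sql import types as t

# Hypothetical helper: parse the JSON string and return the keys of the
# top-level "keys" object as an array of strings.
@f.udf(t.ArrayType(t.StringType()))
def extract_keys(json_str):
    return list(json.loads(json_str)["keys"].keys())

df_alt = df.withColumn("keys", f.explode(extract_keys("json_string_colum")))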
Answer 1:

Building on what you have done so far, you can get the keys as follows:
from pyspark.sql import functions as f
df = (df
.withColumn("map_json_column", f.from_json("json_string_colum",schema))
.withColumn("keys", f.map_keys("map_json_column.keys"))
.drop("map_json_column")
.withColumn("keys", f.explode("keys"))
)
Result:
+-------------+--------------------+----+
|creation_date|   json_string_colum|keys|
+-------------+--------------------+----+
|   2020-01-29|{"keys" : {"1" : ...|   1|
|   2020-01-29|{"keys" : {"1" : ...|   2|
+-------------+--------------------+----+
Here are the detailed steps that lead to the answer above:
>>> from pyspark.sql import functions as f
>>> df.show()
+-------------+--------------------+
|creation_date|   json_string_colum|
+-------------+--------------------+
|   2020-01-29|{"keys" : {"1" : ...|
+-------------+--------------------+
>>> df.withColumn("map_json_column", f.from_json("json_string_colum",schema)).show()
+-------------+--------------------+------------------+
|creation_date|   json_string_colum|   map_json_column|
+-------------+--------------------+------------------+
|   2020-01-29|{"keys" : {"1" : ...|[[1 -> a, 2 -> b]]|
+-------------+--------------------+------------------+
>>> df.withColumn("map_json_column", f.from_json("json_string_colum",schema)).withColumn("keys", f.map_keys("map_json_column.keys")).show()
+-------------+--------------------+------------------+------+
|creation_date|   json_string_colum|   map_json_column|  keys|
+-------------+--------------------+------------------+------+
|   2020-01-29|{"keys" : {"1" : ...|[[1 -> a, 2 -> b]]|[1, 2]|
+-------------+--------------------+------------------+------+
>>> df.withColumn("map_json_column", f.from_json("json_string_colum",schema)).withColumn("keys", f.map_keys("map_json_column.keys")).drop("map_json_column").show()
+-------------+--------------------+------+
|creation_date|   json_string_colum|  keys|
+-------------+--------------------+------+
|   2020-01-29|{"keys" : {"1" : ...|[1, 2]|
+-------------+--------------------+------+
>>> df.withColumn("map_json_column", f.from_json("json_string_colum",schema)).withColumn("keys", f.map_keys("map_json_column.keys")).drop("map_json_column").withColumn("keys", f.explode("keys")).show()
+-------------+--------------------+----+
|creation_date|   json_string_colum|keys|
+-------------+--------------------+----+
|   2020-01-29|{"keys" : {"1" : ...|   1|
|   2020-01-29|{"keys" : {"1" : ...|   2|
+-------------+--------------------+----+
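For what it's worth, the whole chain also collapses into a single select on the original DataFrame, since from_json, map_keys, and explode compose directly (just a stylistic variant of the steps above):

df_keys = df.select(
    "creation_date",
    "json_string_colum",
    f.explode(f.map_keys(f.from_json("json_string_colum", schema)["keys"])).alias("keys"),
)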
To be clear, the map_keys function I used above is available in PySpark 2.3+.
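For anyone stuck on an older version, a UDF can stand in for map_keys (a rough sketch, not something I have tested across releases):

from pyspark.sql import functions as f
from pyspark.sql import types as t

# Fallback for Spark < 2.3: a UDF that returns the keys of a MapType column.
map_keys_udf = f.udf(lambda m: list(m.keys()) if m else [],
                     t.ArrayType(t.StringType()))

df = (df
    .withColumn("map_json_column", f.from_json("json_string_colum", schema))
    .withColumn("keys", f.explode(map_keys_udf("map_json_column.keys")))
    .drop("map_json_column")
)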
Comments:
Quick FYI: the map_keys function is available as of version 2.3. Reference: spark.apache.org/docs/latest/api/python/…
@AlvaroJoao Yes. I added that to my answer for clarity.