[Spark][Python]Mapping Single Rows to Multiple Pairs


Mapping Single Rows to Multiple Pairs
Goal:

Transform input data like the following,

Input Data

00001 sku010:sku933:sku022
00002 sku912:sku331
00003 sku888:sku022:sku010:sku594
00004 sku411


into this form: each key is paired with each of its values, one pair per line:

(00001,sku010)
(00001,sku933)
(00001,sku022)

...
(00002,sku912)
(00002,sku331)
(00003,sku888)

This is what is meant by "Mapping Single Rows to Multiple Pairs".
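Before turning to Spark, the pairing logic itself can be sketched in plain Python (a minimal illustration added here for clarity; the function name `explode_row` is hypothetical, not from the original post):

```python
def explode_row(line):
    """Split a tab-separated line 'key<TAB>v1:v2:...' into (key, value) pairs."""
    key, _, values = line.partition("\t")
    return [(key, v) for v in values.split(":")]

lines = [
    "00001\tsku010:sku933:sku022",
    "00002\tsku912:sku331",
]
# Flatten: one (key, sku) tuple per value in each row.
pairs = [pair for line in lines for pair in explode_row(line)]
# [('00001', 'sku010'), ('00001', 'sku933'), ('00001', 'sku022'),
#  ('00002', 'sku912'), ('00002', 'sku331')]
```

This is exactly the expansion Spark's `flatMapValues` performs, only distributed across an RDD instead of a local list.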

The steps are as follows:

$ vim act001.txt
$ cat act001.txt
00001 sku010:sku933:sku022
00002 sku912:sku331
00003 sku888:sku022:sku010:sku594
00004 sku411
$ hdfs dfs -put act001.txt
$ hdfs dfs -cat act001.txt
00001 sku010:sku933:sku022
00002 sku912:sku331
00003 sku888:sku022:sku010:sku594
00004 sku411

Then, in pyspark (the RDD `mydata` is assumed to have been loaded in an earlier cell, e.g. with `mydata = sc.textFile("act001.txt")`):

In [6]: mydata01 = mydata.map(lambda line: line.split("\t"))

In [7]: type(mydata01)
Out[7]: pyspark.rdd.PipelinedRDD

In [8]: mydata02 = mydata01.map(lambda fields: (fields[0], fields[1]))

In [9]: type(mydata02)
Out[9]: pyspark.rdd.PipelinedRDD

In [11]: mydata03 = mydata02.flatMapValues(lambda skus: skus.split(":"))

In [12]: type(mydata03)
Out[12]: pyspark.rdd.PipelinedRDD

In [13]: mydata03.take(1)
Out[13]: [(u'00001', u'sku010')]
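`flatMapValues` applies a function to the value of each (key, value) pair and emits one output pair per element of the iterable the function returns, keeping the key unchanged. A rough pure-Python emulation of this behavior (`flat_map_values` is a hypothetical helper written for illustration, not part of the Spark API):

```python
def flat_map_values(pairs, f):
    """For each (key, value), yield (key, x) for every x in f(value),
    mimicking the behavior of RDD.flatMapValues on a local iterable."""
    for key, value in pairs:
        for x in f(value):
            yield (key, x)

data = [("00003", "sku888:sku022:sku010:sku594"), ("00004", "sku411")]
result = list(flat_map_values(data, lambda skus: skus.split(":")))
# [('00003', 'sku888'), ('00003', 'sku022'), ('00003', 'sku010'),
#  ('00003', 'sku594'), ('00004', 'sku411')]
```

Note that a key with a single value, like `00004`, still produces one pair; keys are never dropped, only fanned out.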
