read content of Column&lt;COLUMN-NAME&gt; in pyspark

Posted: 2016-12-22 08:45:29

Problem description:

I am using Spark 1.5.0.

I created a DataFrame as shown below and am trying to read one of its columns:

>>> words = tokenizer.transform(sentenceData)
>>> words
DataFrame[label: bigint, sentence: string, words: array<string>]
>>> words['words']
Column<words>

I want to read all the words (the vocabulary) across the sentences. How do I read the contents of this column?

Edit 1: the error persists

I am now running this in Spark 2.0.0 and get this error:

>>> wordsData.show()
+--------------------+--------------------+
|                desc|               words|
+--------------------+--------------------+
|Virat is good bat...|[virat, is, good,...|
|     sachin was good| [sachin, was, good]|
|but modi sucks bi...|[but, modi, sucks...|
| I love the formulas|[i, love, the, fo...|
+--------------------+--------------------+

>>> wordsData
DataFrame[desc: string, words: array<string>]


>>> vocab = wordsData.select(explode('words')).rdd.flatMap(lambda x: x)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/BIG-DATA/spark-2.0.0-bin-hadoop2.7/python/pyspark/rdd.py", line 305, in flatMap
    return self.mapPartitionsWithIndex(func, preservesPartitioning)
  File "/opt/BIG-DATA/spark-2.0.0-bin-hadoop2.7/python/pyspark/rdd.py", line 330, in mapPartitionsWithIndex
    return PipelinedRDD(self, f, preservesPartitioning)
  File "/opt/BIG-DATA/spark-2.0.0-bin-hadoop2.7/python/pyspark/rdd.py", line 2383, in __init__
    self._jrdd_deserializer = self.ctx.serializer
AttributeError: 'SparkSession' object has no attribute 'serializer'

Edit: Solution 1 - Link

Question comments:

Solution 1:

You can do:

from pyspark.sql.functions import explode

words.select(explode('words')).rdd.flatMap(lambda x: x)

Comments:
