How to get item from vector struct in PySpark
Posted: 2020-02-27 10:46:24

【Question】: I am trying to get an array of scores out of a TF-IDF result vector. For example:
rescaledData.select("words", "features").show()
+-----------------------------+---------------------------------------------------------------------------------------------+
|words |features |
+-----------------------------+---------------------------------------------------------------------------------------------+
|[a, b, c] |(4527,[0,1,31],[0.6363067860791387,1.0888040725098247,4.371858972705023]) |
|[d] |(4527,[8],[2.729945780576634]) |
+-----------------------------+---------------------------------------------------------------------------------------------+
rescaledData.select(rescaledData['features'].getItem('values')).show()
But instead of an array, I get an error:
AnalysisException: u"Can't extract value from features#1786: need struct type but got struct<type:tinyint,size:int,indices:array<int>,values:array<double>>;"
What I want is:
+--------------------------+-----------------------------------------------------------+
|words |features |
+--------------------------+-----------------------------------------------------------+
|[a, b, c] |[0.6363067860791387, 1.0888040725098247, 4.371858972705023]|
+--------------------------+-----------------------------------------------------------+
How can I solve this?
【Solution 1】: Another option is to create a udf that extracts the values from the sparse vector:
from pyspark.sql.functions import udf
from pyspark.sql.types import DoubleType, ArrayType

# v.values is a NumPy array holding the SparseVector's stored (non-zero) values
sparse_values = udf(lambda v: v.values.tolist(), ArrayType(DoubleType()))
df.withColumn("features", sparse_values("features")).show(truncate=False)
+---------+-----------------------------------------------------------+
|word |features |
+---------+-----------------------------------------------------------+
|[a, b, c]|[0.6363067860791387, 1.0888040725098247, 4.371858972705023]|
|[d] |[2.729945780576634] |
+---------+-----------------------------------------------------------+
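For reference, a udf-free alternative, assuming Spark 3.0 or later: pyspark.ml.functions.vector_to_array converts an ML vector column into an array<double> column. Unlike the udf above, it densifies the vector, so the resulting array has the vector's full length (4527 here), zeros included:

from pyspark.ml.functions import vector_to_array

# Returns every slot of the vector, not just the stored non-zero values
df.withColumn("features", vector_to_array("features")).show(truncate=False)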
【Solution 2】: Prepare the data:
from pyspark.ml.linalg import SparseVector
from pyspark.sql import Row

# SparseVector takes the vector size plus an {index: value} dict of non-zero entries
df = spark.createDataFrame(
    [
        [["a", "b", "c"], SparseVector(4527, {0: 0.6363067860791387, 1: 1.0888040725098247, 31: 4.371858972705023})],
        [["d"], SparseVector(4527, {8: 2.729945780576634})],
    ], ["word", "features"])
Use the RDD to get the values out of each SparseVector:
df.rdd.map(lambda x: Row(word=x["word"], features=x["features"].values.tolist())).toDF().show()
+--------------------+---------+
| features| word|
+--------------------+---------+
|[0.63630678607913...|[a, b, c]|
| [2.729945780576634]| [d]|
+--------------------+---------+
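If the vocabulary positions are needed alongside the scores (common with TF-IDF output), here is a sketch of the same map that also keeps SparseVector.indices:

df.rdd.map(lambda x: Row(
    word=x["word"],
    indices=x["features"].indices.tolist(),  # positions in the vocabulary
    values=x["features"].values.tolist(),    # the corresponding TF-IDF scores
)).toDF().show(truncate=False)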