In pyspark, how to create an array column that is a summation of two or more array columns?
Posted: 2021-12-29 14:16:41

I have several array-type columns and DenseVector-type columns in my pyspark dataframe. I want to create new columns that are the element-wise sum of these columns. Below is code that summarizes the problem:
Setup:
from pyspark.sql import SparkSession
from pyspark.sql.functions import col
from pyspark.ml.functions import vector_to_array
from pyspark.ml.linalg import VectorUDT, DenseVector
from pyspark.sql.functions import udf, array, lit
spark = SparkSession.builder.getOrCreate()
data = [(1,4),(2,5),(3,6)]
a = spark.createDataFrame(data)
f = udf(lambda x: DenseVector(x), returnType=VectorUDT())
import pyspark.sql.functions as F
@F.udf(returnType=VectorUDT())
def add_cons_dense_col(val):
    # wrap a constant array literal into a DenseVector
    return DenseVector(val)
a=a.withColumn('d1', add_cons_dense_col(F.array([F.lit(1.), F.lit(1.)])))
a=a.withColumn('d2', add_cons_dense_col(F.array([F.lit(1.), F.lit(1.)])))
a=a.withColumn('l1', F.array([F.lit(1.), F.lit(1.)]))
a=a.withColumn('l2', F.array([F.lit(1.), F.lit(1.)]))
a.show()
output:
+---+---+---------+---------+----------+----------+
| _1| _2| d1| d2| l1| l2|
+---+---+---------+---------+----------+----------+
| 1| 4|[1.0,1.0]|[1.0,1.0]|[1.0, 1.0]|[1.0, 1.0]|
| 2| 5|[1.0,1.0]|[1.0,1.0]|[1.0, 1.0]|[1.0, 1.0]|
| 3| 6|[1.0,1.0]|[1.0,1.0]|[1.0, 1.0]|[1.0, 1.0]|
+---+---+---------+---------+----------+----------+
I can do any of the following on _1 and _2, all with the same effect:
a.withColumn('l_sum', a._1+a._2)
a.withColumn('l_sum', a['_1']+a['_2'])
a.withColumn('l_sum', col('_1') + col('_2'))
I'd like to be able to perform the same addition on d1, d2 and on l1, l2, i.e. element-wise addition of the arrays or DenseVectors, but all three of the analogous approaches fail:
For example:
a.withColumn('l_sum', a.d1+a.d2).show()
a.withColumn('l_sum', a['d1']+a['d2']).show()
a.withColumn('l_sum', col('d1') + col('d2')).show()
But I get:
output:
~/miniconda3/envs/pyspark/lib/python3.9/site-packages/pyspark/sql/dataframe.py in withColumn(self, colName, col)
2476 if not isinstance(col, Column):
2477 raise TypeError("col should be Column")
-> 2478 return DataFrame(self._jdf.withColumn(colName, col._jc), self.sql_ctx)
2479
2480 def withColumnRenamed(self, existing, new):
~/miniconda3/envs/pyspark/lib/python3.9/site-packages/py4j/java_gateway.py in __call__(self, *args)
1307
1308 answer = self.gateway_client.send_command(command)
-> 1309 return_value = get_return_value(
1310 answer, self.gateway_client, self.target_id, self.name)
1311
~/miniconda3/envs/pyspark/lib/python3.9/site-packages/pyspark/sql/utils.py in deco(*a, **kw)
115 # Hide where the exception came from that shows a non-Pythonic
116 # JVM exception message.
--> 117 raise converted from None
118 else:
119 raise
AnalysisException: cannot resolve '(d1 + d2)' due to data type mismatch: '(d1 + d2)' requires (numeric or interval or interval day to second or interval year to month) type, not struct<type:tinyint,size:int,indices:array<int>,values:array<double>>;
'Project [_1#0L, _2#1L, d1#5, d2#10, l1#15, l2#21, (d1#5 + d2#10) AS l_sum#365]
+- Project [_1#0L, _2#1L, d1#5, d2#10, l1#15, array(1.0, 1.0) AS l2#21]
+- Project [_1#0L, _2#1L, d1#5, d2#10, array(1.0, 1.0) AS l1#15]
+- Project [_1#0L, _2#1L, d1#5, add_cons_dense_col(array(1.0, 1.0)) AS d2#10]
+- Project [_1#0L, _2#1L, add_cons_dense_col(array(1.0, 1.0)) AS d1#5]
+- LogicalRDD [_1#0L, _2#1L], false
Could you help me create a column that is the element-wise sum of array-type or DenseVector-type columns?
Answer 1:

Spark 2.4
Spark does not allow native operations to be applied to Vector columns through expressions, so a UDF is needed for the vector case.

For the element-wise sum of the arrays, we can use arrays_zip to zip the arrays together and then apply the higher-order function transform to sum each zipped struct.
@F.udf(returnType=VectorUDT())
def sum_vector(v1: VectorUDT, v2: VectorUDT) -> VectorUDT:
    # DenseVector implements element-wise +, so the UDF can add directly
    return v1 + v2

(a.withColumn("vector_sum", sum_vector(F.col("d1"), F.col("d2")))
 .withColumn("array_sum", F.expr("transform(arrays_zip(l1, l2), x -> x.l1 + x.l2)"))
).show()
"""
+---+---+---------+---------+----------+----------+----------+----------+
| _1| _2| d1| d2| l1| l2|vector_sum| array_sum|
+---+---+---------+---------+----------+----------+----------+----------+
| 1| 4|[1.0,1.0]|[1.0,1.0]|[1.0, 1.0]|[1.0, 1.0]| [2.0,2.0]|[2.0, 2.0]|
| 2| 5|[1.0,1.0]|[1.0,1.0]|[1.0, 1.0]|[1.0, 1.0]| [2.0,2.0]|[2.0, 2.0]|
| 3| 6|[1.0,1.0]|[1.0,1.0]|[1.0, 1.0]|[1.0, 1.0]| [2.0,2.0]|[2.0, 2.0]|
+---+---+---------+---------+----------+----------+----------+----------+
"""
Spark 3.1+
Spark 3.0 introduced the vector_to_array function and Spark 3.1 added array_to_vector; with these, vectors can be summed without a UDF by converting them to arrays and back. Spark 3.1 also added zip_with, which applies an element-wise operation to two arrays.
from pyspark.sql import Column
from pyspark.ml.functions import vector_to_array, array_to_vector

def array_sum_expression_builder(c1: Column, c2: Column) -> Column:
    return F.zip_with(c1, c2, lambda x, y: x + y)

result = (a.withColumn("vector_sum", array_to_vector(
                array_sum_expression_builder(
                    vector_to_array(F.col("d1")),
                    vector_to_array(F.col("d2")))))
           .withColumn("array_sum", array_sum_expression_builder(F.col("l1"), F.col("l2")))
)
result.show()
"""
+---+---+---------+---------+----------+----------+----------+----------+
| _1| _2| d1| d2| l1| l2|vector_sum| array_sum|
+---+---+---------+---------+----------+----------+----------+----------+
| 1| 4|[1.0,1.0]|[1.0,1.0]|[1.0, 1.0]|[1.0, 1.0]| [2.0,2.0]|[2.0, 2.0]|
| 2| 5|[1.0,1.0]|[1.0,1.0]|[1.0, 1.0]|[1.0, 1.0]| [2.0,2.0]|[2.0, 2.0]|
| 3| 6|[1.0,1.0]|[1.0,1.0]|[1.0, 1.0]|[1.0, 1.0]| [2.0,2.0]|[2.0, 2.0]|
+---+---+---------+---------+----------+----------+----------+----------+
"""
result.printSchema()
"""
root
|-- _1: long (nullable = true)
|-- _2: long (nullable = true)
|-- d1: vector (nullable = true)
|-- d2: vector (nullable = true)
|-- l1: array (nullable = false)
| |-- element: double (containsNull = false)
|-- l2: array (nullable = false)
| |-- element: double (containsNull = false)
|-- vector_sum: vector (nullable = true)
|-- array_sum: array (nullable = false)
| |-- element: double (containsNull = true)
"""
Answer 2:

For an element-wise sum you can also use:
a = (a
.withColumn('elementWiseSum', F.expr('transform(l1, (element, index) -> element + element_at(l2, index + 1))'))
)
a.show()
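A note added here (not part of the original answer): the index supplied by transform is 0-based while element_at uses 1-based indexing, hence the index + 1 shift; on the sample data every row yields [2.0, 2.0]. The same index-based expression can be applied to the vector columns after converting them, a sketch assuming Spark 3.0+ for vector_to_array:

from pyspark.ml.functions import vector_to_array

a = (a
     .withColumn("d1_arr", vector_to_array(F.col("d1")))
     .withColumn("d2_arr", vector_to_array(F.col("d2")))
     # same pattern as above, applied to the converted vector columns
     .withColumn("vectorElementWiseSum",
                 F.expr("transform(d1_arr, (element, index) -> element + element_at(d2_arr, index + 1))")))
a.show()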