How to find mean of grouped Vector columns in Spark SQL?


Posted: 2017-06-03 13:35:19

[Question]:

I created a RelationalGroupedDataset by calling instances.groupBy(instances.col("property_name")):

val x = instances.groupBy(instances.col("property_name"))

How can I write a user-defined aggregate function that computes Statistics.colStats().mean over each group?

Thanks!
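To make the goal concrete, here is a minimal pure-Python sketch (no Spark, hypothetical data) of what "group by a label column and take the element-wise mean of the feature vectors" computes; the Spark answers below produce the same result at scale:

```python
from collections import defaultdict

def grouped_vector_mean(rows):
    """rows: iterable of (label, feature_vector) pairs.
    Returns {label: element-wise mean vector}."""
    sums = {}
    counts = defaultdict(int)
    for label, vec in rows:
        if label not in sums:
            sums[label] = [0.0] * len(vec)
        for i, x in enumerate(vec):
            sums[label][i] += x
        counts[label] += 1
    return {label: [s / counts[label] for s in vec_sum]
            for label, vec_sum in sums.items()}

rows = [("a", [1.0, 2.0]), ("a", [3.0, 4.0]), ("b", [5.0, 6.0])]
print(grouped_vector_mean(rows))  # {'a': [2.0, 3.0], 'b': [5.0, 6.0]}
```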

[Comments]:

Are you just trying to get the mean of a column? Can you explain what input and output you expect? What is missing from the link you provided?

Each row has a label and a feature vector. I group the rows by label and want the element-wise mean of the feature vectors. What the link I provided is missing is a solution.

What's wrong with instances.groupBy(instances.col("property_name")).agg(avg("col1"), avg("col2")...)?

Would I have to write out ("col i".."col n")? The vectors often have thousands of dimensions, and millions of rows are not uncommon.

[Answer 1]:

Here is another way:

from pyspark.sql import types as T
import pyspark.sql.functions as f

# Convert an ML vector (dense or sparse) into a plain list of floats
def dense_to_array(v):
    return [float(x) for x in v]

dense_to_array_udf = f.udf(dense_to_array, T.ArrayType(T.FloatType()))

df = center_data.withColumn('features_array', dense_to_array_udf('features'))

# n_features is the dimensionality of the vectors (the original snippet
# referenced an undefined variable `xx` here)
n_features = len(df.select('features_array').first()[0])
df_agg = df.agg(f.array(*[f.avg(f.col('features_array')[i]) for i in range(n_features)]).alias("averages"))
df_agg.show()

Taken from https://danvatterott.com/blog/2018/07/08/aggregating-sparse-and-dense-vectors-in-pyspark/

[Comments]:

[Answer 2]:

Spark >= 2.4

You can use Summarizer:

import org.apache.spark.ml.stat.Summarizer

val dfNew = df.as[(Int, org.apache.spark.mllib.linalg.Vector)]
  .map { case (group, v) => (group, v.asML) }
  .toDF("group", "features")


dfNew
  .groupBy($"group")
  .agg(Summarizer.mean($"features").alias("means"))
  .show(false)
+-----+--------------------------------------------------------------------+
|group|means                                                               |
+-----+--------------------------------------------------------------------+
|1    |[8.740630742016827E12,2.6124956666260462E14,3.268714653521495E14]   |
|6    |[2.1153266920139112E15,2.07232483974322592E17,6.2715161747245427E17]|
|3    |[6.3781865566442836E13,8.359124419656149E15,1.865567821598214E14]   |
|5    |[4.270201403521642E13,6.561211706745676E13,8.395448246737938E15]    |
|9    |[3.577032684241448E16,2.5432362841314468E16,2.3744826986293008E17]  |
|4    |[2.339253775419023E14,8.517531902022505E13,3.055115780965264E15]    |
|8    |[8.029924756674456E15,7.284873600992855E17,3.08621303029924E15]     |
|7    |[3.2275104122699105E15,7.5472363442090208E16,7.022556624056291E14]  |
|10   |[1.2412562261010224E16,5.741115713769269E15,4.34336779990902E16]    |
|2    |[1.085528901765636E16,7.633370115869126E12,6.952642232477029E11]    |
+-----+--------------------------------------------------------------------+

Spark < 2.4

You cannot use a UserDefinedAggregateFunction here, but you can create an Aggregator backed by the same MultivariateOnlineSummarizer:

import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.Aggregator
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.sql.{Encoder, Encoders}
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder
import org.apache.spark.mllib.stat.MultivariateOnlineSummarizer

type Summarizer = MultivariateOnlineSummarizer

case class VectorSumarizer(f: String) extends Aggregator[Row, Summarizer, Vector]
    with Serializable {
  def zero = new Summarizer
  def reduce(acc: Summarizer, x: Row) = acc.add(x.getAs[Vector](f))
  def merge(acc1: Summarizer, acc2: Summarizer) = acc1.merge(acc2)

  // This can be easily generalized to support additional statistics
  def finish(acc: Summarizer) = acc.mean

  def bufferEncoder: Encoder[Summarizer] = Encoders.kryo[Summarizer]
  def outputEncoder: Encoder[Vector] = ExpressionEncoder()
}

Example usage:

import org.apache.spark.mllib.random.RandomRDDs.logNormalVectorRDD

val df = spark.sparkContext.union((1 to 10).map(i => 
  logNormalVectorRDD(spark.sparkContext, i, 10, 10000, 3, 1).map((i, _))
)).toDF("group", "features")

df
 .groupBy($"group")
 .agg(VectorSumarizer("features").toColumn.alias("means"))
 .show(10, false)

Result:

+-----+---------------------------------------------------------------------+
|group|means                                                                |
+-----+---------------------------------------------------------------------+
|1    |[1.0495089547176625E15,3.057434217141363E13,8.180842267228103E13]    |
|6    |[8.578684690153061E15,1.865830977115807E14,1.0690831496167929E15]    |
|3    |[1.0347016972600206E14,4.952536828257269E15,8.498944924018858E13]    |
|5    |[2.2135916061736424E16,1.5137112888230388E14,8.154750681129871E14]   |
|9    |[6.496030194110956E15,6.2697260327708368E16,3.7282521260607136E16]   |
|4    |[2.4518629692233766E14,1.959083619621557E13,5.278689364420169E13]    |
|8    |[1.806052212008392E16,2.0410654639336184E16,6.409495244104527E15]    |
|7    |[1.32896092658714784E17,1.2074042288752348E15,1.10951746294648096E17]|
|10   |[1.6131199347666342E19,1.24546214832341616E17,8.5265750194040304E16] |
|2    |[4.330324858747168E12,6.19671483053885E12,2.2416578004282832E13]     |
+-----+---------------------------------------------------------------------+

Notes

Note that MultivariateOnlineSummarizer requires the "old-style" mllib.linalg.Vector. It will not work with ml.linalg.Vector; to support those you have to convert between new and old types. Performance-wise, you may be better off with RDDs.
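As a rough illustration of the zero/reduce/merge/finish lifecycle the Aggregator above relies on, here is a plain-Python sketch (no Spark; dense vectors as lists; the class name is my own) of a mergeable mean accumulator:

```python
class MeanSummarizer:
    """Mergeable element-wise mean accumulator, mirroring the
    zero/reduce/merge/finish contract of a Spark Aggregator."""

    def __init__(self):          # zero: empty accumulator
        self.sums = None
        self.count = 0

    def add(self, vec):          # reduce: fold one row into the accumulator
        if self.sums is None:
            self.sums = [0.0] * len(vec)
        for i, x in enumerate(vec):
            self.sums[i] += x
        self.count += 1
        return self

    def merge(self, other):      # merge: combine partial results (e.g. from other partitions)
        if other.sums is not None:
            if self.sums is None:
                self.sums = [0.0] * len(other.sums)
            for i, s in enumerate(other.sums):
                self.sums[i] += s
            self.count += other.count
        return self

    def mean(self):              # finish: extract the final statistic
        return [s / self.count for s in self.sums]

a = MeanSummarizer().add([1.0, 3.0]).add([3.0, 5.0])
b = MeanSummarizer().add([5.0, 1.0])
print(a.merge(b).mean())  # [3.0, 3.0]
```

Because merge is associative, the partial accumulators can be combined in any order, which is exactly what lets Spark compute the grouped mean in parallel.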

[Comments]:
