Aggregate over column arrays in DataFrame in PySpark?

Submitted by 半城伤御伤魂 on 2020-01-31 18:13:08

Question


Let's say I have the following DataFrame:

[Row(user='bob', values=[0.5, 0.3, 0.2]),
 Row(user='bob', values=[0.1, 0.3, 0.6]),
 Row(user='bob', values=[0.8, 0.1, 0.1])]

I would like to groupBy user and do something like avg(values), where the average is taken over each index of the values array, like this:

[Row(user='bob', averages=[0.466667, 0.233333, 0.3])]

How can I do this in PySpark?


Answer 1:


You can expand the array into one column per index, compute the average of each column, and reassemble the results into an array.

Python

from pyspark.sql.functions import array, avg, col

# Determine the array length from the first row.
n = len(df.select("values").first()[0])

# Average each array position independently, then collect the
# per-index averages back into a single array column.
df.groupBy("user").agg(
    array(*[avg(col("values")[i]) for i in range(n)]).alias("averages")
)
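
For reference, a runnable sketch using the rows from the question; it assumes an existing SparkSession named spark, and the output values shown in the comment are approximate:

from pyspark.sql.functions import array, avg, col

# Example data from the question (assumes a SparkSession named `spark`).
df = spark.createDataFrame(
    [("bob", [0.5, 0.3, 0.2]),
     ("bob", [0.1, 0.3, 0.6]),
     ("bob", [0.8, 0.1, 0.1])],
    ["user", "values"],
)

n = len(df.select("values").first()[0])
averages = df.groupBy("user").agg(
    array(*[avg(col("values")[i]) for i in range(n)]).alias("averages")
)
averages.show(truncate=False)
# user=bob, averages ≈ [0.4667, 0.2333, 0.3]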

Scala

import spark.implicits._
import org.apache.spark.sql.functions.{avg, size}

val df = Seq(
  ("bob", Seq(0.5, 0.3, 0.2)),
  ("bob", Seq(0.1, 0.3, 0.6))
).toDF("user", "values")

// Array length, taken from the first row.
val n = df.select(size($"values")).as[Int].first

// One column per array position: values(0), values(1), ..., values(n - 1).
val values = (0 until n).map(i => $"values"(i))

df.select($"user" +: values: _*).groupBy($"user").avg()


Source: https://stackoverflow.com/questions/38982231/aggregate-over-column-arrays-in-dataframe-in-pyspark
