Question:
I want to find the cleanest way to apply the describe
function to a grouped DataFrame (this question could also grow into applying any DataFrame function to a grouped DataFrame).
I tested a grouped aggregate pandas UDF with no luck. There is always the option of passing each statistic inside the agg
function, but that doesn't feel like the proper way.
If we have a sample dataframe:
df = spark.createDataFrame(
    [(1, 1.0), (1, 2.0), (2, 3.0), (2, 5.0), (2, 10.0)],
    ("id", "v"))
The idea would be to do something similar to Pandas:
df.groupby("id").describe()
where the result would be:
        v
    count  mean       std  min   25%  50%   75%   max
id
1     2.0   1.5  0.707107  1.0  1.25  1.5  1.75   2.0
2     3.0   6.0  3.605551  3.0  4.00  5.0  7.50  10.0
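For reference, the grouped-map approach I was attempting looks roughly like this (a sketch, assuming Spark 3.0+ with pyarrow installed; a grouped aggregate pandas UDF has to return a single scalar per group, which is why it doesn't fit describe, whereas applyInPandas can return a whole DataFrame per group; the describe_group helper and the p25/p50/p75 column names are just illustrative):

import pandas as pd

schema = ("id long, count double, mean double, std double, min double, "
          "p25 double, p50 double, p75 double, max double")

def describe_group(pdf: pd.DataFrame) -> pd.DataFrame:
    # Run pandas describe() on the value column of one group and keep the group key.
    d = pdf["v"].describe().to_frame().T
    d.columns = ["count", "mean", "std", "min", "p25", "p50", "p75", "max"]
    d.insert(0, "id", pdf["id"].iloc[0])
    return d

df.groupby("id").applyInPandas(describe_group, schema=schema).show()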
Thanks.
Answer 1:
Try this:
df.groupby("id").agg(F.count('v').alias('count'), F.mean('v').alias('mean'), F.stddev('v').alias('std'), F.min('v').alias('min'), F.expr('percentile(v, array(0.25))')[0].alias('%25'), F.expr('percentile(v, array(0.5))')[0].alias('%50'), F.expr('percentile(v, array(0.75))')[0].alias('%75'), F.max('v').alias('max')).show()
Output:
+---+-----+----+------------------+---+----+---+----+----+
| id|count|mean| std|min| %25|%50| %75| max|
+---+-----+----+------------------+---+----+---+----+----+
| 1| 2| 1.5|0.7071067811865476|1.0|1.25|1.5|1.75| 2.0|
| 2| 3| 6.0| 3.605551275463989|3.0| 4.0|5.0| 7.5|10.0|
+---+-----+----+------------------+---+----+---+----+----+
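Since the question mentions growing this to other columns, one way to avoid writing the agg list by hand is to build it programmatically (a sketch; the describe_cols helper name is hypothetical, and it uses the scalar form of percentile rather than the array form shown above):

from pyspark.sql import functions as F

def describe_cols(df, group_col, cols):
    # Build the describe-style aggregations for each requested column.
    aggs = []
    for c in cols:
        aggs += [
            F.count(c).alias(f"{c}_count"),
            F.mean(c).alias(f"{c}_mean"),
            F.stddev(c).alias(f"{c}_std"),
            F.min(c).alias(f"{c}_min"),
            F.expr(f"percentile({c}, 0.25)").alias(f"{c}_25%"),
            F.expr(f"percentile({c}, 0.5)").alias(f"{c}_50%"),
            F.expr(f"percentile({c}, 0.75)").alias(f"{c}_75%"),
            F.max(c).alias(f"{c}_max"),
        ]
    return df.groupBy(group_col).agg(*aggs)

describe_cols(df, "id", ["v"]).show()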
Answer 2:
You would run this:
df.groupby("id").describe('v').show()
It's fairly self-explanatory (with the sample DataFrame above, 'v' is the value column to describe).
Source: https://stackoverflow.com/questions/57083814/how-to-apply-the-describe-function-after-grouping-a-pyspark-dataframe