Median / quantiles within PySpark groupBy

感情败类 2020-12-04 15:26

I would like to calculate group quantiles on a Spark dataframe (using PySpark). Either an approximate or exact result would be fine. I prefer a solution that I can use within the context of groupBy / agg, so that I can mix it with other PySpark aggregate functions.

5 Answers
  • 2020-12-04 15:52

    Unfortunately, and to the best of my knowledge, it seems that it is not possible to do this with "pure" PySpark commands (the solution by Shaido provides a workaround with SQL), and the reason is very elementary: in contrast with other aggregate functions, such as mean, approxQuantile does not return a Column type, but a list.

    Let's see a quick example with your sample data:
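
    The sample data itself is not shown in the thread, so here is a minimal hypothetical reconstruction, with values chosen so that the outputs below (group means of 2.0 and 5.0) come out right:

    # Hypothetical sample data consistent with the outputs shown below:
    df = spark.createDataFrame(
        [('A', 1.0), ('A', 2.0), ('A', 3.0),
         ('B', 4.0), ('B', 5.0), ('B', 6.0)],
        ['grp', 'val']
    )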

    spark.version
    # u'2.2.0'
    
    import pyspark.sql.functions as func
    from pyspark.sql import DataFrameStatFunctions as statFunc
    
    # aggregate with mean works OK:
    df_grp_mean = df.groupBy('grp').agg(func.mean(df['val']).alias('mean_val'))
    df_grp_mean.show()
    # +---+--------+ 
    # |grp|mean_val|
    # +---+--------+
    # |  B|     5.0|
    # |  A|     2.0|
    # +---+--------+
    
    # try aggregating by median:
    df_grp_med = df.groupBy('grp').agg(statFunc(df).approxQuantile('val', [0.5], 0.1))
    # AssertionError: all exprs should be Column
    
    # mean aggregation is a Column, but median is a list:
    
    type(func.mean(df['val']))
    # pyspark.sql.column.Column
    
    type(statFunc(df).approxQuantile('val', [0.5], 0.1))
    # list
    

    I doubt that a window-based approach will make any difference, since as I said the underlying reason is a very elementary one.

    See also my answer here for some more details.

  • 2020-12-04 15:53

I guess you don't need it anymore, but I'll leave it here for future generations (i.e. me next week, when I forget).

    from pyspark.sql import Window
    import pyspark.sql.functions as F
    
grp_window = Window.partitionBy('grp')
# percentile_approx is a Spark SQL aggregate; F.expr exposes it as a Column:
magic_percentile = F.expr('percentile_approx(val, 0.5)')
    
    df.withColumn('med_val', magic_percentile.over(grp_window))
    

    Or to address exactly your question, this also works:

    df.groupBy('grp').agg(magic_percentile.alias('med_val'))
    

    And as a bonus, you can pass an array of percentiles:

    quantiles = F.expr('percentile_approx(val, array(0.25, 0.5, 0.75))')
    

And you'll get an array column in return.
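
    For example, a short sketch assuming the same grp/val dataframe as in the first answer:

    # The array of probabilities yields a single array column holding
    # the 25th, 50th and 75th percentiles of each group:
    df.groupBy('grp').agg(quantiles.alias('quantiles')).show()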

  • 2020-12-04 16:05

The simplest way to do this with pyspark==2.4.5 is:

from pyspark.sql.functions import expr

# percentile() returns an array even for a single probability, so take [0]:
df \
    .groupby('grp') \
    .agg(expr('percentile(val, array(0.5))')[0].alias('50%')) \
    .show()
    
    

Output:

+---+---+
|grp|50%|
+---+---+
|  B|5.0|
|  A|2.0|
+---+---+
    
  • 2020-12-04 16:16

    Since you have access to percentile_approx, one simple solution would be to use it in a SQL command:

    from pyspark.sql import SQLContext
    sqlContext = SQLContext(sc)
    
    df.registerTempTable("df")
    df2 = sqlContext.sql("select grp, percentile_approx(val, 0.5) as med_val from df group by grp")
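
    Note that on Spark 2.0+ the SparkSession entry point supersedes SQLContext, and registerTempTable is deprecated in favour of createOrReplaceTempView. An equivalent sketch, assuming a SparkSession named spark:

    # Modern equivalent of the snippet above:
    df.createOrReplaceTempView("df")
    df2 = spark.sql("select grp, percentile_approx(val, 0.5) as med_val from df group by grp")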
    
  • 2020-12-04 16:17

A problem with percentile_approx(val, 0.5): if the values are, e.g., [1, 2, 3, 4], it returns 2 as the median, whereas the exact median is 2.5. The UDF below returns 2.5:

import statistics

import pyspark.sql.functions as F
from pyspark.sql.types import DoubleType

# Exact median: collect each group's values into a list and apply
# Python's statistics.median; return None for empty groups.
median_udf = F.udf(lambda x: statistics.median(x) if bool(x) else None, DoubleType())

... .groupBy('something').agg(median_udf(F.collect_list(F.col('value'))).alias('median'))
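
    Bear in mind that collect_list pulls all of a group's values onto a single row, which can be memory-heavy for very large groups. As an alternative, a minimal sketch assuming Spark 3.4+, which added a built-in exact median aggregate:

    # Spark 3.4+ ships an exact median aggregate:
    df.groupBy('grp').agg(F.median('val').alias('med_val'))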
    