Spark DataFrame: Computing row-wise mean (or any aggregate operation)

逝去的感伤 2020-11-27 06:44

I have a Spark DataFrame loaded up in memory, and I want to take the mean (or any aggregate operation) over the columns. How would I do that? (In numpy, this is …)
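
For concreteness, the answers below work against a DataFrame whose first column is an identifier followed by numeric value columns. A minimal sketch of such a frame (the column names US/UK/CAN are taken from the answers below; the rows are made up):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Toy DataFrame in the shape the answers assume: an id column followed by numeric columns
    df = spark.createDataFrame(
        [(1, 10.0, 12.0, 14.0), (2, 20.0, 22.0, 24.0)],
        ["id", "US", "UK", "CAN"])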

2 Answers
  • 2020-11-27 07:08

    All you need here is standard SQL like this:

    SELECT (US + UK + CAN) / 3 AS mean FROM df
    

    which can be used directly with SQLContext.sql (a minimal temp-view sketch follows the DSL version below) or expressed using the DSL:

    df.select(((col("UK") + col("US") + col("CAN")) / lit(3)).alias("mean"))
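
    If you go the raw SQL route, a minimal sketch using a temporary view (this assumes a Spark 2.x+ SparkSession; with the older SQLContext the equivalent calls are registerTempTable and sqlContext.sql):

    # Register the DataFrame under the name "df" so the query above can refer to it
    df.createOrReplaceTempView("df")
    spark.sql("SELECT (US + UK + CAN) / 3 AS mean FROM df")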
    

    If you have a larger number of columns you can generate the expression as follows:

    from functools import reduce
    from operator import add
    from pyspark.sql.functions import col, lit
    
    n = lit(len(df.columns) - 1.0)
    rowMean = (reduce(add, (col(x) for x in df.columns[1:])) / n).alias("mean")
    
    df.select(rowMean)
    

    or

    rowMean = (sum(col(x) for x in df.columns[1:]) / n).alias("mean")
    df.select(rowMean)
    

    Finally, its equivalent in Scala:

    import org.apache.spark.sql.functions.col

    df.select(df.columns
      .drop(1)
      .map(col)
      .reduce(_ + _)
      .divide(df.columns.size - 1)
      .alias("mean"))
    

    In a more complex scenario you can combine columns using the array function and use a UDF to compute statistics:

    import numpy as np
    from pyspark.sql.functions import array, col, udf
    from pyspark.sql.types import FloatType

    combined = array(*(col(x) for x in df.columns[1:]))
    median_udf = udf(lambda xs: float(np.median(xs)), FloatType())
    
    df.select(median_udf(combined).alias("median"))
    

    The same operation expressed using the Scala API:

    import org.apache.spark.sql.functions.{array, col, udf}
    import org.apache.spark.sql.types.DoubleType

    val combined = array(df.columns.drop(1).map(col).map(_.cast(DoubleType)): _*)
    val median_udf = udf((xs: Seq[Double]) =>
        breeze.stats.DescriptiveStats.percentile(xs, 0.5))
    
    df.select(median_udf(combined).alias("median"))
    

    Since Spark 2.4, an alternative approach is to combine the values into an array and apply the aggregate expression. See for example Spark Scala row-wise average by handling null.
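
    A minimal sketch of that 2.4+ approach in PySpark, using the aggregate higher-order function via selectExpr (assuming the same id-plus-numeric-columns layout as above; null handling is left out):

    from pyspark.sql.functions import array, col

    n = len(df.columns) - 1
    df.withColumn("vals", array(*(col(c) for c in df.columns[1:]))).selectExpr(
        "aggregate(vals, 0D, (acc, x) -> acc + x) / {} AS mean".format(n))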

  • 2020-11-27 07:32

    In Scala, something like this would do it:

    import spark.implicits._  // provides the Encoder for the (id, mean) tuples produced by map

    val cols = Seq("US", "UK", "CAN")
    df.map(r => (r.getAs[Int]("id"), r.getValuesMap[Double](cols).values.fold(0.0)(_ + _) / cols.length)).toDF
    