Question
The Spark API RelationalGroupedDataset has a function agg:
@scala.annotation.varargs
def agg(expr: Column, exprs: Column*): DataFrame = {
  toDF((expr +: exprs).map {
    case typed: TypedColumn[_, _] =>
      typed.withInputType(df.exprEnc, df.logicalPlan.output).expr
    case c => c.expr
  })
}
Why does it take two separate arguments? Why can't it take just exprs: Column*?
Does anyone have an implicit function that takes a single argument?
Answer 1:
This is to make sure that you specify at least one argument.
Pure varargs cannot enforce that: with a single exprs: Column* parameter, you could call the method with no arguments at all, and the compiler would accept it.
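To illustrate the pattern in isolation, here is a minimal sketch using Int instead of Spark's Column, so it runs without any dependencies (the name sumAll is made up for this demo):

```scala
object AtLeastOneDemo {
  // Splitting the parameters as (first, rest*) — exactly the shape of
  // agg(expr, exprs*) — makes a zero-argument call a compile error,
  // while calls with one or more arguments still work naturally.
  def sumAll(first: Int, rest: Int*): Int = (first +: rest).sum

  def main(args: Array[String]): Unit = {
    println(sumAll(1))       // one argument is fine
    println(sumAll(1, 2, 3)) // prints 6
    // sumAll()              // does not compile: missing argument `first`
  }
}
```

The `first +: rest` prepend inside the body mirrors `(expr +: exprs)` in Spark's agg, recombining the two parameters into one sequence for processing.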
Answer 2:
I tried to imagine how it would look using cats.data.NonEmptyList (requires the cats-core dependency: libraryDependencies += "org.typelevel" %% "cats-core" % "2.1.1"):
import cats.data.NonEmptyList

implicit class RelationalGroupedDatasetOps(
  private val rgd: RelationalGroupedDataset
) {
  def aggOnNonEmpty(nonEmptyColumns: NonEmptyList[Column]): DataFrame =
    rgd.agg(nonEmptyColumns.head, nonEmptyColumns.tail: _*)

  def aggUnsafe(columnList: List[Column]): DataFrame = {
    val nonEmptyColumns = NonEmptyList.fromListUnsafe(columnList)
    rgd.agg(nonEmptyColumns.head, nonEmptyColumns.tail: _*)
  }
}
For Scala 2.12, using the standard library List:
implicit class RelationalGroupedDatasetOps(
  private val rgd: RelationalGroupedDataset
) {
  def aggUnsafe(aggColumns: List[Column]): DataFrame =
    aggColumns match {
      case head :: tail => rgd.agg(head, tail: _*)
      case Nil => throw new IllegalArgumentException(
        "aggColumns parameter can not be empty for aggregation"
      )
    }
}
Usage example:
import Implicits.RelationalGroupedDatasetOps

// some data with columns id, category (int), amount (double)
val df: DataFrame = ???
df.groupBy("id")
  .aggUnsafe(
    // df.columns is an Array[String], so convert to the List[Column]
    // that aggUnsafe expects
    df.columns.filter(c => c != "id").map(c => sum(c)).toList
  ) // returns the aggregated DataFrame
Source: https://stackoverflow.com/questions/62062980/why-spark-scala-api-agg-function-takes-expr-and-exprs-arguments