Question:
I am using Spark 1.6.1 with Java as the programming language. The following code was working fine with DataFrames:
simpleProf.groupBy(col("col1"), col("col2"))
    .agg(
        sum("CURRENT_MONTH"),
        sum("PREVIOUS_MONTH")
    );
But it does not work with Datasets. Any idea how to do the same with a Dataset in Java/Spark?
Cheers
Answer 1:
(From the asker's clarification:) It does not work in the sense that after the groupBy I get a GroupedDataset object, and when I try to apply the function agg it requires a TypedColumn instead of a Column.
Ah, there was some confusion here because of the merging of Dataset and DataFrame in Spark 2.x, where there is a groupBy that works with relational columns and a groupByKey that works with typed columns. So, given that you are using an explicit Dataset in 1.6, the solution is to typify your columns via the .as method:
sum("CURRENT_MONTH").as[Int]
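The snippet above is Scala syntax. Since the question is about Java, a minimal sketch of the Java equivalent follows, where typifying a Column is done by passing an Encoder to Column.as(...). This assumes the two month columns are long-typed; adjust the Encoder (e.g. Encoders.INT()) to match your schema:

```java
import org.apache.spark.sql.Encoders;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.sum;

// Typify each aggregate column so that GroupedDataset.agg accepts it
// as a TypedColumn rather than a plain Column.
simpleProf.groupBy(col("col1"), col("col2"))
    .agg(
        sum("CURRENT_MONTH").as(Encoders.LONG()),
        sum("PREVIOUS_MONTH").as(Encoders.LONG())
    );
```

The key difference from the DataFrame code is only the .as(Encoders.LONG()) calls, which turn each Column into a TypedColumn so the typed agg overload applies.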
Source: https://stackoverflow.com/questions/44681510/spark-dataset-group-by-and-sum