Question
I have this dataset in Spark:
val sales = Seq(
  ("Warsaw", 2016, "facebook", "share", 100),
  ("Warsaw", 2017, "facebook", "like", 200),
  ("Boston", 2015, "twitter", "share", 50),
  ("Boston", 2016, "facebook", "share", 150),
  ("Toronto", 2017, "twitter", "like", 50)
).toDF("city", "year", "media", "action", "amount")
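(toDF and the $"..." column syntax require a SparkSession with its implicits imported; in spark-shell that is already done for you. Outside the shell, a minimal setup might look like the sketch below, with a purely illustrative app name.)
import org.apache.spark.sql.SparkSession

// Minimal local session, assuming you are not in spark-shell;
// the app name is illustrative only.
val spark = SparkSession.builder()
  .appName("GroupByExample")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._  // enables Seq(...).toDF and the $"col" syntax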
I can now group this by city and media like this:
val groupByCityAndMedia = sales
  .groupBy("city", "media")
  .count()
groupByCityAndMedia.show()
+-------+--------+-----+
| city| media|count|
+-------+--------+-----+
| Boston|facebook| 1|
| Boston| twitter| 1|
|Toronto| twitter| 1|
| Warsaw|facebook| 2|
+-------+--------+-----+
But how can I combine media and action together into one column, so that the expected output is:
+-------+-----------+-----+
|   city|mediaAction|count|
+-------+-----------+-----+
| Boston|   facebook|    1|
| Boston|      share|    2|
| Boston|    twitter|    1|
|Toronto|    twitter|    1|
|Toronto|       like|    1|
| Warsaw|   facebook|    2|
| Warsaw|      share|    1|
| Warsaw|       like|    1|
+-------+-----------+-----+
Answer 1:
Combine the media and action columns into a single array column, explode it, then do a groupBy with count:
import org.apache.spark.sql.functions.{array, explode}  // column functions used below

sales.select(
  $"city", explode(array($"media", $"action")).as("mediaAction")
).groupBy("city", "mediaAction").count().show()
+-------+-----------+-----+
| city|mediaAction|count|
+-------+-----------+-----+
| Boston| share| 2|
| Boston| facebook| 1|
| Warsaw| share| 1|
| Boston| twitter| 1|
| Warsaw| like| 1|
|Toronto| twitter| 1|
|Toronto| like| 1|
| Warsaw| facebook| 2|
+-------+-----------+-----+
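Note that groupBy makes no guarantee about row order, which is why the result above comes back shuffled relative to the expected output. To display it sorted by city, an orderBy can be chained in; a minimal sketch:
sales.select(
  $"city", explode(array($"media", $"action")).as("mediaAction")
).groupBy("city", "mediaAction")
  .count()
  .orderBy("city", "mediaAction")  // sort for a deterministic display order
  .show()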
Or, assuming media and action don't intersect (the two columns share no common values):
sales.groupBy("city", "media").count().union(
  sales.groupBy("city", "action").count()
).show()
+-------+--------+-----+
| city| media|count|
+-------+--------+-----+
| Boston|facebook| 1|
| Boston| twitter| 1|
|Toronto| twitter| 1|
| Warsaw|facebook| 2|
| Boston| share| 2|
| Warsaw| share| 1|
| Warsaw| like| 1|
|Toronto| like| 1|
+-------+--------+-----+
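DataFrame union keeps duplicates (it behaves like SQL UNION ALL), so if a value ever appeared in both media and action, the result would contain two separate rows for the same (city, value) pair with conflicting counts. A quick sanity check of the no-overlap assumption, as a minimal sketch:
// 0 means media and action share no values, so the union approach is safe
val overlap = sales.select($"media".as("v"))
  .intersect(sales.select($"action".as("v")))
  .count()
assert(overlap == 0, s"media and action share $overlap value(s)")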
Source: https://stackoverflow.com/questions/50146145/apache-spark-group-by-combining-types-and-sub-types