Question:
I am trying to achieve this in PySpark by building a SQL query. The goal is to combine multiple rows into a single row. Example: I want to convert this
+-----+----+----+-----+
| col1|col2|col3| col4|
+-----+----+----+-----+
|    x|   y|   z|13::1|
|    x|   y|   z|10::2|
+-----+----+----+-----+
To
+-----+----+----+-----------+
| col1|col2|col3|       col4|
+-----+----+----+-----------+
|    x|   y|   z|13::1;10::2|
+-----+----+----+-----------+
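For reference, a minimal sketch of how the sample data above could be built in PySpark; the session setup and DataFrame construction are assumptions for illustration, not part of the original question:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Build a DataFrame matching the input table above.
df = spark.createDataFrame(
    [("x", "y", "z", "13::1"), ("x", "y", "z", "10::2")],
    ["col1", "col2", "col3", "col4"],
)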
Answer 1:
What you're looking for is the Spark SQL version of this answer, which is the following:
query = """
select col1,
col2,
col3,
concat_ws(';', collect_list(col4)) as col4
from some_table
group by col1,
col2,
col3
"""
spark.sql(query).show()
#+----+----+----+-----------+
#|col1|col2|col3| col4|
#+----+----+----+-----------+
#| x| y| z|13::1;10::2|
#+----+----+----+-----------+
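The query above assumes the data is registered as a temporary view named some_table. A minimal sketch of that step, continuing from the df built in the question:

# Register the DataFrame so spark.sql can reference it as "some_table".
df.createOrReplaceTempView("some_table")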
But be aware that since Spark is distributed, collect_list is not guaranteed to preserve any particular row order unless you enforce one explicitly; a sketch of one common workaround follows the links below.
See more:
- collect_list by preserving order based on another variable
- Does collect_list() maintain relative ordering of rows?
- Spark DataFrame: does groupBy after orderBy maintain that order?
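The workaround discussed in those links is to collect structs that carry an ordering key, sort them, and only then concatenate. A sketch in the same Spark SQL style, assuming a hypothetical seq column that defines the desired order (it is not in the original data) and Spark 2.4+ for array_sort and transform:

query = """
select col1,
       col2,
       col3,
       concat_ws(';',
                 transform(array_sort(collect_list(struct(seq, col4))),
                           x -> x.col4)) as col4
from some_table
group by col1, col2, col3
"""
spark.sql(query).show()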
Answer 2:
Expanding upon the suggestion made by @Barmar in a comment, you can run a SQL query like this:
SELECT col1, col2, col3, GROUP_CONCAT(col4 SEPARATOR ';') AS col4
FROM your_table
GROUP BY col1, col2, col3

Note that GROUP_CONCAT is MySQL syntax (the SEPARATOR clause sets the delimiter; the default is a comma) and is not available in Spark SQL, where concat_ws with collect_list, as in Answer 1, is the equivalent.
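Since the question is about PySpark, the same aggregation can also be written with the DataFrame API instead of raw SQL — a sketch, assuming df holds the sample data built in the question:

from pyspark.sql import functions as F

# Group on the key columns, then join the collected col4 values with ";".
result = (
    df.groupBy("col1", "col2", "col3")
      .agg(F.concat_ws(";", F.collect_list("col4")).alias("col4"))
)
result.show()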
Source: https://stackoverflow.com/questions/56026590/combine-multiple-rows-into-a-single-row