Grouping pyspark dataframe by intersection [duplicate]

Submitted by 亡梦爱人 on 2020-01-30 10:59:52

Question


I need to group a PySpark dataframe by the intersection of the arrays in a column. For example, from a dataframe like this:

v1 | [1, 2, 3]
v2 | [4, 5]
v3 | [1, 7]

the result should be:

[v1, v3] | [1, 2, 3, 7]
[v2] | [4, 5]

The 1st and 3rd rows are grouped together because they have the value 1 in common.

Is there something like a group-by on array intersection?

Thank you in advance for any ideas and suggestions on how to solve this.


Answer 1:


from pyspark.sql import functions as F

df = spark.createDataFrame([["v1", [1, 2, 3]], ["v2", [4, 5]], ["v3", [1, 7]]],
                           ["id", "arr"])

# For every array element, collect the set of ids whose array contains it.
df1 = (df.select("*", F.explode("arr").alias("explode_arr"))
         .groupBy("explode_arr").agg(F.collect_set("id").alias("ids")))

# Join each exploded row back to its id-set, then merge and deduplicate the
# arrays of every group of ids that share an element.
df2 = (df.select("*", F.explode("arr").alias("explode_arr"))
         .join(df1, ["explode_arr"], "inner")
         .groupBy("ids").agg(F.collect_set("arr").alias("array_set"))
         .select("ids", F.array_distinct(F.expr("flatten(array_set)"))
                         .alias("intersection_arrays")))

# Singleton id-sets for every id that already belongs to a larger group.
df3 = (df2.where(F.size("ids") > 1)
          .select(F.explode("ids").alias("ids"))
          .select(F.array("ids").alias("ids")))

# Drop those redundant singleton groups (a left outer join plus a null
# filter acts as a left anti join).
df4 = (df2.join(df3.withColumn("flag", F.lit(1)), ["ids"], "left_outer")
          .where(F.col("flag").isNull()).drop("flag"))

df4.show()

+--------+-------------------+
|     ids|intersection_arrays|
+--------+-------------------+
|    [v2]|             [4, 5]|
|[v3, v1]|       [1, 7, 2, 3]|
+--------+-------------------+ 
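One caveat worth noting (an observation, not part of the original answer): this approach merges rows only when they share an element directly, so transitively chained overlaps such as [1, 2], [2, 3], [3, 4] can end up split across more than one output group. Grouping by intersection is really a connected-components problem, and a graph approach handles such chains. Here is a minimal sketch, assuming the GraphFrames package is installed and reusing df from above (the checkpoint path is just an example):

from graphframes import GraphFrame
from pyspark.sql import functions as F

# Build an edge between every pair of ids whose arrays share an element.
exploded = df.select("id", F.explode("arr").alias("elem"))
edges = (exploded.alias("a").join(exploded.alias("b"), "elem")
         .select(F.col("a.id").alias("src"), F.col("b.id").alias("dst"))
         .where("src < dst").distinct())

# connectedComponents() requires a checkpoint directory (example path).
spark.sparkContext.setCheckpointDir("/tmp/spark-checkpoint")
components = GraphFrame(df.select("id"), edges).connectedComponents()

# One row per component: its ids and the union of their arrays.
result = (df.join(components, "id").groupBy("component")
          .agg(F.collect_set("id").alias("ids"),
               F.array_distinct(F.flatten(F.collect_set("arr"))).alias("elems")))
result.show()

On the example data this yields the same two groups; its advantage only shows up once overlaps chain across three or more rows.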


Source: https://stackoverflow.com/questions/56726243/grouping-pyspark-dataframe-by-intersection
