Get distinct count from an array in each row using PySpark

随声附和 submitted on 2020-04-16 03:31:14

Question


I am looking for the distinct count of the array in each row of a PySpark dataframe.

input:

col1
[1, 1, 1]
[3, 4, 5]
[1, 2, 1, 2]

expected output:

1
3
2

I used the code below, but it gives me the length of the array instead:
output:
3
3
4

Please help me understand how I can achieve this with a PySpark dataframe in Python.

from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

slen = udf(lambda s: len(s), IntegerType())
count = df.withColumn("Count", slen(df.col1))
count.show()

Thanks in advance!

Answer 1:


For Spark 2.4+ you can use array_distinct and then take its size to get the count of distinct values in each array. A UDF will be slow and inefficient for big data; always prefer Spark's built-in functions when they exist.

https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.array_distinct

(welcome to SO)

df.show()

+------------+
|        col1|
+------------+
|   [1, 1, 1]|
|   [3, 4, 5]|
|[1, 2, 1, 2]|
+------------+

import pyspark.sql.functions as F

df.withColumn("count", F.size(F.array_distinct("col1"))).show()

+------------+-----+
|        col1|count|
+------------+-----+
|   [1, 1, 1]|    1|
|   [3, 4, 5]|    3|
|[1, 2, 1, 2]|    2|
+------------+-----+
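For Spark versions before 2.4, where array_distinct is not available, the asker's UDF can be repaired by counting unique elements rather than the raw length. A minimal sketch (the pure-Python function is shown standalone; the UDF wrapping assumes an active SparkSession and a dataframe named df, as in the question):

```python
def distinct_count(arr):
    """Return the number of distinct elements in a list.

    set() drops duplicates, so its length is the distinct count --
    unlike len(arr), which is what the original UDF returned.
    """
    return len(set(arr))

# Plain-Python check against the example rows:
distinct_count([1, 1, 1])     # 1
distinct_count([3, 4, 5])     # 3
distinct_count([1, 2, 1, 2])  # 2

# Wrapped as a PySpark UDF (requires a running SparkSession):
#
# from pyspark.sql.functions import udf
# from pyspark.sql.types import IntegerType
#
# slen = udf(distinct_count, IntegerType())
# df.withColumn("count", slen(df.col1)).show()
```

On 2.4+ the built-in F.size(F.array_distinct(...)) shown above is still the better choice, since it avoids UDF serialization overhead entirely.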


Source: https://stackoverflow.com/questions/60441590/get-distinct-count-from-an-array-of-each-rows-using-pyspark
