Sum of array elements depending on value condition in PySpark

Submitted by a 夏天 on 2020-01-28 02:31:14

Question


I have a pyspark dataframe:

id   |   column
------------------------------
1    |  [0.2, 2, 3, 4, 3, 0.5]
------------------------------
2    |  [7, 0.3, 0.3, 8, 2]
------------------------------

I would like to create 3 columns:

  • Column 1: contains the sum of the elements < 2
  • Column 2: contains the sum of the elements > 2
  • Column 3: contains the sum of the elements = 2 (sometimes there are duplicate values, so I sum them). If there are no matching values, I put null.

Expected result:

id   |   column                 |  column<2  |  column>2  |  column=2
-----|--------------------------|------------|------------|----------
1    |  [0.2, 2, 3, 4, 3, 0.5]  |  [0.7]     |  [10]      |  [2]
2    |  [7, 0.3, 0.3, 8, 2]     |  [0.6]     |  [15]      |  [2]

Can you help me, please? Thank you.


Answer 1:


For Spark 2.4+, you can use aggregate and filter higher-order functions like this:

df.withColumn("column<2", expr("aggregate(filter(column, x -> x < 2), 0D, (x, acc) -> acc + x)")) \
  .withColumn("column>2", expr("aggregate(filter(column, x -> x > 2), 0D, (x, acc) -> acc + x)")) \
  .withColumn("column=2", expr("aggregate(filter(column, x -> x == 2), 0D, (x, acc) -> acc + x)")) \
  .show(truncate=False)

Gives:

+---+------------------------------+--------+--------+--------+
|id |column                        |column<2|column>2|column=2|
+---+------------------------------+--------+--------+--------+
|1  |[0.2, 2.0, 3.0, 4.0, 3.0, 0.5]|0.7     |10.0    |2.0     |
|2  |[7.0, 0.3, 0.3, 8.0, 2.0]     |0.6     |15.0    |2.0     |
+---+------------------------------+--------+--------+--------+
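Note that this returns 0.0, not null, when no element matches a condition, while the question asks for null. As a minimal sketch of one way to get null instead (the when/size wrapper is my addition, not part of the original answer, and assumes Spark 2.4+):

from pyspark.sql.functions import expr, when, size

# aggregate only when the filtered array is non-empty; when() with no
# otherwise() yields null for the remaining rows
matches = expr("filter(column, x -> x == 2)")
df.withColumn(
    "column=2",
    when(size(matches) > 0,
         expr("aggregate(filter(column, x -> x == 2), 0D, (acc, x) -> acc + x)"))
).show(truncate=False)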



Answer 2:


Here's a way you can try:

import pyspark.sql.functions as F
from pyspark.sql import Row

# use the RDD API to filter each array by condition, then sum the matches
s = (df
     .select('column')
     .rdd
     .map(lambda x: [[i for i in x.column if i < 2],
                     [i for i in x.column if i > 2],
                     [i for i in x.column if i == 2]])
     .map(lambda x: [Row(round(sum(i), 2)) for i in x])
     .toDF(['col<2', 'col>2', 'col=2']))

# create a dummy id so we can join both data frames
df = df.withColumn('mid', F.monotonically_increasing_id())
s = s.withColumn('mid', F.monotonically_increasing_id())

# simple inner join on the dummy id, then drop it
df = df.join(s, on='mid').drop('mid')
df.show()

+---+--------------------+-----+------+-----+
| id|              column|col<2| col>2|col=2|
+---+--------------------+-----+------+-----+
|  0|[0.2, 2.0, 3.0, 4...|[0.7]|[10.0]|[2.0]|
|  1|[7.0, 0.3, 0.3, 8...|[0.6]|[15.0]|[2.0]|
+---+--------------------+-----+------+-----+
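One caveat worth adding (my note, not part of the original answer): monotonically_increasing_id() guarantees unique, increasing ids within a single DataFrame, but the ids generated for df and s are only guaranteed to line up row-for-row when both DataFrames have identical partitioning. A more robust sketch pairs rows by position with zipWithIndex instead of the dummy-id step above (df_idx, s_idx, and result are hypothetical names):

from pyspark.sql import Row

# zipWithIndex assigns each row a stable positional index; join on that
# index instead of relying on monotonically_increasing_id() to align
df_idx = df.rdd.zipWithIndex().map(lambda r: Row(mid=r[1], **r[0].asDict())).toDF()
s_idx = s.rdd.zipWithIndex().map(lambda r: Row(mid=r[1], **r[0].asDict())).toDF()

result = df_idx.join(s_idx, on='mid').drop('mid')
result.show()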


Source: https://stackoverflow.com/questions/59931770/sum-of-array-elements-depending-on-value-condition-pyspark
