Spark DataFrame reduceByKey-like operation

星月不相逢 asked on 2021-02-08 11:40

I have a Spark dataframe with the following data (I use spark-csv to load the data in):

key,value
1,10
2,12
3,0
1,20

I would like to do a reduceByKey-style aggregation on this DataFrame, i.e. sum the values for each key so that each key appears only once (key 1 should end up with 30 in this example).
        
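For reference, a minimal sketch of how such a file might be loaded with the spark-csv package (Spark 1.x style API; the file name data.csv and the sqlContext variable are assumptions for illustration):

    # Hypothetical load with the spark-csv package (Spark 1.x style)
    df = sqlContext.read.format("com.databricks.spark.csv") \
        .options(header="true", inferSchema="true") \
        .load("data.csv")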
3 Answers
  • 2021-02-08 11:58

    I think user goks's answer is missing a step, and the code does not look tested.

    A .map should be used first to turn the rows into a pair RDD, e.g. .map(lambda row: (row.key, row.value)), before .reduceByKey is called.

    reduceByKey is not available on a plain (single-value) RDD, only on a pair RDD.

    Thanks
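
    A minimal sketch of that correction (assuming the DataFrame df has the key and value columns from the question; the .rdd step is needed on Spark 2.x, where a PySpark DataFrame no longer exposes .map directly):

    # Turn each Row into a (key, value) tuple so reduceByKey can be applied
    pairs = df.select('key', 'value').rdd.map(lambda row: (row['key'], row['value']))

    # Sum the values per key, then go back to a DataFrame
    result = pairs.reduceByKey(lambda a, b: a + b).toDF(['key', 'value'])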

  • 2021-02-08 12:10

    If you don't care about column names, you can use groupBy followed by sum:

    df.groupBy($"key").sum("value")
    

    Otherwise, it is better to replace sum with agg so you can alias the resulting column:

    df.groupBy($"key").agg(sum($"value").alias("value"))
    

    Finally, you can use raw SQL:

    df.registerTempTable("df")
    sqlContext.sql("SELECT key, SUM(value) AS value FROM df GROUP BY key")
    

    See also DataFrame / Dataset groupBy behaviour/optimization
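
    The PySpark equivalent of the agg version would look roughly like this (assuming a DataFrame named df as in the question):

    from pyspark.sql import functions as F

    # Group by key and sum the values; the alias keeps the column named "value"
    df.groupBy('key').agg(F.sum('value').alias('value'))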

  • 2021-02-08 12:18

    How about this? I agree that this still converts to an RDD and then back to a DataFrame.

    df.select('key', 'value').map(lambda x: x).reduceByKey(lambda a, b: a + b).toDF(['key', 'value'])
    