I have a Spark dataframe with the following data (I use spark-csv to load the data in):
key,value
1,10
2,12
3,0
1,20
How can I group by key and sum the value column, so that key 1 ends up with 30?
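For reference, the load step might look roughly like this (a sketch; the file name and the sqlContext variable are assumptions):

df = sqlContext.read.format('com.databricks.spark.csv') \
    .options(header='true', inferSchema='true') \
    .load('data.csv')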
I think user goks missed a step in the code, and it isn't tested code.
.map should have been used to convert the RDD to a pairRDD, e.g. .map(lambda x: (x, 1)).reduceByKey(...). reduceByKey is not available on a single-value RDD or a regular RDD, only on a pairRDD.
Thanks
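To illustrate the comment, a minimal PySpark sketch (the sample RDD and the sc SparkContext here are assumptions, not goks' original code):

# a regular (single-value) RDD; reduceByKey cannot be applied to it as-is
words = sc.parallelize(['a', 'b', 'a'])
# .map(lambda x: (x, 1)) turns it into a pair RDD of (key, value) tuples
pairs = words.map(lambda x: (x, 1))
# reduceByKey now works because every element is a (key, value) pair
counts = pairs.reduceByKey(lambda a, b: a + b)
print(counts.collect())  # e.g. [('a', 2), ('b', 1)] -- order may vary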
If you don't care about column names you can use groupBy followed by sum:
df.groupBy($"key").sum("value")
otherwise it is better to replace sum with agg:
import org.apache.spark.sql.functions.sum
df.groupBy($"key").agg(sum($"value").alias("value"))
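If you are working in PySpark rather than Scala, the equivalents would be along these lines (a sketch, assuming the question's DataFrame is named df):

from pyspark.sql import functions as F

# without caring about the aggregated column's name (it becomes sum(value))
df.groupBy('key').sum('value')

# with an explicit alias on the aggregated column
df.groupBy('key').agg(F.sum('value').alias('value'))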
Finally you can use raw SQL:
df.registerTempTable("df")
sqlContext.sql("SELECT key, SUM(value) AS value FROM df GROUP BY key")
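On Spark 2.0+ the same raw-SQL approach uses the renamed entry points, roughly (a sketch):

# registerTempTable was deprecated in favour of createOrReplaceTempView,
# and the SparkSession (spark) replaces sqlContext
df.createOrReplaceTempView("df")
spark.sql("SELECT key, SUM(value) AS value FROM df GROUP BY key")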
See also DataFrame / Dataset groupBy behaviour/optimization
How about this? I agree this still converts to an RDD and then back to a DataFrame.
df.select('key', 'value').rdd.map(lambda r: (r[0], r[1])).reduceByKey(lambda a, b: a + b).toDF(['key', 'value'])
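Whichever variant you pick, the expected result on the sample data from the question is the same; e.g. with the plain groupBy/sum version (a sketch, row order may vary):

df.groupBy('key').sum('value').show()
# +---+----------+
# |key|sum(value)|
# +---+----------+
# |  1|        30|
# |  2|        12|
# |  3|         0|
# +---+----------+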