Spark map does not parallelize well

Backend · open · 0 answers · 712 views
北海茫月 · 2020-12-30 10:21

I want to use Spark to do a two-stage job. The pseudo-code looks like this:

# aggregate job
lines = sc.textFile(input_file)
agg_result = lines.aggregate(initialValue, add
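
For reference, here is a minimal runnable sketch of what the two stages might look like, assuming the aggregate sums a per-line value and the second stage is a map that reuses that result. The question only shows the truncated first stage, so the file paths, the len(l) stand-in for parsing a value, and the use of add as both seqOp and combOp are assumptions, not something stated in the original post:

from pyspark import SparkContext

sc = SparkContext(appName="two_stage_job")

input_file = "hdfs:///path/to/input"    # placeholder path, not from the question
output_dir = "hdfs:///path/to/output"   # placeholder path, not from the question

# stage 1: aggregate job -- fold a numeric value from each line into one result
lines = sc.textFile(input_file)
initialValue = 0
add = lambda acc, x: acc + x            # used here as both seqOp and combOp
agg_result = lines.map(lambda l: len(l)).aggregate(initialValue, add, add)

# stage 2: map job that reuses the aggregate result on every line
scaled = lines.map(lambda l: len(l) / float(agg_result))
scaled.saveAsTextFile(output_dir)

The len(l) value is purely a stand-in; any per-line parsing function would slot into both stages the same way.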


        