Apache Spark: When not to use mapPartitions and foreachPartition?

Asked by 无人共我 on 2020-12-07 06:00

I know that when we want to initialize some resource once per partition, instead of once per individual RDD element, we should ideally use mapPartitions and foreachPartition. For …
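For illustration, a minimal sketch of that per-partition initialization pattern (the DummyConnection class and all names below are hypothetical stand-ins for a real external resource such as a database client):

    import org.apache.spark.sql.SparkSession

    // Hypothetical stand-in for an expensive external resource (e.g. a DB client).
    class DummyConnection {
      def lookup(x: Int): Int = x * 2   // pretend this queries an external system
      def close(): Unit = ()
    }

    object PartitionInit {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("partition-init")
          .master("local[*]")
          .getOrCreate()
        val rdd = spark.sparkContext.parallelize(1 to 100, 4)

        val looked = rdd.mapPartitions { iter =>
          val conn = new DummyConnection          // opened once per partition, not per element
          val out = iter.map(conn.lookup).toList  // materialize before closing the resource
          conn.close()
          out.iterator
        }

        looked.collect().foreach(println)
        spark.stop()
      }
    }

With plain map, a connection would be created for every element; with mapPartitions it is created once per partition, which is the usual motivation for these operators.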

1 Answer
  • 2020-12-07 06:05

    When you write Spark jobs that use either mapPartitions or foreachPartition, the function you pass can only transform or iterate over that partition's data, respectively. That function is executed on the executors, so there is no viable way to invoke a cluster-wide operation such as df.reduceByKey from inside it: distributed transformations and actions can only be launched from the driver. Likewise, DataFrames, Datasets, and the SparkSession are accessible only from driver code.
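    A minimal sketch of that driver/executor boundary (object and variable names are illustrative; the commented-out block shows the kind of call that cannot run on an executor):

        import org.apache.spark.sql.SparkSession

        object DriverOnlyOps {
          def main(args: Array[String]): Unit = {
            val spark = SparkSession.builder()
              .appName("driver-only")
              .master("local[*]")
              .getOrCreate()
            val sc = spark.sparkContext
            val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))

            // WRONG: this closure runs on the executors. SparkContext, SparkSession,
            // DataFrames and Datasets exist only on the driver, so referencing them
            // here fails at runtime (Spark does not support nested RDD operations).
            //
            // pairs.foreachPartition { _ =>
            //   sc.parallelize(1 to 10).count()   // would fail on an executor
            // }

            // RIGHT: launch distributed operations such as reduceByKey from the
            // driver; Spark then schedules the actual work across the executors.
            val reduced = pairs.reduceByKey(_ + _)
            reduced.collect().foreach(println)

            spark.stop()
          }
        }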

    A detailed discussion of this issue and possible solutions can be found here.
