Out of memory error when collecting data out of Spark cluster

臣服心动 2021-02-05 11:25

I know there are plenty of questions on SO about out of memory errors on Spark but I haven't found a solution to mine.

I have a simple workflow:

  1. read in O
2 Answers
  • 2021-02-05 11:35

    As mentioned above, "cache" is not an action; see RDD Persistence in the Spark documentation:

    You can mark an RDD to be persisted using the persist() or cache() methods on it. The first time it is computed in an action, it will be kept in memory on the nodes. 
    

    But "collect" is an action, so all of the computation (including the "cache" step) is only started when "collect" is called.

    You run the application in standalone mode, which means the initial data loading and all of the computation are performed in the same memory.

    It is the data loading and the other computation that use most of the memory, not "collect" itself.

    You can check this by replacing "collect" with "count", as in the sketch below.
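
    A minimal sketch of that check, assuming a DataFrame-based job (the app name and input path below are illustrative stand-ins for the asker's workflow): cache() is lazy, count() materializes the data on the executors, and only collect() moves the rows to the driver.

        import org.apache.spark.sql.SparkSession

        val spark = SparkSession.builder()
          .appName("cache-vs-collect")   // illustrative app name
          .getOrCreate()

        // Illustrative input; stand-in for whatever the workflow actually reads.
        val df = spark.read.parquet("/data/input")

        // cache() only marks the DataFrame for persistence; nothing runs yet.
        val cached = df.cache()

        // count() is an action: it triggers the load and fills the executor caches,
        // but only a single Long travels back to the driver.
        println(s"rows: ${cached.count()}")

        // collect() is also an action, but it additionally ships every row to the
        // driver, which is where an out-of-memory error would surface.
        // val rows = cached.collect()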

  • 2021-02-05 11:52

    When you call collect on the dataframe, two things happen:

    1. All of the data has to be written out as the result for the driver.
    2. The driver has to collect that data from all of the nodes and keep it in its memory.

    Answer:

    If you are just looking to load the data into the executors' memory, count() is also an action; it will load the data into the executors' memory, where it can be reused by subsequent operations.

    If you want to extract the data, then try this along with other properties when pulling the data: "--conf spark.driver.maxResultSize=10g" (see the sketch below).
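
    A hedged sketch of the same setting applied when building the SparkSession instead of on the spark-submit command line (session name and input path are illustrative); raising spark.driver.memory alongside it is usually needed so the collected rows actually fit on the driver:

        import org.apache.spark.sql.SparkSession

        // Equivalent to passing --conf spark.driver.maxResultSize=10g to spark-submit.
        val spark = SparkSession.builder()
          .appName("pull-data")                          // illustrative app name
          .config("spark.driver.maxResultSize", "10g")
          .getOrCreate()

        val df = spark.read.parquet("/data/input")       // illustrative input
        val rows = df.collect()                          // now limited by 10g instead of the default 1g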
