I know there are plenty of questions on SO about out of memory errors on Spark but I haven't found a solution to mine.
I have a simple workflow:
As mentioned above, "cache" is not an action; check RDD Persistence:
You can mark an RDD to be persisted using the persist() or cache() methods on it. The first time it is computed in an action, it will be kept in memory on the nodes.
But "collect" is an action, and all computations (including "cache") will be started when "collect" is called.
You run the application in standalone mode, which means that the initial data loading and all computations are performed in the same memory.
Data loading and the other computations use most of the memory, not "collect".
You can check this by replacing "collect" with "count".
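A minimal sketch of that check, assuming a hypothetical input path (substitute your own source): cache() alone computes nothing, the first action runs the whole lineage, and count() avoids copying the rows back to the driver.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("cache-collect-demo")
  .getOrCreate()

// Hypothetical input; replace with your real data source.
val data = spark.read.textFile("hdfs:///some/large/input")

// cache() is lazy: it only marks the dataset for persistence.
val cached = data.cache()

// collect() is an action: it runs the whole lineage (read + cache)
// and then copies every row to the driver.
// val rows = cached.collect()

// count() runs the same computation but returns only a single Long,
// so if this succeeds where collect() fails, the problem is bringing
// the result to the driver, not the computation itself.
val n = cached.count()
println(s"row count: $n")
```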
When you call collect on the dataframe, two things happen:
Answer:
If you are looking to just load the data into the memory of the executors, count() is also an action that will load the data into the executors' memory, where it can be used by other processes.
If you want to extract the data to the driver, then try this along with other properties when pulling the data: "--conf spark.driver.maxResultSize=10g".
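A short sketch of how that property can also be set when building the session instead of on the spark-submit command line; the 10g value is just the figure quoted above, so tune it to your data, and keep in mind the driver heap (spark.driver.memory) still has to be large enough to hold whatever collect() returns.

```scala
import org.apache.spark.sql.SparkSession

// spark.driver.maxResultSize caps the total size of results that an
// action such as collect() may send back to the driver (default 1g,
// 0 means unlimited). Raising it only helps if the driver has the
// memory to hold the result.
val spark = SparkSession.builder()
  .appName("collect-with-larger-result-limit")
  .config("spark.driver.maxResultSize", "10g")
  .getOrCreate()
```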