I'd like to ask about the specifics of how to use checkpointInterval successfully in Spark. Also, what is meant by this comment in the ALS code: https://github.com/apache/
How can we set the checkpoint directory? Can we use any HDFS-compatible directory for this?
You can use SparkContext.setCheckpointDir. As far as I remember, in local mode both local and DFS paths work just fine, but on a cluster the directory must be an HDFS path.
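A minimal sketch of how that looks (the paths here are placeholders, not prescriptions); the directory must be set before any job that relies on checkpointing runs:

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("checkpoint-demo"))

// Local mode: a plain local path is typically fine.
// sc.setCheckpointDir("/tmp/spark-checkpoints")

// Cluster mode: use a path on a fault-tolerant store such as HDFS,
// since executors on different nodes must all be able to read it.
sc.setCheckpointDir("hdfs:///tmp/spark-checkpoints")
```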
Is using setCheckpointInterval the correct way to enable checkpointing in ALS and avoid StackOverflowError?
It should help. See SPARK-1006.
PS: It seems that in order to actually checkpoint in ALS, the checkpointDir must be set, or checkpointing won't be effective [Ref. here.]
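Putting the two together, a sketch with the RDD-based `org.apache.spark.mllib.recommendation.ALS` (the input path, rank, and interval values are illustrative assumptions):

```scala
import org.apache.spark.mllib.recommendation.{ALS, Rating}

// Must be set first: without a checkpoint directory,
// setCheckpointInterval has no effect.
sc.setCheckpointDir("hdfs:///tmp/spark-checkpoints")

val ratings = sc.textFile("hdfs:///data/ratings.csv").map { line =>
  val Array(user, product, rating) = line.split(",")
  Rating(user.toInt, product.toInt, rating.toDouble)
}

val model = new ALS()
  .setRank(10)
  .setIterations(50)          // many iterations -> long RDD lineage
  .setCheckpointInterval(10)  // truncate the lineage every 10 iterations
  .run(ratings)
```

Checkpointing every N iterations cuts the lineage that would otherwise grow with each iteration, which is what causes the StackOverflowError during serialization of very deep lineage graphs.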