While running my Spark program in a Jupyter notebook, I got the error "Job cancelled because SparkContext was shut down". I am using Spark without Hadoop. The same prog
This problem is solved now. I had to create a checkpoint directory, because the number of training iterations was more than 20.
The code for creating the checkpoint directory is (note that setCheckpointDir is an instance method, so it is called on an existing SparkContext, here named sc):

sc.setCheckpointDir("path to directory")