I have a problem running a Spark application on a standalone cluster (I use Spark version 1.1.0). I successfully started the master server with the command:
bash start-master
For the benefit of others running into this problem:
I faced an identical issue caused by a version mismatch between the Spark connector and the Spark version in use: Spark was 1.3.1 while the connector was 1.3.0, and the identical error message appeared:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 0.0 failed 4 times, most recent failure: Lost task 2.3 in stage 0.0
Updating the dependency in SBT solved the problem.
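For reference, the fix amounts to pinning the connector to a version that matches Spark in build.sbt. Here is a minimal sketch; the exact connector artifact (the DataStax spark-cassandra-connector) and the version numbers are assumptions on my part, since the answer above does not name them:

    // build.sbt: a minimal sketch. The connector artifact shown here
    // (DataStax spark-cassandra-connector) is an assumption; the point
    // is that the connector version must track the Spark version.
    scalaVersion := "2.10.4"

    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-core" % "1.3.1" % "provided",
      // keep the connector version in step with Spark to avoid the
      // stage failure shown above
      "com.datastax.spark" %% "spark-cassandra-connector" % "1.3.1"
    )

After changing the version, run `sbt update` (or rebuild the assembly) so the new jar is actually shipped to the executors; a stale assembly on the cluster reproduces the same error.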