Facing a StackOverflowError caused by the iterative computation over the data in the engine. The exception looks like this:
ERROR org.apache.spark.executor.Executor [Executor task launch worker-0] - Exception in task 0.0 in stage 30.0 (TID 76)
java.lang.StackOverflowError
at java.io.ObjectInputStream$BlockDataInputStream.readByte(ObjectInputStream.java:2774)
at java.io.ObjectInputStream.readHandle(ObjectInputStream.java:1450)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1512)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
Got a solution for the error:
1. Simply reduce the numIterations parameter for the algorithm in the engine.json file of your prediction engine (see the sketch after this step). If that doesn't work, go with the second solution below.
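For reference, here is a minimal sketch of the fragment to change, assuming the standard PredictionIO recommendation-template layout of engine.json; the algorithm name and the other parameter values are illustrative placeholders, not values from the original question:

    {
      "algorithms": [
        {
          "name": "als",
          "params": {
            "rank": 10,
            "numIterations": 10,
            "lambda": 0.01
          }
        }
      ]
    }

Lowering numIterations shortens the lineage built up by the iterative training, which is what produces the deep recursion seen in the stack trace above.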
2. Add checkpointing, which prevents the recursion used by the codebase from creating an overflow. First, create a new directory to store the checkpoints. Then have your SparkContext use that directory for checkpointing. Here is an example in Python:

    sc.setCheckpointDir('checkpoint/')

You may also need to add checkpointing to the ALS as well, but I haven't been able to determine whether that makes a difference. To add a checkpoint there (probably not necessary), just do:
ALS.checkpointInterval = 2
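Putting the two pieces together, here is a minimal self-contained sketch in Python using the MLlib ALS API; the app name, toy ratings, and training parameters are assumptions for illustration, not values from the original engine:

    from pyspark import SparkContext
    from pyspark.mllib.recommendation import ALS, Rating

    # Hypothetical application name, just for this sketch.
    sc = SparkContext(appName="als-checkpoint-demo")

    # Checkpointing truncates the RDD lineage, which is what stops the
    # deeply recursive (de)serialization from overflowing the stack.
    sc.setCheckpointDir('checkpoint/')

    # Optional, per the note above; the original answer is unsure
    # whether this makes a difference.
    ALS.checkpointInterval = 2

    # Toy ratings purely to make the example runnable.
    ratings = sc.parallelize([
        Rating(0, 0, 5.0), Rating(0, 1, 1.0),
        Rating(1, 0, 1.0), Rating(1, 1, 5.0),
    ])

    model = ALS.train(ratings, rank=10, iterations=20)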
Source: https://stackoverflow.com/questions/34133172/predictionio-engine