Task not serializable: java.io.NotSerializableException when calling function outside closure only on classes not objects


Getting strange behavior when calling a function outside of a closure:

  • when the function is in an object everything works
  • when the function is in a class I get: Task not serializable: java.io.NotSerializableException
9 Answers
  • 2020-11-22 05:45

    I faced a similar issue, and what I understand from Grega's answer is:

    object NOTworking extends App {
      new testing().doIT
    }

    // adding extends Serializable won't help
    class testing {

      val list = List(1, 2, 3)

      val rddList = Spark.ctx.parallelize(list)

      def doIT = {
        // again calling the function someFunc
        val after = rddList.map(someFunc(_))
        // this will crash here (Spark is lazy, so the failure surfaces at collect)
        after.collect().map(println(_))
      }

      def someFunc(a: Int) = a + 1
    }
    

    Your doIT method is trying to serialize the someFunc(_) method, but since methods are not serializable on their own, it tries to serialize the class testing, which is again not serializable.

    To make your code work, you should define someFunc inside the doIT method. For example:

    def doIT = {
      // define someFunc locally, so the closure does not capture the enclosing class
      def someFunc(a: Int) = a + 1

      val after = rddList.map(someFunc(_))
      after.collect().map(println(_))
    }
    

    And if there are multiple functions involved, then all of those functions should be defined in (or otherwise be available from) that same parent context, as in the sketch below.
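
    For instance, a minimal sketch (reusing rddList from the snippet above; addOne and double are made-up helper names) with both helpers defined locally inside doIT:

    def doIT = {
      // both helpers live inside doIT, so serializing the mapped closure
      // does not drag the enclosing (non-serializable) class along with it
      def addOne(a: Int): Int = a + 1
      def double(a: Int): Int = a * 2

      val after = rddList.map(x => double(addOne(x)))
      after.collect().foreach(println)
    }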

  • 2020-11-22 05:47

    FYI, in Spark 2.4 a lot of you will probably encounter this issue. Kryo serialization has gotten better, but in many cases you cannot use spark.kryo.unsafe=true or the naive Kryo serializer.

    For a quick fix, try changing the following in your Spark configuration:

    spark.kryo.unsafe="false"
    

    OR

    spark.serializer="org.apache.spark.serializer.JavaSerializer"
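
    For example, a minimal sketch of setting these programmatically (assuming you build the SparkConf yourself rather than passing the options via spark-submit; the app name is hypothetical):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .setAppName("serialization-quick-fix")
      // disable the unsafe Kryo path
      .set("spark.kryo.unsafe", "false")
      // or fall back to plain Java serialization entirely:
      .set("spark.serializer", "org.apache.spark.serializer.JavaSerializer")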
    

    For custom RDD transformations that I encounter or write myself, I rewrite them to use explicit broadcast variables and the twitter-chill API that is now built in, converting them from rdd.map(row => ...) to rdd.mapPartitions(partition => { ... }) form.

    Example

    Old (not-great) Way

    val sampleMap = Map("index1" -> 1234, "index2" -> 2345)
    val outputRDD = rdd.map(row => {
        val value = sampleMap.get(row._1)
        value
    })
    

    Alternative (better) Way

    import com.twitter.chill.MeatLocker

    val sampleMap = Map("index1" -> 1234, "index2" -> 2345)
    val brdSerSampleMap = spark.sparkContext.broadcast(MeatLocker(sampleMap))

    rdd.mapPartitions(partition => {
        // unwrap the Kryo-boxed map once per partition
        val deSerSampleMap = brdSerSampleMap.value.get
        partition.map(row => {
            val value = deSerSampleMap.get(row._1)
            value
        })
    })
    

    This new way will only look up the broadcast variable once per partition, which is better. Note that you will still need to use Java serialization for any classes you do not register with Kryo.
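
    If you want to stay on Kryo, a minimal sketch of registering your classes (MyRow and MyLookup are hypothetical placeholders for your own types):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      // classes that travel inside closures or shuffles should be registered
      .registerKryoClasses(Array(classOf[MyRow], classOf[MyLookup]))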

  • 2020-11-22 05:49

    Here is a complete talk fully explaining the problem, which proposes a great paradigm-shifting way to avoid these serialization problems: https://github.com/samthebest/dump/blob/master/sams-scala-tutorial/serialization-exceptions-and-memory-leaks-no-ws.md

    The top-voted answer basically suggests throwing away an entire language feature - no longer using methods and only using functions. Indeed, in functional programming, methods in classes should be avoided, but turning them into functions doesn't solve the design issue here (see the link above).

    As a quick fix in this particular situation, you could just use the @transient annotation to tell it not to try to serialise the offending value (here, Spark.ctx is a custom class, not Spark's, following the OP's naming):

    @transient
    val rddList = Spark.ctx.parallelize(list)
    

    You can also restructure the code so that rddList lives somewhere else, but that is also nasty.
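
    A related pattern worth mentioning (not from this answer, just a common workaround) is @transient lazy val: the field is skipped during serialization and rebuilt lazily on each executor, which suits non-serializable helpers. A minimal sketch, with HeavyParser as a made-up placeholder for such a helper:

    class Pipeline extends Serializable {
      // not serialized with the closure; re-created lazily on each executor
      @transient lazy val parser = new HeavyParser()

      def run(rdd: org.apache.spark.rdd.RDD[String]) =
        rdd.map(line => parser.parse(line))
    }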

    The Future is Probably Spores

    In the future, Scala may include these things called "spores" that should allow us fine-grained control over what does and does not get pulled in by a closure. Furthermore, this should turn mistakes of accidentally pulling in non-serializable types (or any unwanted values) into compile errors, rather than what we have now, which is horrible runtime exceptions / memory leaks.

    http://docs.scala-lang.org/sips/pending/spores.html

    A tip on Kryo serialization

    When using Kryo, make registration mandatory; this will mean you get errors instead of memory leaks:

    "Finally, I know that kryo has kryo.setRegistrationOptional(true) but I am having a very difficult time trying to figure out how to use it. When this option is turned on, kryo still seems to throw exceptions if I haven't registered classes."

    Strategy for registering classes with kryo

    Of course, this only gives you type-level control, not value-level control.
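
    In current Spark versions, the way to make registration mandatory is the spark.kryo.registrationRequired setting; a minimal sketch (MyRecord is a hypothetical class of yours):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      // throw an exception when an unregistered class is serialized
      .set("spark.kryo.registrationRequired", "true")
      .registerKryoClasses(Array(classOf[MyRecord]))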

    ... more ideas to come.

  • 2020-11-22 05:49

    I solved this problem using a different approach. You simply need to serialize the objects before passing them through the closure, and de-serialize them afterwards. This approach just works, even if your classes aren't Serializable, because it uses Kryo behind the scenes. All you need is some curry. ;)

    Here's an example of how I did it:

    def genMapper(kryoWrapper: KryoSerializationWrapper[(Foo => Bar)])
                 (foo: Foo): Bar = {
        kryoWrapper.value.apply(foo)
    }

    val mapper = genMapper(KryoSerializationWrapper(new Blah(abc))) _
    rdd.flatMap(mapper).collectAsMap()

    // note: this must be a class, since an object cannot take constructor parameters
    class Blah(abc: ABC) extends (Foo => Bar) {
        def apply(foo: Foo): Bar = {
            // this is the real function
            ???
        }
    }
    

    Feel free to make Blah as complicated as you want: a class, a companion object, nested classes, references to multiple 3rd-party libs.

    KryoSerializationWrapper refers to: https://github.com/amplab/shark/blob/master/src/main/scala/shark/execution/serialization/KryoSerializationWrapper.scala

  • 2020-11-22 05:50

    RDDs extend the Serialisable interface, so this is not what's causing your task to fail. Now, this doesn't mean that you can serialise an RDD with Spark and avoid NotSerializableException.

    Spark is a distributed computing engine and its main abstraction is a resilient distributed dataset (RDD), which can be viewed as a distributed collection. Basically, RDD's elements are partitioned across the nodes of the cluster, but Spark abstracts this away from the user, letting the user interact with the RDD (collection) as if it were a local one.

    Not to get into too many details, but when you run different transformations on an RDD (map, flatMap, filter and others), your transformation code (closure) is:

    1. serialized on the driver node,
    2. shipped to the appropriate nodes in the cluster,
    3. deserialized,
    4. and finally executed on the nodes

    You can of course run this locally (as in your example), but all those phases (apart from shipping over the network) still occur. [This lets you catch any bugs even before deploying to production.]

    What happens in your second case is that you are calling a method, defined in class testing from inside the map function. Spark sees that and since methods cannot be serialized on their own, Spark tries to serialize the whole testing class, so that the code will still work when executed in another JVM. You have two possibilities:

    Either you make class testing serializable, so the whole class can be serialized by Spark:

    import org.apache.spark.{SparkContext,SparkConf}
    
    object Spark {
      val ctx = new SparkContext(new SparkConf().setAppName("test").setMaster("local[*]"))
    }
    
    object NOTworking extends App {
      new Test().doIT
    }
    
    class Test extends java.io.Serializable {
      val rddList = Spark.ctx.parallelize(List(1,2,3))
    
      def doIT() =  {
        val after = rddList.map(someFunc)
        after.collect().foreach(println)
      }
    
      def someFunc(a: Int) = a + 1
    }
    

    or you make someFunc a function instead of a method (functions are objects in Scala), so that Spark will be able to serialize it:

    import org.apache.spark.{SparkContext,SparkConf}
    
    object Spark {
      val ctx = new SparkContext(new SparkConf().setAppName("test").setMaster("local[*]"))
    }
    
    object NOTworking extends App {
      new Test().doIT
    }
    
    class Test {
      val rddList = Spark.ctx.parallelize(List(1,2,3))
    
      def doIT() =  {
        val after = rddList.map(someFunc)
        after.collect().foreach(println)
      }
    
      val someFunc = (a: Int) => a + 1
    }
    

    A similar, but not the same, problem with class serialization may also be of interest to you; you can read about it in this Spark Summit 2013 presentation.

    As a side note, you can rewrite rddList.map(someFunc(_)) as rddList.map(someFunc); they are exactly the same. Usually, the second form is preferred as it's less verbose and cleaner to read.

    EDIT (2015-03-15): SPARK-5307 introduced SerializationDebugger, and Spark 1.3.0 is the first version to use it. It adds the serialization path to a NotSerializableException. When a NotSerializableException is encountered, the debugger visits the object graph to find the path towards the object that cannot be serialized, and constructs information to help the user find the object.

    In OP's case, this is what gets printed to stdout:

    Serialization stack:
        - object not serializable (class: testing, value: testing@2dfe2f00)
        - field (class: testing$$anonfun$1, name: $outer, type: class testing)
        - object (class testing$$anonfun$1, <function1>)
    
  • 2020-11-22 05:52

    Grega's answer is great at explaining why the original code does not work and gives two ways to fix the issue. However, this solution is not very flexible; consider the case where your closure includes a method call on a non-Serializable class that you have no control over. You can neither add the Serializable tag to this class nor change the underlying implementation to change the method into a function.

    Nilesh presents a great workaround for this, but the solution can be made both more concise and general:

    def genMapper[A, B](f: A => B): A => B = {
      val locker = com.twitter.chill.MeatLocker(f)
      x => locker.get.apply(x)
    }
    

    This function-serializer can then be used to automatically wrap closures and method calls:

    rdd map genMapper(someFunc)
    

    This technique also has the benefit of not requiring the additional Shark dependencies in order to access KryoSerializationWrapper, since Twitter's Chill is already pulled in by core Spark.
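
    For example, a minimal sketch of wrapping a method on a class you cannot make Serializable (LegacyScorer and its score method are made-up stand-ins, and rdd is assumed to be an RDD[Int]):

    // a 3rd-party-style class that is not Serializable and cannot be changed
    class LegacyScorer(offset: Int) {
      def score(x: Int): Int = x + offset
    }

    val scorer = new LegacyScorer(10)

    // genMapper boxes the captured scorer with MeatLocker (Kryo), so the
    // resulting closure only needs Java serialization for the small wrapper
    val scored = rdd.map(genMapper(scorer.score _))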
