PySpark Evaluation

2020-12-03 16:18

I am trying the following code, which adds a number to every row in an RDD and should return a list of RDDs, using PySpark. Instead of four RDDs with four different offsets, every resulting RDD appears to use the last value of i:

from pyspark.context import SparkContext

sc = SparkContext()                 # or use the sc provided by the pyspark shell
data = sc.parallelize(["1", "2", "3"])
splits = [data.map(lambda p: int(p) + i) for i in range(4)]

2 Answers
  • 2020-12-03 16:31

    This happens because the lambdas capture the variable i by reference, not by value. It has nothing to do with Spark; it is the standard Python closure pitfall, as the short plain-Python demo below shows.

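    For example, the same thing happens in plain Python without Spark (a minimal demo, not part of the original answer):

    # All four lambdas close over the same variable i; by the time they
    # are called the loop has finished, so each of them sees i == 3.
    fns = [lambda x: int(x) + i for i in range(4)]
    print([f("1") for f in fns])  # [4, 4, 4, 4]
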
    You can try this:

    # Calling the outer lambda immediately binds the current value of i to y,
    # so each inner lambda keeps its own offset.
    a = [(lambda y: (lambda x: y + int(x)))(i) for i in range(4)]
    splits = [data.map(a[x]) for x in range(4)]
    

    or, written inline as a single expression:

    splits = [
        data.map([(lambda y: (lambda x: y + int(x)))(i) for i in range(4)][x])
        for x in range(4)
    ]
    
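    A more compact way to get the same early binding (not part of the original answer) is a default argument, which is evaluated once when the lambda is defined:

    # i=i copies the current loop value into the lambda's own default argument.
    splits = [data.map(lambda x, i=i: int(x) + i) for i in range(4)]
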
  • 2020-12-03 16:42

    It happens because of Python's late binding and is not (Py)Spark specific. i is looked up when lambda p: int(p) + i is used, not when it is defined. Usually that means when the function is called, but in this particular context it is when the function is serialized to be sent to the workers.

    You can do for example something like this:

    def f(i):
        # f returns a closure in which the current value of i is already bound.
        def _f(x):
            try:
                return int(x) + i
            except ValueError:
                return None
        return _f
    
    data = sc.parallelize(["1", "2", "3"])
    splits = [data.map(f(i)) for i in range(4)]
    [rdd.collect() for rdd in splits]
    ## [[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6]]
    
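    An equivalent sketch (not from the original answer) freezes i with functools.partial instead of a nested function, assuming the same data RDD as above:

    from functools import partial

    def add_offset(i, x):
        # partial(add_offset, i) binds the current i as the first argument,
        # leaving a one-argument function for map.
        return int(x) + i

    splits = [data.map(partial(add_offset, i)) for i in range(4)]
    [rdd.collect() for rdd in splits]
    ## [[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6]]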