What is the efficient way to update value inside Spark's RDD?

别那么骄傲 2020-12-29 12:35

I'm writing a graph-related program in Scala with Spark. The dataset has 4 million nodes and 4 million edges (you can treat this as a tree), but f

3 Answers
  • 2020-12-29 12:54

    As functional data structures, RDDs are immutable and an operation on an RDD generates a new RDD.

    Immutability of the structure does not necessarily mean full replication. Persistent data structures are a common functional pattern where operations on immutable structures yield a new structure but previous versions are maintained and often reused.

    GraphX (a 'module' on top of Spark) is a graph API that uses this concept. From the docs:

    Changes to the values or structure of the graph are accomplished by producing a new graph with the desired changes. Note that substantial parts of the original graph (i.e., unaffected structure, attributes, and indices) are reused in the new graph, reducing the cost of this inherently functional data structure.

    It might be a solution for the problem at hand: http://spark.apache.org/docs/1.0.0/graphx-programming-guide.html
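
    For instance, here is a minimal sketch (assuming a spark-shell with a SparkContext sc and GraphX on the classpath; the toy graph and the + 1 update are illustrative only, not your actual data):

    import org.apache.spark.graphx.{Edge, Graph}

    // Illustrative toy graph: three vertices carrying an Int attribute, two edges.
    val vertices = sc.parallelize(Seq((1L, 0), (2L, 0), (3L, 0)))
    val edges    = sc.parallelize(Seq(Edge(1L, 2L, "child"), Edge(1L, 3L, "child")))
    val graph    = Graph(vertices, edges)

    // "Updating" the vertex values means producing a new Graph; the unaffected
    // edge structure and indices are reused rather than copied.
    val updated = graph.mapVertices((id, attr) => attr + 1)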

  • 2020-12-29 13:00

    The MapReduce programming model (and FP) doesn't really support updates of single values. Rather, one is supposed to define a sequence of transformations.

    Now, when you have interdependent values, i.e. you cannot perform your transformation with a simple map but need to aggregate multiple values and update based on that aggregate, you need to either think of a way of grouping those values together and then transforming each group, or define a monoidal operation so that the computation can be distributed and chopped up into substeps.

    Group By Approach

    Now I'll try to be a little more specific for your particular case. You say you have subtrees; is it possible to first map each node to a key that indicates the corresponding subtree? If so, you could do something like this:

    nodes.map(n => (getSubTreeKey(n), n)).groupByKey().map ...
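
    Here is a fuller sketch of that idea (assuming a spark-shell with a SparkContext sc; the Node case class, its subTree field standing in for getSubTreeKey, and the value + 1 update are illustrative assumptions, not from your question):

    case class Node(id: Long, subTree: Long, value: Int)

    // Toy data; in practice this would be your 4-million-node RDD.
    val nodes = sc.parallelize(Seq(
      Node(1, subTree = 1, value = 0),
      Node(2, subTree = 1, value = 0),
      Node(3, subTree = 2, value = 0)))

    val updated = nodes
      .map(n => (n.subTree, n))        // key each node by its subtree
      .groupByKey()                    // gather a whole subtree in one place
      .flatMap { case (_, group) =>    // transform each group as a unit
        group.map(n => n.copy(value = n.value + 1))
      }

    updated.collect().foreach(println)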

    Monoid

    (Strictly speaking, you want a commutative monoid.) It's best to read http://en.wikipedia.org/wiki/Monoid#Commutative_monoid

    For example, + is a monoidal operation: when one wishes to compute the sum of, say, an RDD of Ints, the underlying framework can chop up the data into chunks, perform the sum on each chunk, then sum up the resulting sums (possibly in more than just two steps). If you can find a monoid that will ultimately produce the same results you require from single updates, then you have a way to distribute your processing. E.g.

    nodes.reduce(_ myMonoid _)
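
    As a concrete (hypothetical) illustration, reusing the nodes RDD sketched above: summing a value per subtree with reduceByKey relies on exactly such a commutative monoid, (Int, +, 0), so Spark can compute partial sums within each partition and merge them in any order:

    // (Int, +, 0) is a commutative monoid: partial sums can be computed
    // per partition and then combined across partitions in any order.
    val perSubTreeSum = nodes
      .map(n => (n.subTree, n.value))
      .reduceByKey(_ + _)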

  • 2020-12-29 13:14

    An RDD is a distributed data set: a partition is the unit of RDD storage, and an element is the unit of RDD processing.

    For example, if you read a large file from HDFS as an RDD, then the elements of this RDD are Strings (the lines of that file), and Spark stores this RDD across the cluster by partition. As a Spark user, you only need to care about how to deal with the lines of that file, just as if you were writing a normal program reading a file from the local file system line by line. That's the power of Spark :)

    Anyway, you have no control over which elements are stored in a given partition, so it doesn't make sense to update a particular partition.
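
    A minimal sketch of that point (assuming a spark-shell with a SparkContext sc; the HDFS path and the uppercase transformation are illustrative only): you never touch a partition directly, you describe an element-level transformation and get back a new RDD.

    // "Updating" the lines means producing a new RDD of transformed elements;
    // how they are partitioned across the cluster is Spark's concern.
    val lines = sc.textFile("hdfs:///some/input.txt")   // illustrative path
    val upper = lines.map(_.toUpperCase)                // transforms every line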
