What kind of variable to select for incrementing node labels in a community detection algorithm

Submitted by 血红的双手 on 2019-12-11 17:15:04

Question


I am working on a community detection algorithm based on propagating labels to nodes, and I have a problem selecting the right type for the Label_counter variable.

There is an algorithm named LPA (label propagation algorithm) which propagates labels to nodes through iterations. Think of a label as a node property. The initial label of each node is its node id, and in each iteration a node updates its label to the most frequent label among its neighbors. The algorithm I am working on is similar to LPA: at first every node has the initial label 0, and then nodes get new labels. As nodes update and get new labels, the Label_counter should, based on some conditions, be incremented by one so that its value can be used as the label for other nodes, e.g. label = 1, label = 2 and so on. For example, take the Zachary karate club dataset: it has 34 nodes and 2 communities. The initial state looks like this:

 (1,0)
 (2,0)
   .
   .
   .
 (34,0)

The first number is the node id and the second one is the label. As nodes get new labels, the Label_counter increments; in the next iterations other nodes get new labels and the Label_counter increments again.

 (1,1)
 (2,1)
 (3,1)
   .
   .
   .
 (33,3)
 (34,3)

Nodes with the same label belong to the same community.
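For reference, the classic LPA update rule described above can be sketched in plain Scala (no Spark). This is a generic majority-vote step, not the asker's exact condition-based variant; the adjacency map `adj` and the tie-breaking rule (smallest label wins a tie) are assumptions for illustration:

```scala
// One LPA-style update: each node adopts the most frequent label
// among its neighbors (ties broken by the smallest label).
def mostFrequentNeighborLabel(node: Int,
                              adj: Map[Int, Seq[Int]],
                              labels: Map[Int, Int]): Int =
  adj(node)
    .map(labels)                 // labels of the neighbors
    .groupBy(identity)           // label -> occurrences
    .maxBy { case (label, occs) => (occs.size, -label) }
    ._1

// One synchronous iteration over all nodes.
def lpaStep(adj: Map[Int, Seq[Int]], labels: Map[Int, Int]): Map[Int, Int] =
  labels.map { case (node, _) => node -> mostFrequentNeighborLabel(node, adj, labels) }
```

On a small triangle graph where two nodes already share a label, the third node adopts that majority label in one step.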

The problem I have is this: because the nodes in the RDD and the variables are distributed across machines (each machine has its own copy of the variables) and executors don't communicate with each other, if one executor updates the Label_counter, the other executors won't be informed of the new value, and nodes may get wrong labels. Is it correct to use an Accumulator as the label counter in this case, since Accumulators are shared variables across machines, or is there another way to handle this problem?


Answer 1:


In Spark it is always complicated to compute index-like values, because they depend on data that is not all in the same partition. I can propose the following idea:

  1. Compute the number of times the condition is met in each partition.
  2. Compute the cumulative increment per partition, so that we know the initial increment of each partition.
  3. Increment the values within each partition, starting from that initial increment.

Here is what the code could look like. Let me start by setting up a few things.

// Let's define some condition on the node id
def condition(node : Long) = node % 10 == 1

// step 0, generate the data
import spark.implicits._ // needed for the 'id column syntax
val rdd = spark.range(34)
    .select('id + 1)                    // node ids 1..34
    .repartition(10)
    .rdd
    .map(r => (r.getAs[Long](0), 0))    // (node id, initial label 0)
    .sortBy(_._1).cache()
rdd.collect
rdd.collect
Array[(Long, Int)] = Array((1,0), (2,0), (3,0), (4,0), (5,0), (6,0), (7,0), (8,0),
 (9,0), (10,0), (11,0), (12,0), (13,0), (14,0), (15,0), (16,0), (17,0), (18,0),
 (19,0), (20,0), (21,0), (22,0), (23,0), (24,0), (25,0), (26,0), (27,0), (28,0),
 (29,0), (30,0), (31,0), (32,0), (33,0), (34,0))

Then the core of the solution:

// step 1 and 2
val partIncrInit = rdd
    // to each partition, we associate the number of times we need to increment
    .mapPartitionsWithIndex{ case (i,p) =>
        Iterator(i -> p.map(_._1).count(condition))
    }
    .collect.sorted // sort by partition index
    .map(_._2) // we don't need the index anymore
    .scanLeft(0)(_+_) // cumulated sum

// step 3, we increment each partition based on this initial increment.
val result = rdd
    .mapPartitionsWithIndex{ case (i, p) =>
        var incr = 0
        p.map{ case (node, value) =>
            if(condition(node))
                incr+=1
            (node, partIncrInit(i) + value + incr) 
        }
    }
result.collect

Array[(Long, Int)] = Array((1,1), (2,1), (3,1), (4,1), (5,1), (6,1), (7,1), (8,1),
 (9,1), (10,1), (11,2), (12,2), (13,2), (14,2), (15,2), (16,2), (17,2), (18,2),
 (19,2), (20,2), (21,3), (22,3), (23,3), (24,3), (25,3), (26,3), (27,3), (28,3),
 (29,3), (30,3), (31,4), (32,4), (33,4), (34,4))
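To make the arithmetic of the three steps easy to check outside of Spark, the same logic can be run on plain Scala collections, with the 34 nodes split into hypothetical "partitions" of 10 (the real Spark partitioning would not be this deterministic):

```scala
// The three steps on plain Scala collections, simulating partitions.
def condition(node: Long): Boolean = node % 10 == 1

// 4 "partitions": nodes 1-10, 11-20, 21-30, 31-34, all with label 0.
val partitions: Vector[Seq[(Long, Int)]] =
  (1L to 34L).map(n => (n, 0)).grouped(10).toVector

// step 1: number of times the condition is met in each partition
val counts = partitions.map(_.count { case (node, _) => condition(node) })

// step 2: cumulative sum -> the initial increment of each partition
val partIncrInit = counts.scanLeft(0)(_ + _)

// step 3: increment inside each partition, starting from its offset
val result = partitions.zipWithIndex.flatMap { case (p, i) =>
  var incr = 0
  p.map { case (node, value) =>
    if (condition(node)) incr += 1
    (node, partIncrInit(i) + value + incr)
  }
}
```

Since the condition fires at nodes 1, 11, 21, and 31 (one per simulated partition), `partIncrInit` is `0, 1, 2, 3, 4` and the labels come out in blocks of ten, matching the Spark output above.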


Source: https://stackoverflow.com/questions/57733691/what-kind-of-variable-select-for-incrementing-node-labels-in-a-community-detecti
