Question
I am considering that the stress of a vertex i is the number of shortest paths, over all pairs of vertices, that i belongs to.
I am trying to calculate it using NetworkX, and I have implemented it in three ways so far: a readable one, a dirty one, and the dirtiest one, but none of them is fast. Ideally, I would like it to be faster than the betweenness centrality implementation (source) in NetworkX. Is there a better way to calculate this? Thanks in advance for any suggestion, answer or comment. Here is what I have done so far:
P.S.: Here is a pastie with the code ready to go if you want to give it a try; thanks again.
Here is the part common to all three versions:
import networkx as nx
from collections import defaultdict
Dirtiest, brace yourselves:
def stress_centrality_dirtiest(g):
    stress = defaultdict(int)
    for a in nx.nodes_iter(g):
        for b in nx.nodes_iter(g):
            if a == b:
                continue
            # pred = nx.predecessor(g, b)  # for unweighted graphs
            pred, distance = nx.dijkstra_predecessor_and_distance(g, b)  # for weighted graphs
            if a not in pred:
                continue  # b is not reachable from a; skip this pair
            # walk the predecessor structure from a towards b, enumerating every shortest path
            path = [[a, 0]]
            path_length = 1
            index = 0
            while index >= 0:
                n, i = path[index]
                if n == b:
                    # count each interior vertex of this shortest path (exclude a and b)
                    for vertex, _ in path[1:index]:
                        stress[vertex] += 1
                if len(pred[n]) > i:
                    index += 1
                    if index == path_length:
                        path.append([pred[n][i], 0])
                        path_length += 1
                    else:
                        path[index] = [pred[n][i], 0]
                else:
                    index -= 1
                    if index >= 0:
                        path[index][1] += 1
    return stress
Dirty
def stress_centrality_dirty(g):
    stress = defaultdict(int)
    paths = nx.all_pairs_dijkstra_path(g)
    for item in paths.values():
        for element in item.values():
            if len(element) > 2:
                for vertex in element[1:-1]:
                    stress[vertex] += 1
    return stress
Readable
def stress_centrality_readable(g):
    stress = defaultdict(int)
    paths = nx.all_pairs_dijkstra_path(g)
    for source in nx.nodes_iter(g):
        for end in nx.nodes_iter(g):
            if source == end:
                continue
            path = paths[source][end]
            if len(path) > 2:  # the path must contain at least 3 vertices: source - another node - end
                for vertex in path[1:-1]:  # when counting occurrences, exclude the source and end vertices
                    stress[vertex] += 1
    return stress
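For completeness, a small driver along the following lines could exercise the three variants; the example graph below is made up purely for illustration and is not the content of the pastie:

if __name__ == "__main__":
    # tiny weighted example graph (illustrative only)
    g = nx.Graph()
    g.add_weighted_edges_from([
        ('a', 'b', 1), ('b', 'c', 1), ('a', 'c', 3),
        ('c', 'd', 1), ('b', 'd', 2),
    ])
    print(stress_centrality_readable(g))
    print(stress_centrality_dirty(g))
    print(stress_centrality_dirtiest(g))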
Answer 1:
The betweenness code you pointed to in NetworkX does almost what you want and can be adjusted easily.
In the betweenness function, if you call the following (instead of _accumulate_basic) during the "accumulate" stage, it should compute the stress centrality (untested):
def _accumulate_stress(betweenness, S, P, sigma, s):
    delta = dict.fromkeys(S, 0)
    while S:
        w = S.pop()
        for v in P[w]:
            delta[v] += (1.0 + delta[w])
        if w != s:
            betweenness[w] += sigma[w] * delta[w]
    return betweenness
See the paper Ulrik Brandes: On Variants of Shortest-Path Betweenness Centrality and their Generic Computation. Social Networks 30(2):136-145, 2008. http://www.inf.uni-konstanz.de/algo/publications/b-vspbc-08.pdf
The stress centrality algorithm is Algorithm 12.
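Putting the pieces together, here is a self-contained sketch of that scheme for unweighted graphs (the function name stress_centrality and the BFS bookkeeping are my own illustration rather than NetworkX code; for weighted graphs the BFS stage would be replaced by a Dijkstra-based traversal, as in the NetworkX betweenness implementation):

from collections import deque

def stress_centrality(G):
    # Brandes-style stress centrality (Algorithm 12 of the paper), unweighted graphs only
    stress = dict.fromkeys(G, 0.0)
    for s in G:
        # single-source BFS: count shortest paths (sigma) and record predecessors (P)
        S = []                        # nodes in order of non-decreasing distance from s
        P = {v: [] for v in G}        # shortest-path predecessors
        sigma = dict.fromkeys(G, 0.0)
        sigma[s] = 1.0
        dist = {s: 0}
        queue = deque([s])
        while queue:
            v = queue.popleft()
            S.append(v)
            for w in G[v]:
                if w not in dist:             # first time w is reached
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:    # edge (v, w) lies on a shortest path
                    sigma[w] += sigma[v]
                    P[w].append(v)
        # accumulation stage: the stress variant shown above
        delta = dict.fromkeys(S, 0.0)
        while S:
            w = S.pop()
            for v in P[w]:
                delta[v] += 1.0 + delta[w]
            if w != s:
                stress[w] += sigma[w] * delta[w]
    return stress

Note that, like the question's functions, this iterates over ordered source nodes, so each unordered pair of endpoints is counted twice.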
Answer 2:
Based on the answer I was given here, I tried to do exactly the same thing.
My attempt revolves around the use of the nx.all_shortest_paths(G, source, target)
function, which returns a generator:
counts = {}
for n in G.nodes():
    counts[n] = 0

for n in G.nodes():
    for j in G.nodes():
        if n != j:
            gener = nx.all_shortest_paths(G, source=n, target=j)  # a generator
            print('From node ' + str(n) + ' to ' + str(j))
            for p in gener:
                print(p)
                for v in p[1:-1]:  # count only interior vertices, per the stress definition
                    counts[v] += 1
            print('------')
I have tested this code with an NxN grid network of 100 nodes, and it took approximately 168 seconds to get the results. I am aware this is not the best answer, as the code is not optimized, but I thought you might want to know about it. Hopefully I can get some directions on how to improve my code.
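For context, a 100-node grid presumably means a 10x10 grid; the timing harness below is my own reconstruction of that benchmark (with the per-path print statements left out), not the original code:

import time
import networkx as nx

# a 10x10 grid graph has 100 nodes; assumed here to match the benchmark above
G = nx.grid_2d_graph(10, 10)

start = time.time()
counts = dict.fromkeys(G.nodes(), 0)
for n in G.nodes():
    for j in G.nodes():
        if n != j:
            for p in nx.all_shortest_paths(G, source=n, target=j):
                for v in p[1:-1]:  # interior vertices only
                    counts[v] += 1
print('Elapsed: %.1f seconds' % (time.time() - start))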
Source: https://stackoverflow.com/questions/17092415/faster-way-to-calculate-the-number-of-shortest-paths-a-vertex-belongs-to-using-n