[graph-tool] Performance question
Tiago de Paula Peixoto
tiago at skewed.de
Fri Mar 21 10:56:25 CET 2014
On 03/21/2014 09:09 AM, Hang Mang wrote:
> I have a graph with 1034 vertices and 53498 edges. I'm manually computing the preferential attachment index for the vertices, along with other indices. I'm aware that graph-tool has this implemented, but I'm doing it for personal reasons. However, I noticed that my computations are very slow: it took 2.7 minutes for the graph mentioned above. I'm not sure whether my algorithm is slow or whether something is wrong with graph-tool. I would be very thankful if someone could have a look at my code.
> def pa(graph):
>     """Calculate the preferential attachment index.
>
>     Returns S, the similarity matrix.
>     """
>     A = gts.adjacency(graph)
>     S = np.zeros(A.shape)
>     for i in xrange(S.shape[0]):
>         for j in xrange(S.shape[1]):
>             i_degree = graph.vertex(i).out_degree()
>             j_degree = graph.vertex(j).out_degree()
>             factor = i_degree * j_degree
>             S[i, j] = factor
>     return S
This is not the best way to do this. You have only 53498 nonzero entries
in your matrix, which has 1034^2 = 1069156 entries in total, so in the
loop above you spend the vast majority of the time setting the zero
entries of the matrix. The best way is to iterate through the edges.
Note that you do not need the adjacency matrix at all:
N = g.num_vertices()
S = np.zeros((N, N))
d = g.degree_property_map("out")  # out-degree of every vertex
for e in g.edges():
    s, t = e.source(), e.target()
    S[int(s), int(t)] = d[s] * d[t]
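If you actually need the score for every pair of vertices (not only the pairs connected by an edge), note that the full matrix is just the outer product of the degree vector, which NumPy computes in one vectorized call. A minimal sketch with a made-up degree vector (with graph-tool you would use `d.a`, the array underlying the degree property map):

```python
import numpy as np

# Hypothetical degrees of a small 4-vertex graph, purely for illustration.
deg = np.array([3, 1, 2, 2])

# S[i, j] = deg[i] * deg[j] for all pairs at once: the outer product.
S = np.outer(deg, deg)

print(S[0, 2])  # 3 * 2 = 6
```

This avoids the Python-level double loop entirely, at the cost of materializing the dense N x N matrix.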
Using graph-tool will not magically make things faster; you still need
to use the best algorithms. Furthermore, if your computationally
intensive loops are implemented in pure Python, graph-tool will provide
no advantage at all compared to networkx or other pure-Python libraries.
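To see how much the pure-Python double loop itself costs, one can time it against the vectorized equivalent on a random degree vector of the same size as your graph. The degrees here are made up and the absolute timings will vary by machine; the point is only the relative gap:

```python
import timeit
import numpy as np

# Stand-in degree vector for a 1034-vertex graph (illustrative only).
deg = np.random.randint(1, 50, size=1034)

def loop_version():
    # Pure-Python nested loop, as in the original code.
    n = len(deg)
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            S[i, j] = deg[i] * deg[j]
    return S

def vectorized_version():
    # Single vectorized NumPy call.
    return np.outer(deg, deg)

t_loop = timeit.timeit(loop_version, number=1)
t_vec = timeit.timeit(vectorized_version, number=1)
print("loop: %.3fs, vectorized: %.3fs" % (t_loop, t_vec))
```

Both versions produce the same matrix; only the place where the looping happens (Python bytecode vs. compiled NumPy code) differs.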
Tiago de Paula Peixoto <tiago at skewed.de>