Question
Let's assume the following data structure, with numpy arrays ids and parent_ids (the parent_id of the root element is -1):
import numpy as np

class MyStructure(object):
    def __init__(self):
        """
        Default structure for now:

              1
             / \
            2   3
               / \
              4   5
        """
        self.ids = np.array([1,2,3,4,5])
        self.parent_ids = np.array([-1, 1, 1, 3, 3])

    def id_successors(self, idOfInterest):
        """
        Return logical index.
        """
        return self.parent_ids == idOfInterest

    def subtree(self, newRootElement):
        """
        Return logical index pointing to elements of the subtree.
        """
        init_vector = np.zeros(len(self.ids), bool)
        init_vector[np.where(self.ids==newRootElement)[0]] = 1
        if sum(self.id_successors(newRootElement))==0:
            return init_vector
        else:
            subtree_vec = init_vector
            for sucs in self.ids[self.id_successors(newRootElement)==1]:
                subtree_vec += self.subtree(sucs)
            return subtree_vec
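For concreteness (my addition, not part of the original post), this is what the subtree method returns for the example tree above:

    s = MyStructure()
    print(s.subtree(3))   # -> [False False  True  True  True]  (nodes 3, 4, 5)
    print(s.subtree(1))   # -> [ True  True  True  True  True]  (whole tree)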
This gets really slow for many ids (>1000). Is there a faster way to implement this?
Answer 1:
Have you tried the psyco module, if you are using Python 2.6? It can sometimes speed code up dramatically.
Have you considered a recursive data structure, i.e. nested lists?
Your example expressed as a standard list would be:
[1, 2, [3, [4],[5]]]
or
[1, [2, None, None], [3, [4, None, None],[5, None, None]]]
As formatted by my pretty printer:
[1,
 [2, None, None],
 [3,
  [4, None, None],
  [5, None, None]]]
Subtrees are readily available there, at the cost of some time spent inserting values into the right place in the tree. It is also worthwhile to check whether the heapq module fits your needs.
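For instance, with the second representation above ([value, child, child] with None for missing children), a subtree can be located with a small recursive search. This is only my sketch of the idea; locate_subtree is a hypothetical helper, not something from the answer:

    def locate_subtree(tree, root_id):
        """Return the nested list whose first element equals root_id, or None."""
        if tree is None:
            return None
        if tree[0] == root_id:
            return tree
        for child in tree[1:]:
            found = locate_subtree(child, root_id)
            if found is not None:
                return found
        return None

    tree = [1, [2, None, None], [3, [4, None, None], [5, None, None]]]
    print(locate_subtree(tree, 3))   # -> [3, [4, None, None], [5, None, None]]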
Guido himself also gives some insight into traversal and trees at http://python.org/doc/essays/graphs.html; maybe you are already aware of it.
Here is some more advanced-looking tree material: the blist module, which was actually proposed as a replacement for Python's basic list type, but rejected in that capacity.
Answer 2:
I think it's not the recursion as such that's hurting you, but the multitude of very wide operations (over all elements) for every step. Consider:
init_vector[np.where(self.ids==newRootElement)[0]] = 1
That runs a scan through all elements and computes the index of every matching element, even though with unique IDs there is at most one match. This kind of single-value lookup is what the index method of lists, tuples, and arrays is for - and it is faster there. But if IDs are unique, init_vector is simply ids==newRootElement anyway.
if sum(self.id_successors(newRootElement))==0:
Again a linear scan of every element, followed by a reduction over the whole array, just to check whether there are any matches. Use any for this type of operation; but once again we don't even need to check all elements - "if newRootElement not in self.parent_ids" does the job. In fact the check isn't needed at all, since it's perfectly valid to run a for loop over an empty sequence.
Finally there's the last loop:
for sucs in self.ids[self.id_successors(newRootElement)==1]:
This time, an id_successors call is repeated, and then the result is compared to 1 needlessly. Only after that comes the recursion, making sure all the above operations are repeated (for different newRootElement) for each branch.
The whole code is a reversed traversal of a unidirectional tree. We have parents and need children. If we're to do wide operations such as numpy is designed for, we'd best make them count - and thus the only operation we care about is building a list of children per parent. That's not very hard to do with one iteration:
import collections

children = collections.defaultdict(list)
for i, p in zip(ids, parent_ids):
    children[p].append(i)

def subtree(i):
    return i, map(subtree, children[i])
The exact structure you need will depend on more factors, such as how often the tree changes, how large it is, how much it branches, and how large and how many subtrees you need to request. The dictionary+list structure above isn't terribly memory efficient, for instance. Your example is also sorted, which could make the operation even easier.
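For illustration, here is a sketch (my addition, not part of the answer) of how the children dictionary can be turned back into the boolean mask the question's subtree method produced; subtree_mask is a hypothetical name:

    import collections
    import numpy as np

    ids = np.array([1, 2, 3, 4, 5])
    parent_ids = np.array([-1, 1, 1, 3, 3])

    children = collections.defaultdict(list)
    for i, p in zip(ids, parent_ids):
        children[p].append(i)

    def subtree_mask(root):
        """Boolean mask over ids marking the subtree rooted at root."""
        mask = np.zeros(len(ids), dtype=bool)
        stack = [root]
        while stack:                      # explicit stack instead of recursion
            node = stack.pop()
            mask[ids == node] = True
            stack.extend(children[node])
        return mask

    print(subtree_mask(3))                # -> [False False  True  True  True]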
Answer 3:
In theory, every algorithm can be written iteratively as well as recursively. But this is a fallacy (like Turing-completeness). In practice, walking an arbitrarily-nested tree via iteration is generally not feasible. I doubt there is much to optimize (at least you're modifying subtree_vec in-place). Doing x on thousands of elements is inherently damn expensive, no matter whether you do it iteratively or recursively. At most there are a few micro-optimizations possible on the concrete implementation, which will at most yield <5% improvement. Best bet would be caching/memoization, if you need the same data several times. Maybe someone has a fancy O(log n) algorithm for your specific tree structure up their sleeve, I don't even know if one is possible (I'd assume no, but tree manipulation isn't my staff of life).
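As a rough illustration of that caching idea (my sketch, not from the answer), results can be remembered per root element for as long as the tree itself does not change:

    def memoize(func):
        """Cache results per argument tuple; valid only while the tree is static."""
        cache = {}
        def wrapper(*args):
            if args not in cache:
                cache[args] = func(*args)
            return cache[args]
        return wrapper

    # Usage sketch with the question's class (callers must not modify the
    # returned arrays in place, since they are shared via the cache):
    # struct = MyStructure()
    # struct.subtree = memoize(struct.subtree)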
Answer 4:
This is my answer (written without access to your class, so the interface is slightly different, but I'm attaching it as is so that you can test if it is fast enough):
=======================file graph_array.py==========================
import collections
import numpy

def find_subtree(pids, subtree_id):
    N = len(pids)
    assert 1 <= subtree_id <= N
    subtreeids = numpy.zeros(pids.shape, dtype=bool)
    todo = collections.deque([subtree_id])
    iter = 0
    while todo:
        id = todo.popleft()
        assert 1 <= id <= N
        subtreeids[id - 1] = True
        sons = (pids == id).nonzero()[0] + 1
        #print 'id={0} sons={1} todo={2}'.format(id, sons, todo)
        todo.extend(sons)
        iter = iter + 1
        if iter > N:
            raise ValueError()
    return subtreeids
=======================file graph_array_test.py==========================
import numpy
from graph_array import find_subtree

def _random_graph(n, maxsons):
    import random
    pids = numpy.zeros(n, dtype=int)
    sons = numpy.zeros(n, dtype=int)
    available = []
    for id in xrange(1, n+1):
        if available:
            pid = random.choice(available)
            sons[pid - 1] += 1
            if sons[pid - 1] == maxsons:
                available.remove(pid)
        else:
            pid = -1
        pids[id - 1] = pid
        available.append(id)
    assert sons.max() <= maxsons
    return pids

def verify_subtree(pids, subtree_id, subtree):
    ids = set(subtree.nonzero()[0] + 1)
    sons = set(ids) - set([subtree_id])
    fathers = set(pids[id - 1] for id in sons)
    leafs = set(id for id in ids if not (pids == id).any())
    rest = set(xrange(1, pids.size+1)) - fathers - leafs
    assert fathers & leafs == set()
    assert fathers | leafs == ids
    assert ids & rest == set()

def test_linear_graph_gen(n, genfunc, maxsons):
    assert maxsons == 1
    pids = genfunc(n, maxsons)
    last = -1
    seen = set()
    for _ in xrange(pids.size):
        id = int((pids == last).nonzero()[0]) + 1
        assert id not in seen
        seen.add(id)
        last = id
    assert seen == set(xrange(1, pids.size + 1))

def test_case1():
    """
          1
         / \
        2   4
       /
      3
    """
    pids = numpy.array([-1, 1, 2, 1])
    subtrees = {1: [True, True, True, True],
                2: [False, True, True, False],
                3: [False, False, True, False],
                4: [False, False, False, True]}
    for id in xrange(1, 5):
        sub = find_subtree(pids, id)
        assert (sub == numpy.array(subtrees[id])).all()
        verify_subtree(pids, id, sub)

def test_random(n, genfunc, maxsons):
    pids = genfunc(n, maxsons)
    for subtree_id in numpy.arange(1, n+1):
        subtree = find_subtree(pids, subtree_id)
        verify_subtree(pids, subtree_id, subtree)

def test_timing(n, genfunc, maxsons):
    import time
    pids = genfunc(n, maxsons)
    t = time.time()
    for subtree_id in numpy.arange(1, n+1):
        subtree = find_subtree(pids, subtree_id)
    t = time.time() - t
    print 't={0}s = {1:.2}ms/subtree = {2:.5}ms/subtree/node '.format(
        t, t / n * 1000, t / n**2 * 1000),

def pytest_generate_tests(metafunc):
    if 'case' in metafunc.function.__name__:
        return
    ns = [1, 2, 3, 4, 5, 10, 20, 50, 100, 1000]
    if 'timing' in metafunc.function.__name__:
        ns += [10000, 100000, 1000000]
        pass
    for n in ns:
        func = _random_graph
        for maxsons in sorted(set([1, 2, 3, 4, 5, 10, (n+1)//2, n])):
            metafunc.addcall(
                funcargs=dict(n=n, genfunc=func, maxsons=maxsons),
                id='n={0} {1.__name__}/{2}'.format(n, func, maxsons))
            if 'linear' in metafunc.function.__name__:
                break
===================py.test --tb=short -v -s test_graph_array.py============
...
test_graph_array.py:72: test_timing[n=1000 _random_graph/1] t=13.4850590229s = 13.0ms/subtree = 0.013485ms/subtree/node PASS
test_graph_array.py:72: test_timing[n=1000 _random_graph/2] t=0.318281888962s = 0.32ms/subtree = 0.00031828ms/subtree/node PASS
test_graph_array.py:72: test_timing[n=1000 _random_graph/3] t=0.265519142151s = 0.27ms/subtree = 0.00026552ms/subtree/node PASS
test_graph_array.py:72: test_timing[n=1000 _random_graph/4] t=0.24147105217s = 0.24ms/subtree = 0.00024147ms/subtree/node PASS
test_graph_array.py:72: test_timing[n=1000 _random_graph/5] t=0.211434841156s = 0.21ms/subtree = 0.00021143ms/subtree/node PASS
test_graph_array.py:72: test_timing[n=1000 _random_graph/10] t=0.178458213806s = 0.18ms/subtree = 0.00017846ms/subtree/node PASS
test_graph_array.py:72: test_timing[n=1000 _random_graph/500] t=0.209936141968s = 0.21ms/subtree = 0.00020994ms/subtree/node PASS
test_graph_array.py:72: test_timing[n=1000 _random_graph/1000] t=0.245707988739s = 0.25ms/subtree = 0.00024571ms/subtree/node PASS
...
Here every subtree of every tree is extracted, and the interesting value is the mean time to extract one subtree: roughly 0.2 ms per subtree, except for strictly linear trees (maxsons=1), which are much slower; I'm not sure what is happening there.
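For reference (my addition, not part of the answer), this is how find_subtree applies to the question's example tree, assuming graph_array.py is saved as shown above and taking the ids to be implicitly 1..N:

    import numpy
    from graph_array import find_subtree

    pids = numpy.array([-1, 1, 1, 3, 3])   # question's parent_ids; node i has parent pids[i-1]
    print(find_subtree(pids, 3))           # -> [False False  True  True  True]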
Source: https://stackoverflow.com/questions/3350413/is-there-a-faster-way-to-get-subtrees-from-tree-like-structures-in-python-than-t