Python's sys
module provides a function setrecursionlimit
that lets you change Python's maximum recursion limit. A word of caution, though:
You shouldn't overuse recursive calls in CPython. It has no tail call optimization, so function calls use a lot of memory and processing time. Those limits might not apply to other implementations; they aren't part of the language specification.
In CPython, recursion is fine for traversing data structures (where a limit of 1000 should be enough for everybody) but not for algorithms. If I were to implement, say, graph-related algorithms and hit the recursion limit, I would either implement my own stack and use iteration, or look for libraries implemented in C/C++/whatever, before raising the limit by hand.
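To illustrate the explicit-stack idea, here is a minimal sketch: a depth-first traversal that uses a plain list as its stack, so the traversal depth no longer depends on the interpreter's recursion limit. The dict-of-adjacency-lists graph representation is just an assumption for the example.
def dfs_iterative(graph, start):
    # 'graph' is assumed to map each node to a list of neighbours
    visited = set()
    stack = [start]  # a plain list as an explicit stack, instead of the call stack
    order = []
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # reversed() keeps the visit order close to the recursive version,
        # but any push order is still a valid DFS
        for neighbour in reversed(graph.get(node, [])):
            if neighbour not in visited:
                stack.append(neighbour)
    return order

# usage:
# dfs_iterative({'a': ['b', 'c'], 'b': ['d'], 'c': [], 'd': []}, 'a')  # -> ['a', 'b', 'd', 'c']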
On Windows (at least), sys.setrecursionlimit
isn't the full story. The hard limit is on a per-thread basis, and you need to call threading.stack_size
and create a new thread once your recursion needs more than the default thread stack (I think it's 1MB, but I'm not sure). I've used this approach to get a 64MB stack:
import sys
import threading

def main():
    # placeholder: put your own deeply recursive entry point here
    pass

threading.stack_size(67108864)  # 64MB stack
sys.setrecursionlimit(2 ** 20)  # something real big
# you actually hit the 64MB limit first
# going by other answers, could just use 2**32-1 -- though the value has to
# fit in a C int, so anything above 2**31-1 may raise OverflowError
# only new threads get the redefined stack size
thread = threading.Thread(target=main)
thread.start()
thread.join()
I haven't tried to see what limits there might be on threading.stack_size
itself, but feel free to experiment... that's where you need to look.
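If you do experiment, a rough sketch like this shows what your platform accepts up front. Note it only checks whether threading.stack_size rejects a value outright (it raises ValueError for invalid sizes, currently anything below about 32KiB, and RuntimeError where changing the size isn't supported at all); whether a thread with that stack can actually be created is still up to the OS.
import threading

# try a few candidate stack sizes and report which ones are rejected outright
for size in (16 * 1024, 32 * 1024, 1024 * 1024, 64 * 1024 * 1024, 512 * 1024 * 1024):
    try:
        threading.stack_size(size)
        print(f"{size // 1024:>7} KiB accepted")
    except (ValueError, RuntimeError) as exc:
        print(f"{size // 1024:>7} KiB rejected: {exc}")

threading.stack_size(0)  # restore the platform default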
In summary, sys.setrecursionlimit
is just a limit enforced by the interpreter itself; exceed it and you get a RecursionError you can catch. threading.stack_size
lets you manipulate the actual limit imposed by the OS. If you hit the OS limit first, Python will just crash completely (a stack overflow, not an exception).
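A quick way to see the soft (interpreter) limit in action, with the defaults left alone:
import sys

def recurse(depth=0):
    return recurse(depth + 1)

print("recursion limit:", sys.getrecursionlimit())  # 1000 by default
try:
    recurse()
except RecursionError as exc:
    # the interpreter limit is catchable; blowing the OS thread stack is not
    print("caught:", exc)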