Question
I have some very simple python code that runs a bunch of inputs through various processes via ThreadPoolExecutor(). Now, sometimes one or more of the threads dies quietly. It is actually great that the rest of the threads continue on and the code completes, but I would like to put together some type of summary that tells me which, if any, of the threads have died.
I've found several examples where folks want the whole thing to shut down, but haven't seen anything yet where the process continues on and the threads that have hit errors are just reported on after the fact.
Any/all thoughts greatly appreciated!
Thanks!
import concurrent.futures as cf

with cf.ThreadPoolExecutor() as executor:
    executor.map(process_a, process_a_inputs)
    executor.map(process_b, process_b_inputs)
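For context, the threads are not really "dying quietly": an exception raised inside a `map` worker is captured and only re-raised when the returned results iterator is consumed. Since the snippet above never iterates the iterators `map` returns, failures pass silently. A minimal sketch demonstrating this (with a hypothetical `flaky` worker standing in for the real processes):

```python
import concurrent.futures as cf

def flaky(x):
    # Raise for one input to simulate a worker that "dies quietly".
    if x == 2:
        raise ValueError(f"bad input: {x}")
    return x * 10

with cf.ThreadPoolExecutor() as executor:
    # The result iterator is never consumed, so the ValueError is discarded.
    executor.map(flaky, [1, 2, 3])

with cf.ThreadPoolExecutor() as executor:
    results = executor.map(flaky, [1, 2, 3])
    try:
        collected = list(results)  # iterating re-raises the first failure
    except ValueError as exc:
        collected = str(exc)
# collected is now "bad input: 2"
```

Note that iterating `map` results stops at the first exception, which is exactly why it cannot report *all* failed inputs on its own.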
Answer 1:
`Executor.map` does not support gathering more than one exception. However, its code can easily be adapted to return the arguments on which a failure occurred.
def attempt(executor: 'Executor', fn: 'Callable', *iterables):
    """Attempt to ``map(fn, *iterables)`` and return the args that caused a failure"""
    # Submit on the executor passed in (the original snippet mistakenly used ``self.submit``)
    future_args = [(executor.submit(fn, *args), args) for args in zip(*iterables)]

    def failure_iterator():
        future_args.reverse()
        while future_args:
            future, args = future_args.pop()
            try:
                future.result()
            except BaseException:
                del future
                yield args
    return failure_iterator()
This can be used to concurrently "map" arguments to functions, and later retrieve any failures.
import concurrent.futures as cf

with cf.ThreadPoolExecutor() as executor:
    a_failures = attempt(executor, process_a, process_a_inputs)
    b_failures = attempt(executor, process_b, process_b_inputs)
    for args in a_failures:
        print(f'failed to map {args} onto a')
    for args in b_failures:
        print(f'failed to map {args} onto b')
Source: https://stackoverflow.com/questions/61122582/how-can-i-catch-when-a-thread-dies-in-threadpoolexecutor