Asynchronous multiprocessing with a worker pool in Python: how to keep going after timeout?

悲哀的现实 2020-12-10 15:23

I would like to run a number of jobs using a pool of processes and apply a given timeout after which a job should be killed and replaced by another working on the next task.

3 Answers
  •  囚心锁ツ
    2020-12-10 16:17

    The pebble library's process pool was built to solve exactly this kind of issue. It supports per-task timeouts, letting you detect timed-out jobs and recover easily while the pool keeps working on the remaining tasks.

    from pebble import ProcessPool
    from concurrent.futures import TimeoutError
    
    with ProcessPool() as pool:
        future = pool.schedule(function, args=[1, 2], timeout=5)
    
        try:
            result = future.result()
        except TimeoutError as error:
            print("Function took longer than %d seconds" % error.args[1])
    

    For your specific example:

    from pebble import ProcessPool
    from concurrent.futures import TimeoutError
    
    results = []
    
    with ProcessPool(max_workers=4) as pool:
        future = pool.map(Check, range(10), timeout=5)
    
        iterator = future.result()
    
        # iterate over all results; if a computation timed out,
        # print a message and continue to the next result
        while True:
            try:
                result = next(iterator)
                results.append(result)
            except StopIteration:
                break  
            except TimeoutError as error:
                print("function took longer than %d seconds" % error.args[1])
    
    print(results)
    
