How to break time.sleep() in a python concurrent.futures

Backend · Open · 3 answers · 878 views
渐次进展 2021-02-15 23:29

I am playing around with concurrent.futures.

Currently my future calls time.sleep(secs).

It seems that Future.cancel() does less than I thought.

3 Answers
  • 2021-02-16 00:09

    I do not know much about concurrent.futures, but you can use this logic to break out of the sleep. Instead of a single time.sleep(secs) call, use a loop of short sleeps:

    from time import sleep

    for i in range(secs):
        sleep(1)
    

    A break (triggered by checking a flag on each iteration) can then be used to exit the loop early.
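    A minimal sketch of that idea, using a threading.Event as the flag (the names `stop` and `sleep_in_steps` are illustrative, not from the original answer):

    ```python
    from time import sleep
    import threading

    # Flag that another thread can set to interrupt the sleep
    stop = threading.Event()

    def sleep_in_steps(secs):
        # Sleep one second at a time; bail out early if `stop` is set
        for _ in range(secs):
            if stop.is_set():
                return False   # interrupted
            sleep(1)
        return True            # slept the full duration
    ```

    Calling stop.set() from another thread makes the loop exit within at most one second, instead of blocking for the full duration.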

  • 2021-02-16 00:20

    If you submit a function to a ThreadPoolExecutor, the executor will run the function in a thread and store its return value in the Future object. Since the number of concurrent threads is limited, you have the option to cancel the pending execution of a future, but once control in the worker thread has been passed to the callable, there's no way to stop execution.

    Consider this code:

    import concurrent.futures as f
    import time
    
    T = f.ThreadPoolExecutor(1) # Run at most one function concurrently
    def block5():
        time.sleep(5)
        return 1
    q = T.submit(block5)
    m = T.submit(block5)
    
    print(q.cancel())  # Will fail (returns False), because q is already running
    print(m.cancel())  # Will work (returns True), because q is blocking the only thread, so m is still queued
    

    In general, whenever you want something to be cancellable, you yourself are responsible for making it so.
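    One way to do that with a ThreadPoolExecutor is cooperative cancellation: have the worker wait on a threading.Event instead of calling time.sleep(), so the caller can wake it early. A sketch (the `cancel` event and return strings are illustrative):

    ```python
    import concurrent.futures
    import threading

    # Event the caller sets to ask the running worker to stop
    cancel = threading.Event()

    def block5():
        # Wait up to 5 seconds, but return immediately if `cancel` is set
        if cancel.wait(timeout=5):
            return "cancelled"
        return "finished"

    with concurrent.futures.ThreadPoolExecutor(1) as pool:
        fut = pool.submit(block5)
        cancel.set()          # signal the already-running worker
        print(fut.result())   # the worker wakes up and returns "cancelled"
    ```

    Unlike Future.cancel(), this stops a function that has already started, because the function itself checks for the signal.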

    There are some off-the-shelf options available, though. Consider asyncio, for example, which also has an example using sleep. It circumvents the issue as follows: whenever a potentially blocking operation is about to be called, control is instead returned to an event loop running in the outermost context, together with a note that execution should continue once the result is available - or, in your case, after n seconds have passed.
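    With asyncio, a sleeping task can actually be cancelled mid-sleep, because await asyncio.sleep() yields control to the event loop. A small sketch (the coroutine names are illustrative):

    ```python
    import asyncio

    async def sleeper():
        await asyncio.sleep(5)   # cancellable, unlike time.sleep(5)
        return 1

    async def main():
        task = asyncio.create_task(sleeper())
        await asyncio.sleep(0.1)          # let the task start sleeping
        task.cancel()                     # raises CancelledError inside the task
        try:
            await task
        except asyncio.CancelledError:
            return "cancelled"

    print(asyncio.run(main()))
    ```

    Here task.cancel() interrupts the sleep immediately, which is exactly what Future.cancel() cannot do for a thread blocked in time.sleep().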

  • 2021-02-16 00:22

    As described in the documentation, you can use a with statement to ensure threads are cleaned up promptly, as in the example below:

    import concurrent.futures
    import urllib.request
    
    URLS = ['http://www.foxnews.com/',
            'http://www.cnn.com/',
            'http://europe.wsj.com/',
            'http://www.bbc.co.uk/',
            'http://some-made-up-domain.com/']
    
    # Retrieve a single page and report the URL and contents
    def load_url(url, timeout):
        with urllib.request.urlopen(url, timeout=timeout) as conn:
            return conn.read()
    
    # We can use a with statement to ensure threads are cleaned up promptly
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        # Start the load operations and mark each future with its URL
        future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
        for future in concurrent.futures.as_completed(future_to_url):
            url = future_to_url[future]
            try:
                data = future.result()
            except Exception as exc:
                print('%r generated an exception: %s' % (url, exc))
            else:
                print('%r page is %d bytes' % (url, len(data)))
    