How can I use threading in Python?

Asked 2020-11-21 04:54

I am trying to understand threading in Python. I've looked at the documentation and examples, but quite frankly, many examples are overly sophisticated and I'm having trouble understanding them. How do you clearly show tasks being divided for multi-threading?

19 Answers
  • 2020-11-21 05:06

    Since this question was asked in 2010, there has been real simplification in how to do simple multithreading in Python with map and pool.

    The code below comes from an article/blog post that you should definitely check out (no affiliation) - Parallelism in one line: A Better Model for Day to Day Threading Tasks. I'll summarize below - it ends up being just a few lines of code:

    from multiprocessing.dummy import Pool as ThreadPool
    pool = ThreadPool(4)
    results = pool.map(my_function, my_array)
    

    Which is the multithreaded version of:

    results = []
    for item in my_array:
        results.append(my_function(item))
    

    Description

    Map is a cool little function, and the key to easily injecting parallelism into your Python code. For those unfamiliar, map is something lifted from functional languages like Lisp. It is a function which maps another function over a sequence.

    Map handles the iteration over the sequence for us, applies the function, and stores all of the results in a handy list at the end.
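
    For instance, with the built-in map (a tiny illustration of my own, not from the original post):

    # Apply len to each string; in Python 3, map returns a lazy iterator
    lengths = list(map(len, ['a', 'bb', 'ccc']))
    print(lengths)  # [1, 2, 3]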



    Implementation

    Parallel versions of the map function are provided by two libraries: multiprocessing, and also its little-known, but equally fantastic, stepchild: multiprocessing.dummy.

    multiprocessing.dummy is exactly the same as the multiprocessing module, but uses threads instead (an important distinction: use multiple processes for CPU-intensive tasks, and threads for I/O-bound tasks):

    multiprocessing.dummy replicates the API of multiprocessing, but is no more than a wrapper around the threading module.
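
    In practice that means you can switch between processes and threads just by changing one import. A minimal sketch (square is a made-up function, purely for illustration):

    from multiprocessing import Pool          # process-based: CPU-bound work
    # from multiprocessing.dummy import Pool  # thread-based: I/O-bound work

    def square(x):  # hypothetical function, just for illustration
        return x * x

    if __name__ == '__main__':
        with Pool(4) as pool:
            print(pool.map(square, range(10)))  # [0, 1, 4, ..., 81]

    With that in mind, here is the answer's I/O-bound example: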

    import urllib.request
    from multiprocessing.dummy import Pool as ThreadPool
    
    urls = [
      'http://www.python.org',
      'http://www.python.org/about/',
      'http://www.onlamp.com/pub/a/python/2003/04/17/metaclasses.html',
      'http://www.python.org/doc/',
      'http://www.python.org/download/',
      'http://www.python.org/getit/',
      'http://www.python.org/community/',
      'https://wiki.python.org/moin/',
    ]
    
    # Make the Pool of workers
    pool = ThreadPool(4)
    
    # Open the URLs in their own threads
    # and return the results
    results = pool.map(urllib.request.urlopen, urls)
    
    # Close the pool and wait for the work to finish
    pool.close()
    pool.join()
    

    And the timing results:

    Single thread:   14.4 seconds
           4 Pool:   3.1 seconds
           8 Pool:   1.4 seconds
          13 Pool:   1.3 seconds
    

    Passing multiple arguments (works like this only in Python 3.3 and later):

    To pass multiple arrays:

    results = pool.starmap(function, zip(list_a, list_b))
    

    Or to pass a constant and an array:

    results = pool.starmap(function, zip(itertools.repeat(constant), list_a))
    

    If you are using an earlier version of Python, you can pass multiple arguments via a workaround.
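
    For Python 3.3+, here is the whole thing as a self-contained sketch (the add function and the input lists are made up for illustration):

    import itertools
    from multiprocessing.dummy import Pool as ThreadPool

    def add(x, y):  # hypothetical function, purely for illustration
        return x + y

    list_a = [1, 2, 3]
    list_b = [10, 20, 30]

    with ThreadPool(2) as pool:
        # Two arrays zipped into argument tuples
        print(pool.starmap(add, zip(list_a, list_b)))                  # [11, 22, 33]
        # A constant paired with each element of list_a
        print(pool.starmap(add, zip(itertools.repeat(100), list_a)))  # [101, 102, 103]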

    (Thanks to user136036 for the helpful comment.)

  • 2020-11-21 05:08

    Python 3 includes the concurrent.futures module for launching parallel tasks, which makes our work easier.

    It offers both thread pooling and process pooling.

    The following examples give some insight:

    ThreadPoolExecutor example (from the Python documentation)

    import concurrent.futures
    import urllib.request
    
    URLS = ['http://www.foxnews.com/',
            'http://www.cnn.com/',
            'http://europe.wsj.com/',
            'http://www.bbc.co.uk/',
            'http://some-made-up-domain.com/']
    
    # Retrieve a single page and report the URL and contents
    def load_url(url, timeout):
        with urllib.request.urlopen(url, timeout=timeout) as conn:
            return conn.read()
    
    # We can use a with statement to ensure threads are cleaned up promptly
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        # Start the load operations and mark each future with its URL
        future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
        for future in concurrent.futures.as_completed(future_to_url):
            url = future_to_url[future]
            try:
                data = future.result()
            except Exception as exc:
                print('%r generated an exception: %s' % (url, exc))
            else:
                print('%r page is %d bytes' % (url, len(data)))
    
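
    concurrent.futures also provides executor.map, which is convenient when you just want the results back in input order. A sketch (with a caveat: an exception from any call is re-raised when its result is reached, so the made-up domain above would abort this loop):

    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        # Results arrive in the same order as URLS
        for url, data in zip(URLS, executor.map(lambda u: load_url(u, 60), URLS)):
            print('%r page is %d bytes' % (url, len(data)))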

    ProcessPoolExecutor example (from the Python documentation)

    import concurrent.futures
    import math
    
    PRIMES = [
        112272535095293,
        112582705942171,
        112272535095293,
        115280095190773,
        115797848077099,
        1099726899285419]
    
    def is_prime(n):
        if n < 2:
            return False
        if n == 2:
            return True
        if n % 2 == 0:
            return False
    
        sqrt_n = int(math.floor(math.sqrt(n)))
        for i in range(3, sqrt_n + 1, 2):
            if n % i == 0:
                return False
        return True
    
    def main():
        with concurrent.futures.ProcessPoolExecutor() as executor:
            for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
                print('%d is prime: %s' % (number, prime))
    
    if __name__ == '__main__':
        main()
    
  • 2020-11-21 05:08

    I would like to contribute with a simple example and the explanations I've found useful when I had to tackle this problem myself.

    In this answer you will find some information about Python's GIL (global interpreter lock) and a simple day-to-day example written using multiprocessing.dummy plus some simple benchmarks.

    Global Interpreter Lock (GIL)

    Python (more precisely CPython, the standard interpreter) doesn't allow multi-threading in the truest sense of the word. It has a multi-threading package, but if you want to multi-thread to speed your code up, then it's usually not a good idea to use it.

    Python has a construct called the global interpreter lock (GIL). The GIL makes sure that only one of your 'threads' can execute at any one time. A thread acquires the GIL, does a little work, then passes the GIL onto the next thread.

    This happens very quickly so to the human eye it may seem like your threads are executing in parallel, but they are really just taking turns using the same CPU core.

    All this GIL passing adds overhead to execution. This means that if you want to make your code run faster then using the threading package often isn't a good idea.

    There are reasons to use Python's threading package. If you want to run some things simultaneously, and efficiency is not a concern, then it's totally fine and convenient. Or if you are running code that needs to wait for something (like some I/O) then it could make a lot of sense. But the threading library won't let you use extra CPU cores.
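
    You can see the GIL's effect with a small experiment: time a CPU-bound function run twice sequentially versus in two threads (a rough sketch of my own; exact numbers vary by machine):

    import time
    from threading import Thread

    def count(n):
        # Pure-Python busy loop: CPU-bound, so the GIL dominates
        while n > 0:
            n -= 1

    N = 10_000_000

    start = time.time()
    count(N)
    count(N)
    print("sequential: %.2fs" % (time.time() - start))

    start = time.time()
    t1 = Thread(target=count, args=(N,))
    t2 = Thread(target=count, args=(N,))
    t1.start(); t2.start()
    t1.join(); t2.join()
    print("two threads: %.2fs" % (time.time() - start))
    # On CPython the threaded run is typically no faster (often slower),
    # because only one thread can execute Python bytecode at a time.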

    Multi-threading can be outsourced to the operating system (by doing multi-processing), to some external application that calls your Python code (for example, Spark or Hadoop), or to some code that your Python code calls (for example, you could have your Python code call a C function that does the expensive multi-threaded stuff).

    Why This Matters

    Because lots of people spend a lot of time trying to find bottlenecks in their fancy Python multi-threaded code before they learn what the GIL is.

    Once this information is clear, here's my code:

    #!/usr/bin/env python
    from multiprocessing.dummy import Pool
    from subprocess import PIPE,Popen
    import time
    import os
    
    # In the variable pool_size we define the "parallelness".
    # For CPU-bound tasks, it doesn't make sense to create more pool workers
    # than you have cores to run them on.
    #
    # On the other hand, for I/O-bound tasks, it may make sense
    # to create quite a few more pool workers than cores, since the workers
    # will probably spend most of their time blocked (waiting for I/O to complete).
    pool_size = 8
    
    def do_ping(ip):
        if os.name == 'nt':
            print ("Using Windows Ping to " + ip)
            proc = Popen(['ping', ip], stdout=PIPE)
            return proc.communicate()[0]
        else:
            print ("Using Linux / Unix Ping to " + ip)
            proc = Popen(['ping', ip, '-c', '4'], stdout=PIPE)
            return proc.communicate()[0]
    
    
    os.system('cls' if os.name=='nt' else 'clear')
    print ("Running using threads\n")
    start_time = time.time()
    pool = Pool(pool_size)
    website_names = ["www.google.com","www.facebook.com","www.pinterest.com","www.microsoft.com"]
    result = {}
    for website_name in website_names:
        result[website_name] = pool.apply_async(do_ping, args=(website_name,))
    pool.close()
    pool.join()
    print ("\n--- Execution took {} seconds ---".format((time.time() - start_time)))
    
    # Now we do the same without threading, just to compare time
    print ("\nRunning NOT using threads\n")
    start_time = time.time()
    for website_name in website_names:
        do_ping(website_name)
    print ("\n--- Execution took {} seconds ---".format((time.time() - start_time)))
    
    # Here's one way to print the final output from the threads
    output = {}
    for key, value in result.items():
        output[key] = value.get()
    print ("\nOutput aggregated in a Dictionary:")
    print (output)
    print ("\n")
    
    print ("\nPretty printed output: ")
    for key, value in output.items():
        print (key + "\n")
        print (value)
    
    0 讨论(0)
  • 2020-11-21 05:09
    import threading
    import requests
    
    def send():
        r = requests.get('https://www.stackoverflow.com')
    
    threads = []
    t = threading.Thread(target=send)  # pass the function itself; send() would run it immediately
    threads.append(t)
    t.start()
    t.join()  # wait for the request to complete
    
    0 讨论(0)
  • 2020-11-21 05:10

    For me, the perfect example of threading is monitoring asynchronous events. Look at this code.

    # thread_test.py
    import threading
    import time
    
    class Monitor(threading.Thread):
        def __init__(self, mon):
            threading.Thread.__init__(self)
            self.mon = mon
    
        def run(self):
            while True:
                if self.mon[0] == 2:
                    print("Mon = 2")
                    self.mon[0] = 3
    

    You can play with this code by opening an IPython session and doing something like:

    >>> from thread_test import Monitor
    >>> a = [0]
    >>> mon = Monitor(a)
    >>> mon.start()
    >>> a[0] = 2
    Mon = 2
    >>> a[0] = 2
    Mon = 2
    

    Wait a few minutes, and then:

    >>> a[0] = 2
    Mon = 2
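
    One caveat: the run loop above busy-waits and will peg a CPU core. A gentler variant (my tweak, not part of the original answer) sleeps briefly on each pass:

    import threading
    import time

    class Monitor(threading.Thread):
        def __init__(self, mon):
            threading.Thread.__init__(self)
            self.mon = mon
            self.daemon = True  # don't block interpreter exit

        def run(self):
            while True:
                if self.mon[0] == 2:
                    print("Mon = 2")
                    self.mon[0] = 3
                time.sleep(0.01)  # yield the CPU instead of spinning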
    
  • 2020-11-21 05:10

    None of the previous solutions actually used multiple cores on my GNU/Linux server (where I don't have administrator rights). They just ran on a single core.

    I used the lower-level os.fork interface to spawn multiple processes. This is the code that worked for me:

    import os
    
    values = ['different', 'values', 'for', 'threads']
    
    for i in range(len(values)):
        pid = os.fork()
        if pid == 0:
            # Child process: do the work, then exit so the child
            # doesn't continue the loop and fork further children.
            my_function(values[i])
            os._exit(0)
    
    # Parent process: wait for all children to finish
    for _ in values:
        os.wait()
    