Why do we need locks for threads, if we have GIL?

心在旅途 2021-01-31 15:53

I believe it is a stupid question, but I still can't find an answer to it. Actually, it's better to split it into two questions:

1) Am I right that we could have a lot of threads, but because of the GIL only one of them is executing at any given moment?

2) If so, why do we need locks at all?

4 Answers
  • 2021-01-31 16:27

    The GIL does not protect you from concurrent modification of the internal state of the objects that you are accessing from different threads, which means that you can still mess things up if you don't take measures.

    So, even though two threads may never be running at the exact same time, they can still take turns manipulating the internal state of an object (one at a time, intermittently), and if you don't prevent that from happening (with some locking mechanism) your code will eventually fail.
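
    For example, a check-then-set sequence on a shared dict is not atomic even though each individual dict operation is. A minimal sketch (the names here are illustrative, not from the question):

    import threading

    shared = {}
    lock = threading.Lock()

    def add_if_missing_unsafe(key, value):
        # A thread switch can happen between the check and the assignment,
        # so two threads may both pass the check and overwrite each other.
        if key not in shared:
            shared[key] = value

    def add_if_missing_safe(key, value):
        # Holding the lock makes the whole check-then-set sequence atomic
        # with respect to other threads that also take the lock.
        with lock:
            if key not in shared:
                shared[key] = value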

    Regards.

  • 2021-01-31 16:28

    At any moment, yes, only one thread is executing Python code (other threads may be executing some IO, NumPy, whatever). That is mostly true. However, this is trivially true on any single-processor system, and yet people still need locks on single-processor systems.

    Take a look at the following code:

    queue = []  # a plain list shared between threads

    def do_work():
        while queue:
            item = queue.pop(0)  # another thread may pop the last item first
            process(item)        # process() stands for whatever work is done per item

    With one thread, everything is fine. With two threads, you might get an exception from queue.pop() because the other thread called queue.pop() on the last item first. So you would need to handle that somehow. Using a lock is a simple solution. You can also use a proper concurrent queue (like in the queue module)--but if you look inside the queue module, you'll find that the Queue object has a threading.Lock() inside it. So either way you are using locks.
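
    A rough sketch of both approaches (the process() stub below stands in for the real per-item work):

    import queue
    import threading

    def process(item):
        pass  # stand-in for the real per-item work

    # Approach 1: protect the plain list with your own lock.
    items = []
    items_lock = threading.Lock()

    def do_work_locked():
        while True:
            with items_lock:
                if not items:
                    return
                item = items.pop(0)  # pop happens under the lock
            process(item)            # work happens outside the lock

    # Approach 2: let queue.Queue do the locking for you.
    work_queue = queue.Queue()

    def do_work_from_queue():
        while True:
            try:
                item = work_queue.get_nowait()
            except queue.Empty:
                return
            process(item)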

    It is a common newbie mistake to write multithreaded code without the necessary locks. You look at code and think, "this will work just fine" and then find out many hours later that something truly bizarre has happened because threads weren't synchronized properly.

    Or in short, there are many places in a multithreaded program where you need to prevent another thread from modifying a structure until you're done applying some changes. This allows you to maintain the invariants on your data, and if you can't maintain invariants, then it's basically impossible to write code that is correct.

    Or put in the shortest way possible, "You don't need locks if you don't care if your code is correct."

  • 2021-01-31 16:44

    The GIL prevents simultaneous execution of multiple threads, but not in all situations.

    The GIL is temporarily released during I/O operations executed by threads. That means that while one thread is blocked waiting for I/O, other threads can run (and several threads can be waiting on I/O at the same time), so their operations on shared data can still interleave. That's one reason you still need locks.
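
    A quick way to see this (a sketch that uses time.sleep() as a stand-in for a blocking I/O call, since it also releases the GIL while waiting):

    import threading
    import time

    def blocking_io():
        # time.sleep() drops the GIL while it waits, just like a blocking
        # socket or file operation would.
        time.sleep(1)

    start = time.time()
    threads = [threading.Thread(target=blocking_io) for _ in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Prints roughly 1 second, not 5: the waits overlapped because each
    # thread released the GIL while it was blocked.
    print('elapsed: {:.2f}s'.format(time.time() - start))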

    I don't know where I found this originally (in a video or some article), so it's hard to look up, but you can investigate further yourself.

    UPDATE:

    The few thumbs down I got signal to me that people think memory is not a good enough reference, and Google is not a good enough database. While I'd disagree with that, let me provide one of the first URLs I looked up (and checked!), so the people who disliked my answer can live happily from now on: https://wiki.python.org/moin/GlobalInterpreterLock

  • 2021-01-31 16:45

    The GIL protects the Python interpreter's internals. That means:

    1. you don't have to worry about something in the interpreter going wrong because of multithreading
    2. most things do not really run in parallel, because Python code is executed sequentially due to the GIL

    But the GIL does not protect your own code. For example, if you have this code:

    self.some_number += 1
    

    That is going to read the value of self.some_number, calculate some_number + 1, and then write the result back to self.some_number.

    If you do that in two threads, the operations (read, add, write) of the two threads may be interleaved, so that the result is wrong.
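
    You can see that the single += line is not one atomic step by disassembling it (a sketch; the exact instruction names differ between Python versions, but there is always a separate load, add and store, and a thread switch can happen between them):

    import dis

    class Counter:
        def __init__(self):
            self.some_number = 0

        def increment(self):
            self.some_number += 1

    # Prints the bytecode of increment(): separate instructions to load
    # self.some_number, add 1 and store the result back.
    dis.dis(Counter.increment)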

    This could be the order of execution:

    1. thread1 reads self.some_number (0)
    2. thread2 reads self.some_number (0)
    3. thread1 calculates some_number+1 (1)
    4. thread2 calculates some_number+1 (1)
    5. thread1 writes 1 to self.some_number
    6. thread2 writes 1 to self.some_number

    You use locks to enforce this order of execution:

    1. thread1 reads self.some_number (0)
    2. thread1 calculates some_number+1 (1)
    3. thread1 writes 1 to self.some_number
    4. thread2 reads self.some_number (1)
    5. thread2 calculates some_number+1 (2)
    6. thread2 writes 2 to self.some_number

    EDIT: Let's complete this answer with some code which shows the explained behaviour:

    import threading
    import time

    total = 0
    lock = threading.Lock()

    def increment_n_times(n):
        # Unsafe: the shared counter is updated without any locking.
        global total
        for i in range(n):
            total += 1  # read, add, write -- not atomic across threads

    def safe_increment_n_times(n):
        # Safe: the lock is held for every single increment.
        global total
        for i in range(n):
            lock.acquire()
            total += 1
            lock.release()

    def increment_in_x_threads(x, func, n):
        # Run func(n) in x threads in parallel and report the result.
        threads = [threading.Thread(target=func, args=(n,)) for i in range(x)]
        global total
        total = 0
        begin = time.time()
        for thread in threads:
            thread.start()
        for thread in threads:
            thread.join()
        print('finished in {}s.\ntotal: {}\nexpected: {}\ndifference: {} ({} %)'
              .format(time.time()-begin, total, n*x, n*x-total, 100-total/n/x*100))

    There are two functions that implement the increment: one uses a lock and the other does not.

    The function increment_in_x_threads runs the given incrementing function in x threads in parallel.

    Now running this with a big enough number of threads makes it almost certain that an error will occur:

    print('unsafe:')
    increment_in_x_threads(70, increment_n_times, 100000)
    
    print('\nwith locks:')
    increment_in_x_threads(70, safe_increment_n_times, 100000)
    

    In my case, it printed:

    unsafe:
    finished in 0.9840562343597412s.
    total: 4654584
    expected: 7000000
    difference: 2345416 (33.505942857142855 %)
    
    with locks:
    finished in 20.564176082611084s.
    total: 7000000
    expected: 7000000
    difference: 0 (0.0 %)
    

    So without locks, there were many errors (33% of increments failed). On the other hand, with locks it was 20 times slower.

    Of course, both numbers are exaggerated because I used 70 threads, but this shows the general idea.
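
    As a side note, the locked version is usually written with the lock as a context manager, which is equivalent to the acquire()/release() pair above but also releases the lock if the body raises an exception:

    def safe_increment_n_times(n):
        global total
        for i in range(n):
            # "with lock:" acquires the lock on entry and always releases it
            # on exit, even if an exception is raised inside the block.
            with lock:
                total += 1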
