If you are relying on an implementation of Python that has a Global Interpreter Lock (i.e. CPython) and writing multithreaded code, do you really need locks at all?
You still need to use locks: your code can be interrupted at any time to execute another thread, and that can cause data inconsistencies. The problem with the GIL is that it prevents Python code from using multiple cores (or multiple processors, if available) at the same time.
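A small sketch of that second point (assuming CPython): two CPU-bound threads take roughly as long as doing the same work serially, because the GIL lets only one of them execute bytecode at a time.

import threading
import time

def burn():
    # Pure-Python, CPU-bound work; it never releases the GIL voluntarily.
    n = 0
    for _ in range(10000000):
        n += 1

start = time.time()
threads = [threading.Thread(target=burn) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# On CPython this is close to double the single-thread time, despite two threads.
print(time.time() - start)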
Think of it this way:
On a single-processor computer, multithreading happens by suspending one thread and starting another fast enough that they appear to be running at the same time. This is like Python with the GIL: only one thread is ever actually running.
The problem is that a thread can be suspended anywhere. For example, if I want to compute b = (a + b) * 3, this might produce instructions something like these:
1 a += b
2 a *= 3
3 b = a
Now, let's say that code is running in a thread, the thread is suspended after either line 1 or line 2, and then another thread kicks in and runs:
b = 5
Then when the first thread resumes, b is overwritten with a value computed from the old data, which is probably not what was expected.
So you can see that even though they're not ACTUALLY running at the same time, you still need locking.
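If you want to see this concretely, CPython's dis module shows the individual bytecodes behind a statement like that, and a thread switch can happen between any two of them. A minimal sketch:

import dis

def compute(a, b):
    b = (a + b) * 3
    return b

# Each LOAD/ADD/MULTIPLY/STORE printed below is a separate bytecode, and the
# GIL can be released between any two of them, letting another thread run.
dis.dis(compute)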
This post describes the GIL at a fairly high level:
Of particular interest are these quotes:
Every ten instructions (this default can be changed), the core releases the GIL for the current thread. At that point, the OS chooses a thread from all the threads competing for the lock (possibly choosing the same thread that just released the GIL – you don't have any control over which thread gets chosen); that thread acquires the GIL and then runs for another ten bytecodes.
and
Note carefully that the GIL only restricts pure Python code. Extensions (external Python libraries usually written in C) can be written that release the lock, which then allows the Python interpreter to run separately from the extension until the extension reacquires the lock.
It sounds like the GIL just reduces the number of points where a context switch can occur, and makes multi-core/multi-processor systems behave as a single core with respect to each Python interpreter instance. So yes, you still need to use synchronization mechanisms.
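As a side note, the "every ten instructions" counter in the first quote is the old behaviour; since CPython 3.2 the interpreter uses a time-based switch interval instead. A quick sketch of inspecting and tuning it (the 0.0005 value is just an illustrative choice):

import sys

print(sys.getswitchinterval())  # defaults to 0.005 seconds in CPython 3.2+
sys.setswitchinterval(0.0005)   # shorter interval means more frequent switches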
A small update to Will Harris's example:
class Withdraw(threading.Thread):
    def run(self):
        global shared_balance
        for _ in range(1000000):
            if shared_balance >= 100:
                balance = shared_balance
                balance -= 100
                shared_balance = balance
With a value check added in the withdraw, I don't see negative balances anymore and the updates seem consistent. My question is:
If the GIL ensures that only one thread can execute at any given moment, where would a stale value come from? And if there are no stale values, why do we need locks? (Assuming we are only talking about pure Python code.)
If I understand correctly, the condition check above wouldn't work in a truly concurrent environment: when more than one thread executes at the same time, stale values can be created, the shared state becomes inconsistent, and you really need a lock. But if Python really only allows one thread to run at any given time (time-sliced threading), then it shouldn't be possible for a stale value to exist, right?
Adding to the discussion:
Because the GIL exists, some operations are atomic in Python and do not need a lock.
http://www.python.org/doc/faq/library/#what-kinds-of-global-value-mutation-are-thread-safe
As stated by the other answers, however, you still need to use locks whenever the application logic requires them (such as in a Producer/Consumer problem).
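For instance, here's a rough sketch contrasting an operation the FAQ lists as thread-safe with a read-modify-write that is not (the function names are my own):

import threading

items = []
counter = 0

def appender():
    # list.append is one of the operations the FAQ lists as thread-safe
    # under the GIL, so it needs no explicit lock.
    for i in range(100000):
        items.append(i)

def incrementer():
    # counter += 1 is a load, an add, and a store: a thread switch between
    # those steps can lose updates, so this does need a lock.
    global counter
    for _ in range(100000):
        counter += 1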
You will still need locks if you share state between threads. The GIL only protects the interpreter internally. You can still have inconsistent updates in your own code.
For example:
#!/usr/bin/env python
import threading

shared_balance = 0

class Deposit(threading.Thread):
    def run(self):
        global shared_balance
        for _ in range(1000000):
            # read-modify-write: a thread switch can happen between
            # any of these three statements
            balance = shared_balance
            balance += 100
            shared_balance = balance

class Withdraw(threading.Thread):
    def run(self):
        global shared_balance
        for _ in range(1000000):
            balance = shared_balance
            balance -= 100
            shared_balance = balance

threads = [Deposit(), Withdraw()]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()
print(shared_balance)
Here, your code can be interrupted between reading the shared state (balance = shared_balance) and writing the changed result back (shared_balance = balance), causing a lost update. The result is an essentially random value for the shared state.
To make the updates consistent, the run methods would need to lock the shared state around the read-modify-write sections (inside the loops), or have some way to detect when the shared state had changed since it was read.
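A minimal sketch of that locking fix, guarding each read-modify-write with a threading.Lock (the balance_lock name is my own):

import threading

shared_balance = 0
balance_lock = threading.Lock()

class Deposit(threading.Thread):
    def run(self):
        global shared_balance
        for _ in range(1000000):
            with balance_lock:  # the read-modify-write is now atomic
                shared_balance += 100

class Withdraw(threading.Thread):
    def run(self):
        global shared_balance
        for _ in range(1000000):
            with balance_lock:
                shared_balance -= 100

threads = [Deposit(), Withdraw()]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()
print(shared_balance)  # always 0 with the lock in place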