I have some functions that interactively load Python modules using __import__.
I recently stumbled upon an article about an "import lock" in Python,
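For context, the kind of dynamic import in question might look like the following sketch (the module name is just an example):

```python
def load(name):
    # __import__ is the low-level hook behind the import statement;
    # for a plain (non-dotted) name it returns the module directly.
    return __import__(name)

math_mod = load("math")
print(math_mod.sqrt(16.0))  # → 4.0
```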
Update: Since Python 3.3, import locks are per-module instead of global, and imp is deprecated in favor of importlib. More information in the changelog and this issue ticket.
The original answer below predates Python 3.3.
Normal imports are thread-safe because they acquire an import lock prior to execution and release it once the import is done. If you add your own custom imports using the hooks available, be sure to add this locking scheme to them. The locking facilities can be accessed through the imp module (imp.lock_held() / imp.acquire_lock() / imp.release_lock()). Edit: this is deprecated since Python 3.3; there is no need to manually handle the lock anymore.
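On Python 3.3+ there is nothing to handle manually; as a sketch, several threads importing the same module concurrently all end up with the single cached module object (importlib.import_module is the programmatic equivalent of the import statement):

```python
import importlib
import sys
import threading

results = []

def worker():
    # import_module consults sys.modules and takes the per-module
    # import lock internally (Python 3.3+), so this is safe to call
    # from many threads at once.
    results.append(importlib.import_module("json"))

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every thread observed the exact same module object.
assert all(m is sys.modules["json"] for m in results)
```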
Using this import lock won't create any deadlocks or dependency errors aside from the circular dependencies that are already known (module a imports module b, which imports module a). Edit: Python 3.3 changed to a per-module locking mechanism to prevent the deadlocks caused by circular imports.
There are multiple ways to create new processes or threads, for example fork and clone (assuming a Linux environment). Each yields a different memory behavior in the new process. By default, a fork copies most memory segments (Data (often copy-on-write), Stack, Code, Heap), effectively not sharing them between the child and its parent. A clone (often called a thread; this is what Python uses for threading) shares all memory segments with its parent except the stack. The import mechanism in Python uses the global namespace, which is not placed on the stack, and thus lives in a segment shared between threads. This means that all memory modifications (except on the stack) performed by an import in one thread will be visible to all the other related threads and the parent. If the imported module is Python-only, it is thread-safe by design. If an imported module uses non-Python libraries, make sure those are thread-safe, otherwise they will cause mayhem in your multithreaded Python code.
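A small sketch of that sharing: a module imported inside one thread becomes visible to every other thread through the interpreter-wide sys.modules cache (json is just an arbitrary stdlib module here):

```python
import sys
import threading

def import_in_thread():
    import json  # executed entirely inside the worker thread

t = threading.Thread(target=import_in_thread)
t.start()
t.join()

# The import mutated shared interpreter state, so the module is now
# visible from the main thread as well.
assert "json" in sys.modules
import json  # hits the cache; the module body is not run again
```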
By the way, threaded programs in Python suffer from the GIL, which won't allow much performance gain unless your program is I/O-bound or relies on C or other external thread-safe libraries (since they should release the GIL before executing). Running the same imported Python function in two threads won't execute it concurrently because of this GIL. Note that this is only a limitation of CPython; other implementations of Python may behave differently.
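A minimal illustration of that limitation, using nothing beyond the standard library: two pure-Python CPU-bound threads produce correct results, but on CPython they take turns holding the GIL rather than running in parallel, so don't expect a wall-clock speedup over running them serially:

```python
import threading

def count(n, out, i):
    # Pure-Python CPU-bound loop: it holds the GIL while running,
    # so two such threads interleave instead of running in parallel.
    total = 0
    for _ in range(n):
        total += 1
    out[i] = total

out = [0, 0]
t1 = threading.Thread(target=count, args=(100_000, out, 0))
t2 = threading.Thread(target=count, args=(100_000, out, 1))
t1.start(); t2.start()
t1.join(); t2.join()
assert out == [100_000, 100_000]  # results are correct, just not parallel
```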
To answer your edit: imported modules are all cached by Python. If a module is already in the cache, it won't be run again and the import statement (or function) will return right away. You don't have to implement the cache lookup in sys.modules yourself; Python does that for you and won't acquire the import lock, aside from holding the GIL for the sys.modules lookup.
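That cache behavior can be sketched like this (string is just an arbitrary stdlib module; importlib.reload is the explicit way to force re-execution):

```python
import importlib

m1 = importlib.import_module("string")
m2 = importlib.import_module("string")  # cache hit in sys.modules
assert m1 is m2  # the module body ran only once

# Re-running the module body requires an explicit reload, which
# re-executes it in place and returns the same module object.
m3 = importlib.reload(m1)
assert m3 is m1
```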
To answer your second edit: I prefer maintaining simpler code over trying to optimize calls to the libraries I use (in this case, the standard library). The rationale is that the time required to perform something is usually far more important than the time required to import the module that does it. Furthermore, the time required to maintain this kind of code throughout the project is far higher than the time it takes to execute. It all boils down to: "programmer time is more valuable than CPU time".
I could not find an answer to this in the official documentation, but it appears that in some versions of CPython 3.x, __import__ calls are not thread-safe and can cause a deadlock. See: https://bugs.python.org/issue38884.