I want to run a CPU-intensive program in Python across multiple cores and am trying to figure out how to write C extensions to do this. Are there any code samples or tutorials?
Have you considered using one of the Python MPI libraries like mpi4py? Although MPI is normally used to distribute work across a cluster, it works quite well on a single multicore machine. The downside is that you'll have to refactor your code to use MPI's communication calls (which may be easy).
Take a look at multiprocessing. It's an often overlooked fact that operating systems prefer processes that don't globally share data and don't cram loads of threads into a single process.
If you still insist that your CPU-intensive behaviour requires threading, take a look at the documentation for working with the GIL in C. It's quite informative.
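For the multiprocessing route, a minimal sketch looks like this (the function `cpu_bound` is just a stand-in for your real workload):

```python
# Farm a CPU-bound function out to a pool of worker processes.
# Each worker is a separate interpreter, so the GIL is not a bottleneck.
from multiprocessing import Pool

def cpu_bound(n):
    # Stand-in for real work: sum of squares below n.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool() as pool:  # defaults to one worker per CPU core
        results = pool.map(cpu_bound, [10_000, 20_000, 30_000])
    print(results)
```

The `if __name__ == "__main__"` guard matters: on platforms that spawn rather than fork, workers re-import your module, and the guard stops them from recursively starting pools of their own.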
multiprocessing is easy. If that's not fast enough, your question is complicated.
This is a good use of a C extension. The keyword you should search for is Py_BEGIN_ALLOW_THREADS: http://docs.python.org/c-api/init.html#thread-state-and-the-global-interpreter-lock
P.S. I mean, if your processing is already in C, like image processing, then releasing the lock in the C extension is good. If your processing code is mainly in Python, other people's suggestion to use multiprocessing is better. It is usually not justified to rewrite the code in C for background processing.
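To make the pattern concrete, here is a sketch of a C extension function that drops the GIL around its number-crunching loop. The module name `fastmath` and function `sum_squares` are made up for illustration; the key rule is that no Python API may be touched between the two macros:

```c
/* Sketch: a CPU-bound extension function that releases the GIL while
 * it crunches, so other Python threads can run on other cores. */
#include <Python.h>

static PyObject *
sum_squares(PyObject *self, PyObject *args)
{
    long n, i;
    double total = 0.0;

    if (!PyArg_ParseTuple(args, "l", &n))
        return NULL;

    Py_BEGIN_ALLOW_THREADS      /* GIL released: touch no PyObject here */
    for (i = 0; i < n; i++)
        total += (double)i * (double)i;
    Py_END_ALLOW_THREADS        /* GIL reacquired */

    return PyFloat_FromDouble(total);
}

static PyMethodDef Methods[] = {
    {"sum_squares", sum_squares, METH_VARARGS,
     "Sum of squares below n, computed with the GIL released."},
    {NULL, NULL, 0, NULL}
};

static struct PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT, "fastmath", NULL, -1, Methods
};

PyMODINIT_FUNC
PyInit_fastmath(void)
{
    return PyModule_Create(&moduledef);
}
```

Note this only helps if callers actually use Python threads around it; the C code itself still runs on one core unless you also spawn threads in C.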
You can already break a Python program into multiple processes. The OS will already allocate your processes across all the cores.
Do this.
python part1.py | python part2.py | python part3.py | ... etc.
The OS will ensure that each part uses as many resources as possible. You can trivially pass information along this pipeline by using cPickle on sys.stdin and sys.stdout.
Without too much work, this can often lead to dramatic speedups.
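One stage of such a pipeline can be sketched as below (on Python 3, `pickle` replaces the old `cPickle`; the helper names here are my own):

```python
# One stage of a "python part1.py | python part2.py | ..." pipeline.
# Reads pickled objects from stdin, writes pickled results to stdout.
import pickle
import sys

def read_objects(stream):
    """Yield pickled objects from a binary stream until EOF."""
    while True:
        try:
            yield pickle.load(stream)
        except EOFError:
            return

def run_stage(transform, stdin=None, stdout=None):
    """Apply transform to every object flowing through this stage."""
    stdin = stdin if stdin is not None else sys.stdin.buffer
    stdout = stdout if stdout is not None else sys.stdout.buffer
    for obj in read_objects(stdin):
        pickle.dump(transform(obj), stdout)
    stdout.flush()

if __name__ == "__main__":
    # Example stage: square every number it receives.
    run_stage(lambda x: x * x)
```

Each stage is a shared-nothing process, so the OS schedules the stages across cores for free; the only coupling is the pickled byte stream on the pipe.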
Yes -- to the haterz -- it's possible to construct an algorithm so tortured that it may not be sped up much. However, this often yields huge benefits for minimal work.
And.
The restructuring for this purpose will exactly match the restructuring required to maximize thread concurrency. So. Start with shared-nothing process parallelism until you can prove that sharing more data would help, then move to the more complex shared-everything thread parallelism.