As others have said, multiprocessing can only transfer Python objects to worker processes that can be pickled. If you cannot reorganize your code as described by unutbu, you can use dill's extended pickling/unpickling capabilities for transferring data (especially code data), as I show below.

This solution requires only the installation of dill and no other libraries such as pathos:
import os
from multiprocessing import Pool

import dill


def run_dill_encoded(payload):
    # Unpickle the (function, args) tuple with dill inside the worker and call it.
    fun, args = dill.loads(payload)
    return fun(*args)


def apply_async(pool, fun, args):
    # Pickle the callable and its arguments with dill; only the plain
    # module-level wrapper function is passed through the stock pickle machinery.
    payload = dill.dumps((fun, args))
    return pool.apply_async(run_dill_encoded, (payload,))


if __name__ == "__main__":
    pool = Pool(processes=5)

    # async execution of a lambda
    jobs = []
    for i in range(10):
        job = apply_async(pool, lambda a, b: (a, b, a * b), (i, i + 1))
        jobs.append(job)

    for job in jobs:
        print(job.get())
    print()

    # async execution of a static method
    class O(object):

        @staticmethod
        def calc():
            return os.getpid()

    jobs = []
    for i in range(10):
        job = apply_async(pool, O.calc, ())
        jobs.append(job)

    for job in jobs:
        print(job.get())
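
For reference, here is a minimal sketch (assuming only that dill is installed) of the difference that makes the wrapper above work: the standard pickle module refuses a lambda, while dill serializes the code object and gives you back an equivalent callable:

import pickle

import dill

f = lambda x: x * 2

try:
    pickle.dumps(f)                # standard pickle cannot handle lambdas
except Exception as exc:
    print("pickle failed:", exc)

g = dill.loads(dill.dumps(f))      # dill serializes the code object as well
print(g(21))                       # -> 42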