Question
I have a class with the following method:
def get_add_new_links(self, max_num_links):
    self.get_links_m2(max_num_links)
    processes = mp.cpu_count()
    pool = mp.Pool(processes=processes)
    func = partial(worker, self)
    with open(os.path.join(self.report_path, "links.txt"), "r") as f:
        reports = pool.map(func, f.readlines())
    pool.close()
    pool.join()
where get_links_m2 is another method that creates the file "links.txt". The worker function is:
def worker(obje, link):
    doc, rep = obje.get_info_m2(link)
    obje.add_new_active(doc, sure_not_exists=True)
    return rep
The method get_info_m2 visits the link and extracts some information. The method add_new_active adds that information to a MongoDB database.
What could be wrong with my code? When I run it I get this error and traceback:
File "controller.py", line 234, in get_add_new_links
reports = pool.map(func, f.readlines()) File "/home/vladimir/anaconda3/lib/python3.5/multiprocessing/pool.py", line
260, in map
return self._map_async(func, iterable, mapstar, chunksize).get() File "/home/vladimir/anaconda3/lib/python3.5/multiprocessing/pool.py",
line 608, in get
raise self._value File "/home/vladimir/anaconda3/lib/python3.5/multiprocessing/pool.py", line
385, in _handle_tasks
put(task) File "/home/vladimir/anaconda3/lib/python3.5/multiprocessing/connection.py",
line 206, in send
self._send_bytes(ForkingPickler.dumps(obj)) File "/home/vladimir/anaconda3/lib/python3.5/multiprocessing/reduction.py",
line 50, in dumps
cls(buf, protocol).dump(obj) TypeError: can't pickle _thread.lock objects
Answer 1:
As stated in the PyMongo docs:
Never do this:
client = pymongo.MongoClient()

# Each child process attempts to copy a global MongoClient
# created in the parent process. Never do this.
def func():
    db = client.mydb
    # Do something with db.

proc = multiprocessing.Process(target=func)
proc.start()
The traceback comes from pickling self: partial(worker, self) makes the pool serialize the whole object, including its MongoClient, and MongoClient instances hold thread locks that cannot be pickled. Instead, a client must be initialized inside the worker function, so that each child process opens its own connection.
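A minimal sketch of that pattern, assuming the scraping and insert logic can be moved into a plain function so that nothing holding a MongoClient is passed to the pool (the URI, database, and collection names below are placeholders, not taken from the question):

import multiprocessing as mp

import pymongo


def worker(link):
    # Each child process creates its own client here, so no MongoClient
    # (and therefore no thread lock) ever has to be pickled by the pool.
    client = pymongo.MongoClient("mongodb://localhost:27017/")  # placeholder URI
    collection = client.mydb.active                             # placeholder db/collection
    doc = {"link": link.strip()}  # stand-in for the real get_info_m2 result
    collection.insert_one(doc)    # stand-in for add_new_active
    client.close()
    return doc


if __name__ == "__main__":
    pool = mp.Pool(processes=mp.cpu_count())
    with open("links.txt") as f:
        reports = pool.map(worker, f.readlines())
    pool.close()
    pool.join()

If the worker genuinely needs shared state, another option is to create the client once per child process via the pool's initializer argument instead of storing it on the object that gets pickled.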
Source: https://stackoverflow.com/questions/41071563/python-multiprocessing-cant-pickle-thread-lock-pymongo