Question
I am using Python 3.6.7 on Ubuntu 18.04.
After running the following script, in which every process has its own shared lock:
from multiprocessing import Process, Manager

def foo(l1):
    with l1:
        print('lol')

if __name__ == '__main__':
    processes = []
    with Manager() as manager:
        for cluster in range(10):
            lock1 = manager.Lock()
            calc_args = (lock1, )
            processes.append(Process(target=foo,
                                     args=calc_args))
        for p in processes:
            p.start()
        for p in processes:
            p.join()
I get a strange exception:
Process Process-2:
Traceback (most recent call last):
  File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "temp.py", line 5, in foo
    with l1:
  File "/usr/lib/python3.6/multiprocessing/managers.py", line 991, in __enter__
    return self._callmethod('acquire')
  File "/usr/lib/python3.6/multiprocessing/managers.py", line 772, in _callmethod
    raise convert_to_error(kind, result)
multiprocessing.managers.RemoteError:
---------------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python3.6/multiprocessing/managers.py", line 235, in serve_client
    self.id_to_local_proxy_obj[ident]
KeyError: '7f49974624e0'
Any idea or suggestion on how to fix this problem?
Thank you
Answer 1:
For some reason, you have to keep your own reference to whatever you get from the SyncManager; see below:
from multiprocessing import Manager, Process, current_process
from multiprocessing.managers import AcquirerProxy, SyncManager

def foo(lock: AcquirerProxy):
    lock.acquire()
    print('pid={}'.format(current_process().pid))

if __name__ == '__main__':
    manager: SyncManager = Manager()
    locks = []
    for i in range(3):
        # Always keep the reference in some variable
        locks.append(manager.Lock())
    processes = []
    for i in range(3):
        p = Process(target=foo, args=[locks[i]])
        processes.append(p)
    # If you clear the list and lose the references, it won't work:
    # locks.clear()
    for p in processes:
        p.start()
    for p in processes:
        p.join()
Sorry for being super late to the party, but I hope this helps!
---------- Original Response Below: ----------
Hey, this problem is still there in Python 3.8.2.
I managed to reproduce the error:
from multiprocessing import Process, Manager, current_process
from multiprocessing.managers import AcquirerProxy

def foo(lock: AcquirerProxy):
    lock.acquire()
    print('pid={}'.format(current_process().pid))

if __name__ == '__main__':
    manager = Manager()
    process1 = Process(target=foo, args=[manager.Lock()])
    process1.start()
    process1.join()
But if I pull manager.Lock() out into its own variable, it works fine!
from multiprocessing import Process, Manager, current_process
from multiprocessing.managers import AcquirerProxy

def foo(lock: AcquirerProxy):
    lock.acquire()
    print('pid={}'.format(current_process().pid))

if __name__ == '__main__':
    manager = Manager()
    lock1 = manager.Lock()  # Here
    process1 = Process(target=foo, args=[lock1])
    process1.start()
    process1.join()
I'm really confused as to why pulling out the lock makes a difference.
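Applying this workaround to the question's original script means keeping each lock proxy in a list that the parent holds onto for the lifetime of the child processes. A minimal sketch (the shared results list is a hypothetical addition, only there to make the outcome observable):

```python
from multiprocessing import Process, Manager

def foo(lock, results):
    # Same body as the question's foo, plus recording that we ran
    with lock:
        results.append('lol')

if __name__ == '__main__':
    with Manager() as manager:
        results = manager.list()  # hypothetical: only to observe the output
        locks = []                # keep every lock proxy referenced here
        processes = []
        for cluster in range(10):
            lock1 = manager.Lock()
            locks.append(lock1)   # without this, the manager may drop the lock
            processes.append(Process(target=foo, args=(lock1, results)))
        for p in processes:
            p.start()
        for p in processes:
            p.join()
        print(len(results))  # 10
```

Each iteration rebinds lock1, so without the locks list the parent's only references to earlier proxies live inside the not-yet-started Process objects, which seems to be exactly the situation that triggers the KeyError.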
Source: https://stackoverflow.com/questions/57299893/why-python-throws-multiprocessing-managers-remoteerror-for-shared-lock