I am trying to split the following code to allow for multiprocessing in Python, and it is becoming a frustrating task for me. I am new to multiprocessing, and have read the documentation and as many samples as I could find, but I still have not found a solution that uses all CPU cores at once.
I would like to split the iterables into quarters and compute the test in parallel.
My single thread example:
import itertools as it
import numpy as np

wmod = np.array([[0,1,2],[3,4,5],[6,7,3]])
pmod = np.array([[0,1,2],[3,4,5],[6,7,3]])

plines1 = it.product(wmod[0],wmod[1],wmod[2])
plines2 = it.product(pmod[0],pmod[1],pmod[2])

check = .915
result = []
for count, (A,B) in enumerate(zip(plines1,plines2)):
    test = (sum(B)+10)/(sum(A)+12)
    if test > check:
        result = np.append(result,[A,B])
print('results: ',result)
I realize this is a very small example of a pair of 3x3 matrices, but I would like to apply it to a pair of matrices that are larger, and take about an hour to compute. I appreciate any advice given.
I would suggest using queues to feed your iterables to the workers. Something like this:
import multiprocessing as mp
import numpy as np
import itertools as it

def worker(in_queue, out_queue):
    check = 0.915
    # consume tasks until the 'STOP' sentinel arrives
    for a in iter(in_queue.get, 'STOP'):
        A = a[0]
        B = a[1]
        test = (sum(B)+10)/(sum(A)+12)
        if test > check:
            out_queue.put([A,B])
        else:
            out_queue.put('')

if __name__ == "__main__":
    wmod = np.array([[0,1,2],[3,4,5],[6,7,3]])
    pmod = np.array([[0,1,2],[3,4,5],[6,7,3]])

    plines1 = it.product(wmod[0],wmod[1],wmod[2])
    plines2 = it.product(pmod[0],pmod[1],pmod[2])

    # determine the length of your iterator: 3*3*3 = 27 pairs here
    counts = 27

    # set up the task iterator (don't name it `it`, that would
    # shadow the itertools alias)
    tasks = zip(plines1, plines2)

    in_queue = mp.Queue()
    out_queue = mp.Queue()

    # set up workers
    numProc = 2
    process = [mp.Process(target=worker,
                          args=(in_queue, out_queue), daemon=True)
               for x in range(numProc)]

    # run processes
    for p in process:
        p.start()

    results = []
    control = True

    # fill queue and get data:
    # the loop fills in_queue until a new element is available in the
    # output; put() blocks if no slot is available in the in_queue
    for idx in range(counts):
        while out_queue.empty() and control:
            # fill the queue
            try:
                in_queue.put(next(tasks), block=True)
            except StopIteration:
                # iterator exhausted: signal each process to stop
                for p in process:
                    print('stopping')
                    in_queue.put('STOP')
                control = False
                break
        results.append(out_queue.get(timeout=10))

    # wait for processes to finish
    for p in process:
        p.join()

    print(results)
    print('finished')
Note, however, that you first have to determine how long your task list will be, so that `counts` matches the number of results you expect to collect.
Source: https://stackoverflow.com/questions/42604921/multiprocessing-an-iterable-in-python