I have a Python program that runs a Monte Carlo simulation to answer probability questions. I am using multiprocessing; here it is in pseudocode:
Normal global variables are not shared between processes the way they are shared between threads. You need to use a process-aware data structure. For your use-case, a multiprocessing.Value should work fine:
import multiprocessing

def runmycode(result_queue, iterations):
    print("Requested...")
    while True:  # This is an infinite loop, so I assume you want something else here
        with iterations.get_lock():  # Need a lock because incrementing isn't atomic
            iterations.value += 1
        if "result found (for example)":
            result_queue.put("result!")
    print("Done")
if __name__ == "__main__":
    processes = []
    result_queue = multiprocessing.Queue()
    iterations = multiprocessing.Value('i', 0)
    for n in range(4):  # start 4 processes
        process = multiprocessing.Process(target=runmycode, args=(result_queue, iterations))
        process.start()
        processes.append(process)
    print("Waiting for result...")
    result = result_queue.get()  # block until a child reports a result
    for process in processes:  # then kill them all off
        process.terminate()
    print("Got result: {}".format(result))
    print("Total iterations {}".format(iterations.value))
A few notes:

- I pass the Queue and the Value explicitly to the children instead of using global variables, to keep the code compatible with Windows, which can't share read/write global variables between parent and children.
- I wrapped the main code in an if __name__ == "__main__": guard, again to help with Windows compatibility, and just as a general best practice.