In Python under Windows: I want to run some code in a separate process, and I don't want the parent waiting for it to end. Tried this:
from multiprocessing import Process
Here is a dirty hack if you must make it work with Process. Basically you just need to override join so that when it does get called as the script exits (multiprocessing's exit handler joins every non-daemon child), it does nothing rather than block.
from multiprocessing import Process
import time
import os

def info(title):
    print(title)
    print('module name:', __name__)
    print('parent process:', os.getppid())
    print('process id:', os.getpid())

def f(name):
    info('function f')
    time.sleep(3)
    print('hello', name)

class EverLastingProcess(Process):
    def join(self, *args, **kwargs):
        pass   # no-op: never block waiting for this child

    def __del__(self):
        pass   # no-op: don't clean the child up either

if __name__ == '__main__':
    info('main line')
    p = EverLastingProcess(target=f, args=('bob',), daemon=False)
    p.start()
Under Linux you could fork, but this won't work on Windows. I think the easiest way is to run a new Python process, by putting your count_sheeps in a separate file and calling Popen('python count_sheeps.py').
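For example (a minimal sketch, not from the original answer; the sleep loop stands in for whatever count_sheeps.py actually does), the parent returns from Popen immediately, since it only blocks if you call wait() or communicate():

# count_sheeps.py -- hypothetical worker script
from time import sleep

for sheep in range(5):
    sleep(1)                      # pretend each sheep takes a second to count
print("Done counting sheeps.")

# parent script -- starts the worker and carries on without waiting
from subprocess import Popen

Popen(['python', 'count_sheeps.py'])  # returns immediately, no wait() call
print("Parent can exit now; the counting keeps running in its own process.")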
You can declare the process as a daemon with p.daemon = True. As http://docs.python.org/2/library/threading.html#thread-objects says: "The significance of this flag is that the entire Python program exits when only daemon threads are left."
from multiprocessing import Process
from time import sleep

def count_sheeps(number):
    """Count all them sheeps."""
    for sheep in range(number):
        sleep(1)

if __name__ == "__main__":
    p = Process(target=count_sheeps, args=(5,))
    p.daemon = True
    p.start()
    print("Let's just forget about it and quit here and now.")
    exit()
Use the subprocess module, as the older subprocess-control methods (os.system, os.spawn*, os.popen*, popen2.*, commands.*) are being deprecated:
from subprocess import Popen
Popen(["foo.exe", "arg1", "arg2", "arg3"])
See the Python documentation, especially the P_NOWAIT example.
You will have to start a new Python interpreter in the subprocess, so "foo.exe" above will likely be "python.exe".
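A minimal sketch of that idea (assuming the child code lives in a hypothetical worker.py): sys.executable gives the path of the current interpreter, and grabbing .pid mirrors the documentation's P_NOWAIT replacement:

import sys
from subprocess import Popen

# Start a new interpreter running worker.py; Popen returns at once,
# so this is the subprocess equivalent of os.spawnl(os.P_NOWAIT, ...).
pid = Popen([sys.executable, "worker.py"]).pid
print("Started worker with pid", pid, "- not waiting for it.")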
EDIT:
Having just reviewed the multiprocessing module documentation:
join_thread(): Join the background thread. This can only be used after close() has been called. It blocks until the background thread exits, ensuring that all data in the buffer has been flushed to the pipe. By default, if a process is not the creator of the queue then on exit it will attempt to join the queue's background thread. The process can call cancel_join_thread() to make join_thread() do nothing.
cancel_join_thread(): Prevent join_thread() from blocking. In particular, this prevents the background thread from being joined automatically when the process exits – see join_thread().
It looks like you should be able to call cancel_join_thread() to get the behaviour you desire. I've never used this method (and was unaware of its existence until a minute ago!), so be sure to let us know if it works for you.
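A minimal sketch of how that call is used (not from the original answer; the producer function and item count are made up). The method lives on the multiprocessing.Queue, and cancelling the join means any data still buffered by the feeder thread may be lost:

from multiprocessing import Process, Queue

def producer(q):
    # Hypothetical worker: push some items, then exit without waiting for the
    # queue's background (feeder) thread to flush them to the pipe.
    for i in range(1000):
        q.put(i)
    q.cancel_join_thread()   # don't block on exit joining the feeder thread

if __name__ == '__main__':
    q = Queue()
    p = Process(target=producer, args=(q,))
    p.start()
    p.join()
    # Because the producer cancelled the join, some of its items may never
    # have made it into the queue.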
You can always start a new thread and set myNewThread.daemon = True before invoking its start() method.
That thread will continue to run when the main process exits.
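A minimal sketch of the syntax (the background_work function is just a placeholder):

import threading
import time

def background_work():
    time.sleep(3)
    print("background work finished")

myNewThread = threading.Thread(target=background_work)
myNewThread.daemon = True   # set the flag before calling start()
myNewThread.start()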