python multiprocessing pool terminate

Backend · Unresolved · 4 answers · 1678 views
攒了一身酷 asked 2020-12-23 23:14

I'm working on a renderfarm, and I need my clients to be able to launch multiple instances of a renderer without blocking, so the client can receive new commands. I've got that working, but I'm having trouble terminating the created processes.

4 Answers
  • 2020-12-23 23:39

    Found the answer to my own question. The primary problem was that I was calling a third-party application rather than a function. When I call the subprocess [using either call() or Popen()], it creates a new instance of Python whose only purpose is to call the new application. However, when Python exits, it kills this new instance of Python and leaves the application running.

    The solution is to do it the hard way: find the pid of the Python process that was created, get the children of that pid, and kill them. This code is specific to OS X; simpler code that doesn't rely on grep is possible on Linux (a sketch follows the snippet below).

    import os
    import signal
    from subprocess import Popen, PIPE

    # 'pool' holds the worker Process objects created earlier
    for process in pool:
        processId = process.pid
        print "attempting to terminate " + str(processId)
        # ask ps for pid/ppid pairs and grep for our pid; the first field of
        # the last matching line is assumed to be the pid of the child whose
        # parent is processId (fragile, but works on OS X)
        command = " ps -o pid,ppid -ax | grep " + str(processId) + " | cut -f 1 -d \" \" | tail -1"
        ps_command = Popen(command, shell=True, stdout=PIPE)
        ps_output = ps_command.stdout.read()
        retcode = ps_command.wait()
        assert retcode == 0, "ps command returned %d" % retcode
        print "child process pid: " + str(ps_output)
        # kill the child (the real application) first, then the Python wrapper
        os.kill(int(ps_output), signal.SIGTERM)
        os.kill(int(processId), signal.SIGTERM)
    
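    For Linux, here is a minimal grep-free sketch of the same idea, assuming the third-party psutil package (psutil is my addition, not part of the original answer):

    import signal
    import psutil  # third-party: pip install psutil

    # 'pool' is the same list of worker Process objects as above
    for process in pool:
        parent = psutil.Process(process.pid)
        # terminate the real renderer processes spawned by each Python wrapper
        for child in parent.children(recursive=True):
            child.send_signal(signal.SIGTERM)
        parent.send_signal(signal.SIGTERM)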
  • 2020-12-23 23:47

    I found a solution: stop the pool in a separate thread, like this:

    import signal
    import sys
    import threading

    # 'pool' and 'httpd' are created elsewhere in the server code
    def close_pool():
        global pool
        pool.close()
        pool.terminate()
        pool.join()

    def term(*args, **kwargs):
        sys.stderr.write('\nStopping...')
        # shut the HTTP server and the pool down from separate threads;
        # terminate()/join() must not run inside the signal handler itself
        stophttp = threading.Thread(target=httpd.shutdown)
        stophttp.start()
        stoppool = threading.Thread(target=close_pool)
        stoppool.daemon = True
        stoppool.start()

    signal.signal(signal.SIGTERM, term)
    signal.signal(signal.SIGINT, term)
    signal.signal(signal.SIGQUIT, term)
    

    Works fine every time I tested it.

    signal.SIGINT: Interrupt from keyboard (CTRL + C). Default action is to raise KeyboardInterrupt.

    signal.SIGKILL: Kill signal. It cannot be caught, blocked, or ignored.

    signal.SIGTERM: Termination signal.

    signal.SIGQUIT: Quit with core dump.
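    For reference, a minimal, self-contained version of the same pattern that can be run directly; slow_task is a hypothetical stand-in for real work, and the httpd part is omitted:

    import multiprocessing
    import signal
    import sys
    import threading
    import time

    def slow_task(n):
        time.sleep(n)
        return n

    def init_worker():
        # workers ignore SIGINT so CTRL+C only reaches the parent
        signal.signal(signal.SIGINT, signal.SIG_IGN)

    def close_pool():
        pool.close()
        pool.terminate()
        pool.join()

    def term(signum, frame):
        sys.stderr.write('\nStopping...\n')
        # hand the shutdown off to a daemon thread; calling terminate()/join()
        # directly inside a signal handler can hang
        threading.Thread(target=close_pool, daemon=True).start()

    if __name__ == '__main__':
        pool = multiprocessing.Pool(4, initializer=init_worker)
        signal.signal(signal.SIGTERM, term)
        signal.signal(signal.SIGINT, term)
        for _ in range(8):
            pool.apply_async(slow_task, (10,))
        time.sleep(60)  # press CTRL+C here to trigger the handler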

  • 2020-12-23 23:58

    If you're still experiencing this issue, you could try simulating a Pool with daemonic processes (assuming you are starting the pool/processes from a non-daemonic process). I doubt this is the best solution, since it seems like your Pool processes should be exiting, but it's all I could come up with. I don't know what your callback does, so I'm not sure where to put it in my example below.

    I also suggest creating your Pool in __main__; in my experience (and per the docs), weirdness occurs when processes are spawned globally. This is especially true if you're on Windows: http://docs.python.org/2/library/multiprocessing.html#windows

    from multiprocessing import Process, JoinableQueue
    
    # the function for each process in our pool
    def pool_func(q):
        while True:
            allRenderArg, otherArg = q.get() # blocks until the queue has an item
            try:
                render(allRenderArg, otherArg)
            finally: q.task_done()
    
    # best practice to go through main for multiprocessing
    if __name__=='__main__':
        # create the pool
        pool_size = 2
        pool = []
        q = JoinableQueue()
        for x in range(pool_size):
            pool.append(Process(target=pool_func, args=(q,)))
    
        # start the pool, making it "daemonic" (the pool should exit when this proc exits)
        for p in pool:
            p.daemon = True
            p.start()
    
        # submit jobs to the queue (totalInstances, allRenderArgs and args
        # come from the asker's original code)
        for i in range(totalInstances):
            q.put((allRenderArgs[i], args[2]))
    
        # wait for all tasks to complete, then exit
        q.join()
    
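    If you want the workers to drain the queue and exit cleanly, rather than being killed as daemons when the main process exits, one common variant (my sketch, not part of the answer above) is to send one sentinel per worker:

    # in pool_func, exit the loop when a sentinel (None) arrives:
    def pool_func(q):
        while True:
            item = q.get()
            if item is None:  # sentinel: no more work
                q.task_done()
                break
            allRenderArg, otherArg = item
            try:
                render(allRenderArg, otherArg)
            finally:
                q.task_done()

    # after submitting the real jobs:
    for _ in range(pool_size):
        q.put(None)   # one sentinel per worker
    q.join()          # all jobs and sentinels processed
    for p in pool:
        p.join()      # workers have exited on their own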
  • 2020-12-23 23:58
    # -*- coding: utf-8 -*-
    import multiprocessing
    import sys
    import threading
    import time
    from functools import partial


    # the work function
    def f(a, b, c, d, e):
        print('start')
        time.sleep(4)
        print(a, b, c, d, e)

    # wrapper that runs inside each pool worker:
    # 1. start a daemon thread for the work function
    # 2. wait for that thread with a timeout
    # 3. exit the worker process, taking the thread down with it if it is
    #    still running
    def mulPro(func, *args, **kwargs):
        timeout = kwargs.get('timeout', None)

        # 1.
        t = threading.Thread(target=func, args=args)
        t.daemon = True
        t.start()
        # 2.
        t.join(timeout)
        # 3.
        sys.exit()

    if __name__ == "__main__":

        p = multiprocessing.Pool(5)
        for i in range(5):
            # 1. wrap the work function in the timeout wrapper
            new_f = partial(mulPro, f, timeout=8)
            # 2. fire it off
            p.apply_async(new_f, args=(1, 2, 3, 4, 5))

            # p.apply_async(f, args=(1,2,3,4,5), timeout=2)  # no such timeout arg
        for i in range(10):
            time.sleep(1)
            print(i + 1, "s")

        p.close()
        # p.join()
    
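    Note that multiprocessing.Pool has no built-in per-task timeout; the closest it offers is AsyncResult.get(timeout=...), which raises multiprocessing.TimeoutError but leaves the worker busy, which is why the code above exits the worker process itself. A quick sketch of the built-in behaviour for comparison:

    import multiprocessing
    import time

    def slow(x):
        time.sleep(4)
        return x

    if __name__ == '__main__':
        with multiprocessing.Pool(2) as p:
            res = p.apply_async(slow, (1,))
            try:
                print(res.get(timeout=2))   # stop waiting after 2 seconds...
            except multiprocessing.TimeoutError:
                print('timed out')          # ...but the worker is still running
        # leaving the with-block calls p.terminate(), killing the worker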