multiprocessing.Pool: calling helper functions when using apply_async's callback option

眼角桃花 2021-02-06 16:00

How does the flow of apply_async work between calling the iterable (?) function and the callback function?

Setup: I am reading some lines of all the files i

1 Answer
  • 2021-02-06 16:06

    You're asking about a whole bunch of different things here, so I'll try to cover it all as best I can:

    The function you pass to callback is executed in the main process (not the worker) as soon as a worker process returns its result. It runs on a thread that the Pool object creates internally. That thread consumes objects from a result_queue, which is used to collect the results from all the worker processes. After the thread pulls a result off the queue, it executes the callback. While your callback is executing, no other results can be pulled from the queue, so it's important that the callback finishes quickly. With your example, as soon as one of the calls to X or Y you make via apply_async completes, the worker process puts the result into the result_queue, the result-handling thread pulls it off, and your callback is executed.
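    Here's a minimal sketch of that behavior (the function names work and on_result are made up for illustration): printing the PID and thread name shows the work happening in a worker process while the callback runs on the Pool's internal result-handler thread in the main process:

```python
import multiprocessing as mp
import os
import threading

def work(x):
    # Runs in a worker process; return the worker's PID alongside the result.
    return (x * 2, os.getpid())

def on_result(res):
    # Runs in the main process, on the Pool's internal result-handler thread.
    val, worker_pid = res
    print("callback in pid", os.getpid(),
          "on thread", threading.current_thread().name,
          "- result from worker pid", worker_pid)

if __name__ == "__main__":
    pool = mp.Pool(2)
    pool.apply_async(work, (21,), callback=on_result)
    pool.close()
    pool.join()
```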

    Second, I suspect the reason you're not seeing anything happen with your example code is because all of your worker function calls are failing. If a worker function fails, callback will never be executed. The failure won't be reported at all unless you try to fetch the result from the AsyncResult object returned by the call to apply_async. However, since you're not saving any of those objects, you'll never know the failures occurred. If I were you, I'd try using pool.apply while you're testing so that you see errors as soon as they occur.
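    To see what that looks like, here's a sketch with a deliberately failing worker (broken_worker and on_result are hypothetical names): the callback never fires, and the failure only surfaces when you call .get() on the saved AsyncResult:

```python
import multiprocessing as mp

def broken_worker(f):
    # Simulates a worker function that always fails.
    raise ValueError("could not process %r" % (f,))

def on_result(r):
    # Never called, because the worker never produces a result.
    print("callback got:", r)

if __name__ == "__main__":
    pool = mp.Pool(2)
    res = pool.apply_async(broken_worker, ("somefile",), callback=on_result)
    try:
        res.get(timeout=10)  # re-raises the worker's ValueError in the main process
    except ValueError as e:
        print("worker failed:", e)
    pool.close()
    pool.join()
```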

    The reason the workers are probably failing (at least in the example code you provided) is because X and Y are defined as functions inside another function. multiprocessing passes functions and objects to worker processes by pickling them in the main process, and unpickling them in the worker processes. Functions defined inside other functions are not picklable, which means multiprocessing won't be able to successfully unpickle them in the worker process. To fix this, define both functions at the top level of your module, rather than nested inside the dirwalker function.
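    You can check this directly with the pickle module (top_level and make_nested below are made-up names): a module-level function pickles fine, while one defined inside another function raises an error:

```python
import pickle

def top_level(x):
    # Defined at module top level: picklable, safe to pass to a Pool.
    return x

def make_nested():
    def nested(x):  # defined inside another function: not picklable
        return x
    return nested

pickle.dumps(top_level)  # works fine
try:
    pickle.dumps(make_nested())
except (pickle.PicklingError, AttributeError) as e:
    print("nested function is not picklable:", e)
```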

    You should definitely continue to call Z from X and Y, not from inside results. That way, Z can run concurrently across all your worker processes, rather than one call at a time in your main process. And remember, your callback function is supposed to be as quick as possible so you don't hold up the processing of results; executing Z in there would slow things down.

    Here's some simple example code that's similar to what you're doing, that hopefully gives you an idea of what your code should look like:

    import multiprocessing as mp
    import os
    
    # X() reads files and grabs lines, calls helper function to calculate
    # info, and returns stuff to the callback function
    def X(f): 
        fileinfo = Z(f) 
        return fileinfo 
    
    # Y() reads other types of files and does the same thing
    def Y(f): 
        fileinfo = Z(f)
        return fileinfo
    
    # helper function
    def Z(arr):
        return arr + "zzz"
    
    def dirwalker(directory):
        ahlala = []
    
        # results() is the callback function
        def results(r):
            ahlala.append(r) # or .extend, haven't yet decided
    
        pool = mp.Pool(mp.cpu_count())
        for _, _, files in os.walk(directory):
            for f in files:
                if len(f) > 5:  # just an arbitrary way to split up the list
                    # In Python 3 you can also pass error_callback= here to
                    # handle worker errors; it's not available in Python 2.7.
                    pool.apply_async(X, args=(f,), callback=results)
                else:
                    pool.apply_async(Y, args=(f,), callback=results)
    
        pool.close()
        pool.join()
        return ahlala
    
    
    if __name__ == "__main__":
        print(dirwalker("/usr/bin"))
    

    Output:

    ['ftpzzz', 'findhyphzzz', 'gcc-nm-4.8zzz', 'google-chromezzz' ... # lots more here ]
    

    Edit:

    You can create a dict object that's shared between your parent and child processes using the multiprocessing.Manager class:

    pool = mp.Pool(mp.cpu_count())
    m = mp.Manager()
    helper_dict = m.dict()
    for f in files:
        if len(f) > 5:
            pool.apply_async(X, args=(f, helper_dict), callback=results)
        else:
            pool.apply_async(Y, args=(f, helper_dict), callback=results)
    

    Then make X and Y take a second argument called helper_dict (or whatever name you want), and you're all set.

    The caveat is that this works by creating a server process that contains a normal dict, and all your other processes talk to that one dict via a Proxy object. So every time you read or write to the dict, you're doing IPC. This makes it a lot slower than a regular dict.
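    A rough way to see that cost (exact numbers will vary by machine) is to time the same writes against a Manager dict proxy and a plain dict:

```python
import multiprocessing as mp
import time

if __name__ == "__main__":
    manager = mp.Manager()
    shared = manager.dict()  # proxy: every access is IPC to the manager process
    plain = {}               # ordinary in-process dict

    n = 1000
    t0 = time.perf_counter()
    for i in range(n):
        shared[i] = i
    t_shared = time.perf_counter() - t0

    t0 = time.perf_counter()
    for i in range(n):
        plain[i] = i
    t_plain = time.perf_counter() - t0

    print("manager dict: %.4fs  plain dict: %.6fs" % (t_shared, t_plain))
```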
