why is multiprocess Pool slower than a for loop?

滥情空心 2020-12-17 03:58
from multiprocessing import Pool

def op1(data):
    return [data[elem] + 1 for elem in range(len(data))]

if __name__ == '__main__':
    data = [[elem for elem in range(20)] for elem in range(5000)]
    with Pool(4) as pool:
        result = pool.map(op1, data)
3 Answers
  • 2020-12-17 04:39

    Short answer: Yes, the operations will usually be done on (a subset of) the available cores. But the communication overhead is large. In your example the workload is too small compared to the overhead.

    When you construct a pool, a number of worker processes are created. If you then instruct the pool to map the given input, the following happens:

    1. the data will be split: every worker gets an approximately fair share;
    2. the data will be communicated to the workers;
    3. every worker will process their share of work;
    4. the result is communicated back to the main process; and
    5. the main process groups the results together.

    Now splitting, communicating and joining the data are all steps carried out by the main process, and they cannot be parallelized. Since the operation itself is fast (O(n) in the input size n), the overhead has the same time complexity.

    So, complexity-wise, even if you had millions of cores it would not make much difference: communicating the list is probably already more expensive than computing the results.

    That's why you should parallelize computationally expensive tasks, not trivial ones. The amount of processing should be large compared to the amount of communication.

    In your example, the work is trivial: you add 1 to every element. Serializing, however, is less trivial: you have to encode the lists you send to the workers.
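
    The tradeoff above can be sketched with a direct timing comparison. The worker count and the `time.perf_counter` instrumentation are illustrative assumptions, not taken from the question:

    ```python
    import time
    from multiprocessing import Pool

    def op1(data):
        return [data[elem] + 1 for elem in range(len(data))]

    if __name__ == '__main__':
        data = [[elem for elem in range(20)] for elem in range(5000)]

        # plain for loop: no serialization, no inter-process communication
        start = time.perf_counter()
        serial = [op1(row) for row in data]
        t_serial = time.perf_counter() - start

        # Pool.map: every row is pickled to a worker, the result pickled back
        start = time.perf_counter()
        with Pool(2) as pool:
            parallel = pool.map(op1, data)
        t_parallel = time.perf_counter() - start

        # both produce identical results; on a workload this small the
        # serial version typically wins because communication dominates
        print(serial == parallel)
    ```

    On most machines the serial loop finishes first here, which is exactly the point being made: the results match, but the Pool pays for serialization and IPC that the loop never incurs.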

  • 2020-12-17 04:52

    As others have noted, the overhead that you pay to facilitate multiprocessing is more than the time-savings gained by parallelizing across multiple cores. In other words, your function op1() does not require enough CPU resources to see performance gain from parallelizing.

    In the multiprocessing.Pool class, the majority of this overhead is spent serializing and deserializing data before the data is shuttled between the parent process (which creates the Pool) and the children "worker" processes.

    This blog post explores, in greater detail, how expensive pickling (serializing) can be when using the multiprocessing.Pool class.
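
    You can get a feel for that serialization cost directly with the standard pickle module. This is only a sketch of what Pool does internally, using the data shape from the question:

    ```python
    import pickle

    # same shape as the question's data: 5000 rows of 20 ints
    data = [[elem for elem in range(20)] for elem in range(5000)]

    # roughly what Pool.map must do before any computation happens:
    # pickle the input in the parent, unpickle it in a worker,
    # then do the same again in reverse for the results
    payload = pickle.dumps(data)
    print(len(payload))
    ```

    The byte count printed is paid twice per direction (encode and decode), while the actual work per element is a single integer addition.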

  • 2020-12-17 05:00

    There are a couple of potential trouble spots with your code, but primarily it's too simple.

    The multiprocessing module works by creating different processes, and communicating among them. For each process created, you have to pay the operating system's process startup cost, as well as the Python interpreter startup cost. Those costs can be high or low, but they're non-zero in any case.

    Once you pay those startup costs, you then pool.map the worker function across all the processes, which basically adds 1 to a few numbers. This is not a significant load, as your tests prove.

    What's worse, you're using .map(), which is implicitly ordered (compare with .imap_unordered()), so there's synchronization going on, leaving even less freedom for the various CPU cores to give you speed.
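
    As a sketch of both points, here is a deliberately heavier (hypothetical) task dispatched with .imap_unordered(), which hands back results in whatever order workers finish rather than submission order:

    ```python
    from multiprocessing import Pool

    def heavy(n):
        # a CPU-bound stand-in for real work: sum of squares below n
        return sum(i * i for i in range(n))

    if __name__ == '__main__':
        with Pool(2) as pool:
            # results arrive as workers finish, not in submission order,
            # so no ordering synchronization is imposed on the workers
            results = list(pool.imap_unordered(heavy, [50_000] * 8))
        print(len(results))
    ```

    With work this heavy per item, the startup and communication costs become a small fraction of the total, which is the regime where a Pool can actually pay off.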

    If there's a problem here, it's a "design of experiment" problem - you haven't created a sufficiently difficult problem for multiprocessing to be able to help you.
