How many processes should I run in parallel?

[愿得一人] 2021-02-01 19:42

I have a parallelized task that reads information from multiple files and writes it out to several files.

The idiom I am currently using to parallelize stuff:

1 Answer
  • 2021-02-01 20:10

    Always separate the number of processes from the number of tasks. There's no reason why the two should be identical, and by making the number of processes a variable, you can experiment to see what works well for your particular problem. No theoretical answer is as good as old-fashioned get-your-hands-dirty benchmarking with real data.

    Here's how you could do it using a multiprocessing Pool:

    import multiprocessing as mp

    num_workers = mp.cpu_count()

    pool = mp.Pool(num_workers)
    for task in tasks:
        pool.apply_async(func, args=(task,))

    pool.close()  # no more tasks will be submitted
    pool.join()   # wait for all workers to finish
    

    pool = mp.Pool(num_workers) will create a pool of num_workers subprocesses. num_workers = mp.cpu_count() will set num_workers equal to the number of CPU cores. You can experiment by changing this number. (Note that pool = mp.Pool() creates a pool of N subprocesses, where N equals mp.cpu_count() by default.)
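    Note that apply_async also returns an AsyncResult you can keep if your workers produce data you need back in the parent process. A minimal sketch (process_file here is a hypothetical placeholder for your real worker):

```python
import multiprocessing as mp

def process_file(path):
    # Hypothetical worker: stands in for reading a file,
    # transforming its contents, and returning a summary.
    return path.upper()

if __name__ == "__main__":
    tasks = ["a.txt", "b.txt"]
    with mp.Pool(4) as pool:
        # Submit all tasks, keeping the AsyncResult handles...
        results = [pool.apply_async(process_file, (t,)) for t in tasks]
        # ...then block on .get() to collect each return value.
        values = [r.get() for r in results]
    print(values)  # ['A.TXT', 'B.TXT']
```

    The `with` block handles close/join for you; `.get()` also re-raises any exception the worker hit, which is easy to miss if you fire apply_async and never look at the results.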

    If a problem is CPU-bound, there is no benefit to setting num_workers to a number bigger than the number of cores, since the machine cannot execute more processes simultaneously than it has cores. Moreover, context-switching between the extra processes may make performance worse.

    If a problem is IO-bound -- which yours might be, since it does file IO -- it may make sense to have num_workers exceed the number of cores, provided your IO device(s) can handle more concurrent requests than you have cores. However, if your IO is sequential in nature -- if, for example, there is only one hard drive with a single read/write head -- then all but one of your subprocesses may end up blocked waiting for the IO device. In that case no concurrency is possible, and using multiprocessing is likely to be slower than the equivalent sequential code.
