Why is communication via shared memory so much slower than via queues?

Asked by 半阙折子戏 · 2021-02-19 04:44 · 1 answer · 829 views

I am using Python 2.7.5 on a recent vintage Apple MacBook Pro which has four hardware and eight logical CPUs; i.e., the sysctl utility gives:

$ sysctl hw.physica

1 Answer
  •  闹比i
     answered 2021-02-19 05:10

    This is because multiprocessing.Array uses a lock by default to prevent multiple processes from accessing it at once:

    multiprocessing.Array(typecode_or_type, size_or_initializer, *, lock=True)

    ...

    If lock is True (the default) then a new lock object is created to synchronize access to the value. If lock is a Lock or RLock object then that will be used to synchronize access to the value. If lock is False then access to the returned object will not be automatically protected by a lock, so it will not necessarily be “process-safe”.
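    The lock argument also changes what the constructor hands back, which you can check directly (my own sketch, not from the docs — the variable names are arbitrary):

```python
import multiprocessing

# lock=True (the default) wraps the shared memory in a synchronized
# wrapper object that carries its own lock.
synced = multiprocessing.Array('d', 10, lock=True)
print(hasattr(synced, 'get_lock'))   # True

# lock=False returns the raw shared ctypes array with no wrapper,
# and therefore no lock for writers to contend on.
raw = multiprocessing.Array('d', 10, lock=False)
print(hasattr(raw, 'get_lock'))      # False
```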

    This means you're not really writing to the array concurrently - only one process can access it at a time. Since your example workers are doing almost nothing but array writes, constantly waiting on this lock badly hurts performance. If you pass lock=False when you create the array, performance is much better:

    With lock=True:

    Now starting process 0
    Now starting process 1
    Now starting process 2
    Now starting process 3
    4000000 random numbers generated in 4.811205 seconds
    

    With lock=False:

    Now starting process 0
    Now starting process 3
    Now starting process 1
    Now starting process 2
    4000000 random numbers generated in 0.192473 seconds
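    The contention effect can be reproduced with a minimal standalone benchmark (a sketch of the idea only — the worker, array size, and names below are mine, not the asker's script):

```python
import multiprocessing
import time

N = 100000  # writes per worker (arbitrary size for the sketch)

def fill(arr, start, n):
    # Each worker writes to its own disjoint slice of the shared array,
    # so no synchronization is actually needed for correctness.
    for i in range(start, start + n):
        arr[i] = i

def timed_run(lock):
    # With lock=True every single arr[i] = ... acquires the shared lock;
    # with lock=False the writes go straight to shared memory.
    arr = multiprocessing.Array('d', 4 * N, lock=lock)
    procs = [multiprocessing.Process(target=fill, args=(arr, w * N, N))
             for w in range(4)]
    t0 = time.time()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.time() - t0

if __name__ == '__main__':
    print('lock=True : %f seconds' % timed_run(True))
    print('lock=False: %f seconds' % timed_run(False))
```

    On most machines the lock=True run is dramatically slower, for the reason described above: the four workers serialize on one lock.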
    

    Note that using lock=False means you need to manually protect access to the Array whenever you're doing something that isn't process-safe. Your example has each process writing to its own unique region, so that's fine. But if you were reading from the array while that was happening, or had different processes writing to overlapping regions, you would need to acquire a lock manually.
