Writing to shared memory in Python is very slow

面向向阳花 · 2021-01-24 05:30 · 1 answer · 1696 views

I use Python's multiprocessing.sharedctypes.RawArray to share large numpy arrays between multiple processes, and I've noticed that when this array is large (> 1 or

1 Answer
  • 2021-01-24 05:43

    After more research I've found that Python actually creates folders in /tmp whose names start with pymp-, and although no files are visible inside them with file viewers, it looks exactly like /tmp is used by Python for shared memory. Performance seems to drop when the file caches are flushed.
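You can confirm this from Python itself. The (internal, undocumented) helper multiprocessing.util.get_temp_dir() returns the per-process pymp- directory, which is created under tempfile.gettempdir() (normally /tmp), not under /dev/shm:

```python
# Show where multiprocessing keeps its shared-memory backing files.
# get_temp_dir() is an internal helper, so this is a diagnostic sketch,
# not a stable public API.
import os
import tempfile
from multiprocessing import util

d = util.get_temp_dir()
print(d)  # e.g. /tmp/pymp-abc123
print(os.path.dirname(d) == tempfile.gettempdir())  # True
```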

    The working solution in the end was to mount /tmp as tmpfs:

    sudo mount -t tmpfs tmpfs /tmp
    

    And, if using a recent version of Docker, by passing the --tmpfs /tmp argument to the docker run command.

    After doing this, read/write operations are done in RAM, and performance is fast and stable.

    I still wonder why /tmp is used for shared memory, and not /dev/shm, which is already mounted as tmpfs and is supposed to be used for shared memory.
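As an aside, since Python 3.8 the standard library also offers multiprocessing.shared_memory.SharedMemory, which allocates POSIX shared memory (backed by /dev/shm on Linux) and so sidesteps the /tmp-backed RawArray behaviour entirely. A minimal sketch:

```python
# SharedMemory segments live in /dev/shm on Linux, not /tmp.
import os
import sys
from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(create=True, size=16)
try:
    shm.buf[:5] = b"hello"
    data = bytes(shm.buf[:5])
    if sys.platform == "linux":
        # On Linux the segment is visible as a file under /dev/shm.
        print(os.path.exists("/dev/shm/" + shm.name))
    print(data)
finally:
    shm.close()
    shm.unlink()  # free the segment
```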
