I use Python's `multiprocessing.sharedctypes.RawArray` to share large numpy arrays between multiple processes, and I've noticed that when the array is large (> 1 or 2 GB), performance degrades noticeably.
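For context, the sharing pattern looks roughly like this (a minimal sketch; the shape, dtype, and worker logic are made up for illustration):

```python
import numpy as np
from multiprocessing import Process
from multiprocessing.sharedctypes import RawArray

def worker(raw, shape):
    # Re-wrap the shared buffer as a numpy array; this is a view, not a copy.
    arr = np.frombuffer(raw, dtype=np.float64).reshape(shape)
    arr[0, 0] = 42.0  # the write is visible to the parent process

if __name__ == '__main__':
    shape = (1000, 1000)
    # 'd' = C double, matching np.float64; the buffer is shared but unsynchronized
    raw = RawArray('d', shape[0] * shape[1])
    arr = np.frombuffer(raw, dtype=np.float64).reshape(shape)

    p = Process(target=worker, args=(raw, shape))
    p.start()
    p.join()

    print(arr[0, 0])  # prints 42.0
```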
After more research I've found that Python actually creates folders in `/tmp` whose names start with `pymp-`, and though no files are visible within them using file viewers, it looks exactly like `/tmp` is being used by Python for shared memory; the backing files are apparently unlinked right after creation, which would explain why the directories look empty even though the mappings still occupy disk-backed pages. Performance seems to decrease when file caches are flushed.
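One quick way to confirm this (assuming a Linux system; substitute the actual PID of the Python process) is to look for the per-process temp directory and the deleted-but-still-mapped backing file:

```
ls -d /tmp/pymp-*          # multiprocessing's temp directory, one per process
lsof -p <pid> | grep pymp  # the unlinked backing file shows up as "(deleted)"
```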
The working solution in the end was to mount `/tmp` as `tmpfs`:

```
sudo mount -t tmpfs tmpfs /tmp
```
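To make that mount survive reboots, an `/etc/fstab` entry along these lines should also work (the `size=2g` cap is just an illustrative value):

```
tmpfs  /tmp  tmpfs  defaults,size=2g  0  0
```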
And, if using the latest Docker, by providing the `--tmpfs /tmp` argument to the `docker run` command.
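For example (the image and script names here are placeholders, and the size option is optional):

```
docker run --tmpfs /tmp:rw,size=2g my-image python my_script.py
```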
After doing this, read/write operations are done in RAM, and performance is fast and stable.
I still wonder why `/tmp` is used for shared memory rather than `/dev/shm`, which is already mounted as `tmpfs` and is supposed to be used exactly for shared memory.
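As a side note, multiprocessing picks this location through Python's `tempfile` module, which honors the `TMPDIR` environment variable, so another way to get the same effect without remounting `/tmp` (assuming that behavior holds on your Python version) is to point the temp directory at `/dev/shm`:

```
TMPDIR=/dev/shm python my_script.py   # my_script.py is a placeholder
```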