In Python it is possible to share ctypes objects between multiple processes. However, I have noticed that allocating these objects seems to be extremely expensive. Consider the following example:
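Something along these lines (a minimal timing sketch; the exact sizes and timings will vary by machine, and numpy.zeros is used only as an in-process baseline):

import time
from multiprocessing import Array

import numpy as np

N = 10 ** 8  # 100 million doubles, roughly 800 MB

t0 = time.perf_counter()
shared = Array('d', N)  # shared ctypes array, zero-initialized
print('multiprocessing.Array: %.2f s' % (time.perf_counter() - t0))

t0 = time.perf_counter()
local = np.zeros(N)  # ordinary in-process numpy array
print('numpy.zeros:           %.2f s' % (time.perf_counter() - t0))

On my machine the shared array takes orders of magnitude longer to allocate than the numpy array. Why is that, and is there a way around it?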
This should be a comment, but I do not have enough reputation :-(
Starting with Python 3.5, shared arrays on Linux are created as temporary files that are mapped into memory (see https://bugs.python.org/issue30919). I think this explains why creating a NumPy array, which lives entirely in process memory, is faster than creating and initializing a large shared array. To force Python to put the backing file in shared memory, a workaround is to execute these two lines of code before allocating (ref. No space left while using Multiprocessing.Array in shared memory):
from multiprocessing.process import current_process
current_process()._config['tempdir'] = '/dev/shm'
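Applied to an allocation like the one above, the workaround would look roughly like this (a sketch; /dev/shm is a Linux tmpfs, and _config is a private attribute of multiprocessing, so this relies on an implementation detail that may change between versions):

from multiprocessing import Array
from multiprocessing.process import current_process

# Redirect the temp files that back shared memory to tmpfs (Linux only).
# NOTE: _config is private to multiprocessing; this is an implementation detail.
current_process()._config['tempdir'] = '/dev/shm'

shared = Array('d', 10 ** 8)  # the backing file is now created under /dev/shm

Keep in mind that /dev/shm is usually limited to about half of the physical RAM, so a very large array can still fail with a "no space left on device" error, which is what the question linked above ran into.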