Demystifying sharedctypes performance

南方客 2021-02-13 05:16

In Python it is possible to share ctypes objects between multiple processes. However, I notice that allocating these objects seems to be extremely expensive.

Consider fol
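The question body is truncated here. A minimal timing sketch of the kind of comparison it presumably made (the array size and variable names below are my own, not from the original question):

```python
import ctypes
import time
from multiprocessing import sharedctypes

N = 10_000_000  # ten million doubles (~80 MB)

# Plain ctypes array: allocated in the process's private heap.
t0 = time.perf_counter()
plain = (ctypes.c_double * N)()
t1 = time.perf_counter()

# Shared array: backed by an mmap'd region (on Linux, a temp file),
# so allocation and zero-initialization must touch every page.
shared = sharedctypes.RawArray(ctypes.c_double, N)
t2 = time.perf_counter()

print(f"plain ctypes: {t1 - t0:.3f}s, shared RawArray: {t2 - t1:.3f}s")
```

On most systems the shared allocation is noticeably slower, which is the behavior the accepted answer below attributes to the temp-file backing.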

3 Answers
  •  孤城傲影
    2021-02-13 05:41

    This should be a comment, but I do not have enough reputation :-(

    Starting with Python 3.5, shared arrays on Linux are created as temp files mapped into memory (see https://bugs.python.org/issue30919). I think this explains why creating a NumPy array, which lives entirely in memory, is faster than creating and initializing a large shared array. To force Python to use shared memory, a workaround is to execute these two lines of code (ref. the question "No space left while using Multiprocessing.Array in shared memory"):

    from multiprocessing.process import current_process
    current_process()._config['tempdir'] = '/dev/shm'
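    Put together, the workaround looks roughly like the sketch below. Note that `_config` is an undocumented internal of `multiprocessing`, so this may break across Python versions; the guard on `/dev/shm` is my own addition so the snippet degrades gracefully on non-Linux systems:

    ```python
    import ctypes
    import os
    from multiprocessing import sharedctypes
    from multiprocessing.process import current_process

    # Redirect multiprocessing's backing temp files to the tmpfs mount
    # /dev/shm, so shared arrays live in RAM rather than on disk.
    # Linux-specific; _config is an internal, undocumented attribute.
    if os.path.isdir('/dev/shm'):
        current_process()._config['tempdir'] = '/dev/shm'

    # Subsequent shared allocations now use the redirected temp dir.
    arr = sharedctypes.RawArray(ctypes.c_double, 1000)
    arr[0] = 3.14
    ```

    This must run before the first shared allocation, because `multiprocessing` caches its temp directory the first time it needs one.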
