Use numpy array in shared memory for multiprocessing

Backend · Unresolved · 5 answers · 1553 views
隐瞒了意图╮ 2020-11-22 03:51

I would like to use a numpy array in shared memory for use with the multiprocessing module. The difficulty is using it like a numpy array, and not just as a ctypes array.

5 Answers
  •  无人及你
    2020-11-22 04:02

    The Array object has a get_obj() method, which returns the underlying ctypes array; that array exposes a buffer interface. I think the following should work...

    from multiprocessing import Process, Array
    import numpy
    
    def f(a):
        a[0] = -a[0]
    
    if __name__ == '__main__':
        # Create the array
        N = 10
        unshared_arr = numpy.random.rand(N)
        a = Array('d', unshared_arr)
        print("Originally, the first two elements of arr = %s" % (a[:2]))
    
        # Create, start, and finish the child process
        p = Process(target=f, args=(a,))
        p.start()
        p.join()
    
        # Print out the changed values
        print("Now, the first two elements of arr = %s" % a[:2])
    
        # Wrap the shared buffer as a numpy array -- no copy is made
        b = numpy.frombuffer(a.get_obj())
    
        b[0] = 10.0
        print(a[0])
    

    When run, this prints the first element of a as now being 10.0, showing that a and b are just two views of the same memory.

    To keep it multiprocess-safe, I believe you will have to use the acquire and release methods that exist on the Array object, a, and its built-in lock, to make sure it is all safely accessed (though I'm not an expert on the multiprocessing module).
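    A minimal sketch of the locking idea above (my own illustration, assuming Python 3): multiprocessing.Array carries a built-in lock reachable via get_lock(), which works as a context manager, so acquire/release can be written as a with-block. Here four child processes each increment the first element 1000 times; with the lock held around each read-modify-write, the final value is deterministic.

    from multiprocessing import Process, Array
    import numpy as np
    
    def add_one(arr, n):
        # Each iteration does a read-modify-write under the Array's lock,
        # so increments from different processes cannot interleave.
        for _ in range(n):
            with arr.get_lock():  # equivalent to explicit acquire()/release()
                view = np.frombuffer(arr.get_obj())  # view, not a copy
                view[0] += 1.0
    
    if __name__ == '__main__':
        shared = Array('d', [0.0])
        procs = [Process(target=add_one, args=(shared, 1000)) for _ in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print(shared[0])  # 4000.0: no increments were lost

    Without the with arr.get_lock() block, the four processes would race on the read-modify-write and the result would usually come out below 4000.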
