I've recently encountered a problem trying to share large data among several processes using 'send' from the mpi4py library. Even a 1000x3 numpy float array is too large to be sent. Any ideas how to overcome this problem?
Thanks in advance.
I've found a simple solution: divide the data into small enough chunks and send them one at a time (see the sketch below).
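For what it's worth, a minimal sketch of that idea might look like the following; the chunk size and the 1000x3 shape are assumptions taken from the question, not from the original answer:
#!/usr/bin/env python
# Hypothetical sketch: split the array into row chunks and send them one at a time.
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

ROWS, COLS = 1000, 3      # array shape from the question
CHUNK = 100               # rows per message (illustrative value)

if rank == 0:
    data = numpy.arange(ROWS * COLS, dtype='f').reshape(ROWS, COLS)
    for start in range(0, ROWS, CHUNK):
        # Each row slice of a C-contiguous array is itself contiguous,
        # so it can be passed straight to the buffer-based Send.
        comm.Send([data[start:start + CHUNK], MPI.FLOAT], dest=1, tag=0)
elif rank == 1:
    data = numpy.empty((ROWS, COLS), dtype='f')
    for start in range(0, ROWS, CHUNK):
        # Receive each chunk into the matching slice of the preallocated buffer.
        comm.Recv([data[start:start + CHUNK], MPI.FLOAT], source=0, tag=0)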
I encountered this same problem with Isend (not with Send). It appears that the problem was due to the sending process terminating before the receiver had received the data. I fixed this by including a comm.barrier() call at the end of each of the processes.
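A minimal sketch of that fix, assuming a 2-process run (the tag and array size are illustrative, and the Wait() calls are standard practice for completing non-blocking operations, not part of the original answer):
#!/usr/bin/env python
# Sketch of non-blocking Isend/Irecv with a final barrier, assuming 2 processes.
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = numpy.arange(1000 * 3, dtype='f')
    req = comm.Isend([data, MPI.FLOAT], dest=1, tag=11)
    req.Wait()      # complete the non-blocking send before the buffer goes away
elif rank == 1:
    data = numpy.empty(1000 * 3, dtype='f')
    req = comm.Irecv([data, MPI.FLOAT], source=0, tag=11)
    req.Wait()      # block until the data has actually arrived

comm.barrier()      # keep every process alive until the exchange is done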
Point-to-point send/recv of large data works:
#!/usr/bin/env python
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Send a large contiguous float array with the buffer-based Send.
    data = numpy.arange(300*100000, dtype='f')
    comm.Send([data, MPI.FLOAT], dest=1, tag=77)
elif rank == 1:
    # Preallocate a matching buffer and receive into it.
    data = numpy.empty(300*100000, dtype='f')
    comm.Recv([data, MPI.FLOAT], source=0, tag=77)
    print(data)
Running this with two processes:
% ~/work/soft/mpich/bin/mpiexec -np 2 ./send-numpy.py
[ 0.00000000e+00 1.00000000e+00 2.00000000e+00 ..., 2.99999960e+07
2.99999980e+07 3.00000000e+07]
Source: https://stackoverflow.com/questions/21528275/mpi4py-hangs-when-trying-to-send-large-data