mpi4py hangs when trying to send large data

久未见 submitted on 2019-12-05 09:55:00

I've found a simple solution: divide the data into chunks small enough to send, and transfer them one chunk at a time, as in the sketch below.
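A minimal sketch of that chunking approach, assuming a chunk size of 100000 elements (the original answer does not specify one) and the same 30-million-float array used later in this thread:

#!/usr/bin/env python
# Hypothetical chunked transfer: CHUNK is an assumed size, not from the answer.
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

CHUNK = 100000          # assumed chunk size, in elements
N = 300 * 100000        # total number of floats

if rank == 0:
    data = numpy.arange(N, dtype='f')
    # Send the array one chunk at a time instead of as a single message.
    for start in range(0, N, CHUNK):
        comm.Send([data[start:start + CHUNK], MPI.FLOAT], dest=1, tag=77)
elif rank == 1:
    data = numpy.empty(N, dtype='f')
    # Receive each chunk directly into the matching slice of the buffer.
    for start in range(0, N, CHUNK):
        comm.Recv([data[start:start + CHUNK], MPI.FLOAT], source=0, tag=77)
    print(data)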

I encountered the same problem with Isend (not with Send). The cause was the sending process terminating before the receiver had actually received the data. I fixed it by adding a comm.barrier() call at the end of each process.
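A minimal sketch of that fix, assuming the same array size as the example further down; the Wait() on the request completes the non-blocking send, and the final Barrier() is the answerer's workaround to keep the sender alive until the receiver is done:

#!/usr/bin/env python
# Hypothetical sketch: Isend completed with Wait(), plus a closing Barrier().
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = numpy.arange(300 * 100000, dtype='f')
    req = comm.Isend([data, MPI.FLOAT], dest=1, tag=77)
    req.Wait()          # complete the non-blocking send before exiting
elif rank == 1:
    data = numpy.empty(300 * 100000, dtype='f')
    comm.Recv([data, MPI.FLOAT], source=0, tag=77)
    print(data)

comm.Barrier()          # keep all ranks alive until everyone has finished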

Point-to-point send/recv of large data works:

#!/usr/bin/env python
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Build a 30-million-element float32 array and send it with the
    # buffer-based (uppercase) Send.
    data = numpy.arange(300*100000, dtype='f')
    comm.Send([data, MPI.FLOAT], dest=1, tag=77)
elif rank == 1:
    # Receive directly into a preallocated buffer of the same size.
    data = numpy.empty(300*100000, dtype='f')
    comm.Recv([data, MPI.FLOAT], source=0, tag=77)
    print(data)

Running this with two processes:

% ~/work/soft/mpich/bin/mpiexec -np 2 ./send-numpy.py
[  0.00000000e+00   1.00000000e+00   2.00000000e+00 ...,   2.99999960e+07
   2.99999980e+07   3.00000000e+07]