MPI4Py causes error on send/recv


Question


Can someone tell me why this minimal working example (MWE) complains of TypeError: expected a writeable buffer object?

MWE:

#!/usr/bin/env python
from mpi4py import MPI

# MPI Initialization
rank = MPI.COMM_WORLD.Get_rank()
comm = MPI.COMM_WORLD

if __name__ == '__main__':
   a = True
   if rank == 0:
      a = False
      comm.Send ( [ a, MPI.BOOL ], 1, 111 )
   if rank == 1:
      comm.Recv ([ a, MPI.BOOL], 0, 111 )

Error:

Traceback (most recent call last):
  File "test.py", line 14, in <module>
    comm.Recv ([ a, MPI.BOOL], 0, 111 )
  File "Comm.pyx", line 143, in mpi4py.MPI.Comm.Recv (src/mpi4py.MPI.c:62980)
  File "message.pxi", line 323, in mpi4py.MPI.message_p2p_recv (src/mpi4py.MPI.c:22814)
  File "message.pxi", line 309, in mpi4py.MPI._p_msg_p2p.for_recv (src/mpi4py.MPI.c:22665)
  File "message.pxi", line 111, in mpi4py.MPI.message_simple (src/mpi4py.MPI.c:20516)
  File "message.pxi", line 51, in mpi4py.MPI.message_basic (src/mpi4py.MPI.c:19644)
  File "asbuffer.pxi", line 108, in mpi4py.MPI.getbuffer (src/mpi4py.MPI.c:6757)
  File "asbuffer.pxi", line 48, in mpi4py.MPI.PyObject_GetBufferEx (src/mpi4py.MPI.c:6081)
TypeError: expected a writeable buffer object
Traceback (most recent call last):
  File "test.py", line 12, in <module>
    comm.Send ( [ a, MPI.BOOL ], 1, 111 )
  File "Comm.pyx", line 132, in mpi4py.MPI.Comm.Send (src/mpi4py.MPI.c:62796)
  File "message.pxi", line 318, in mpi4py.MPI.message_p2p_send (src/mpi4py.MPI.c:22744)
  File "message.pxi", line 301, in mpi4py.MPI._p_msg_p2p.for_send (src/mpi4py.MPI.c:22604)
  File "message.pxi", line 111, in mpi4py.MPI.message_simple (src/mpi4py.MPI.c:20516)
  File "message.pxi", line 51, in mpi4py.MPI.message_basic (src/mpi4py.MPI.c:19644)
  File "asbuffer.pxi", line 108, in mpi4py.MPI.getbuffer (src/mpi4py.MPI.c:6757)
  File "asbuffer.pxi", line 50, in mpi4py.MPI.PyObject_GetBufferEx (src/mpi4py.MPI.c:6093)

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   EXIT CODE: 1
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
[proxy:0:0@raspi1] HYD_pmcd_pmip_control_cmd_cb (./pm/pmiserv/pmip_cb.c:886): assert (!closed) failed
[proxy:0:0@raspi1] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
[proxy:0:0@raspi1] main (./pm/pmiserv/pmip.c:206): demux engine error waiting for event
[mpiexec@raspi1] HYDT_bscu_wait_for_completion (./tools/bootstrap/utils/bscu_wait.c:76): one of the processes terminated badly; aborting
[mpiexec@raspi1] HYDT_bsci_wait_for_completion (./tools/bootstrap/src/bsci_wait.c:23): launcher returned error waiting for completion
[mpiexec@raspi1] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:217): launcher returned error waiting for completion
[mpiexec@raspi1] main (./ui/mpich/mpiexec.c:331): process manager error waiting for completion

Answer 1:


I'm not an MPI expert, but the error message points at the root cause: the uppercase Send/Recv functions operate on buffer-like objects, and a plain Python bool does not expose a writable buffer, hence TypeError: expected a writeable buffer object. There may also be a mismatch between the boolean datatype in numpy and the boolean datatype in C. (Not proof, but some evidence: http://docs.scipy.org/doc/numpy/reference/arrays.scalars.html#arrays-scalars-built-in and https://cython.readthedocs.org/en/latest/src/tutorial/numpy.html)

puk, as you noted, one solution is to transfer the data as Python objects using the lowercase send and recv functions (http://mpi4py.scipy.org/docs/usrman/tutorial.html). Under the hood, mpi4py uses pickle for these, so any picklable Python object can be sent.

My main reason to answer is to post an alternative solution using an integer array, with 0 standing in for True and 1 for False (mirroring the a = True / a = False logic of the original MWE):

#!/usr/bin/env python
import numpy as np
from mpi4py import MPI

# MPI Initialization
rank = MPI.COMM_WORLD.Get_rank()
comm = MPI.COMM_WORLD

if __name__ == '__main__':
    # dtype='i' (C int) matches MPI.INT; numpy's default int is often 8 bytes
    a = np.array([0], dtype='i')
    if rank == 0:
        a[0] = 1
        comm.Send([a, MPI.INT], 1, tag=111)
        print(rank, a)
    if rank == 1:
        comm.Recv([a, MPI.INT], 0, tag=111)
        print(rank, a)

This is useful in case one wants to take advantage of the faster (according to the mpi4py docs) buffer-based numpy path.
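If keeping boolean semantics matters, a one-element numpy bool array also satisfies the buffer requirement. A minimal sketch, assuming MPI.BOOL (as used in the question's code) is a one-byte MPI_C_BOOL matching numpy's one-byte bool, which is typical:

#!/usr/bin/env python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if __name__ == '__main__':
    # A one-element bool array is a writable buffer, unlike a plain Python bool
    a = np.zeros(1, dtype=np.bool_)
    if rank == 0:
        a[0] = True
        # Assumes MPI.BOOL's extent (1 byte) matches numpy's bool itemsize
        comm.Send([a, MPI.BOOL], dest=1, tag=111)
    if rank == 1:
        comm.Recv([a, MPI.BOOL], source=0, tag=111)
        print(rank, a)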




Answer 2:


I have no idea why I am getting the above error, so if anyone does know, please answer and I will accept. That being said, I can get the code to work if I use this style instead (old code commented out):

MWE:

#!/usr/bin/env python
from mpi4py import MPI

# MPI Initialization
rank = MPI.COMM_WORLD.Get_rank()
comm = MPI.COMM_WORLD

if __name__ == '__main__':
   a = True
   if rank == 0:
      a = False
      # comm.Send ( [ a, MPI.BOOL ], dest=1, tag=111 )
      comm.send ( a, dest=1, tag=111 )
   if rank == 1:
      # comm.Recv ([ a, MPI.BOOL], source=0, tag=111 )
      a = comm.recv (source=0, tag=111 )
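Note the shape of the fix: the lowercase recv returns the unpickled object, so a is reassigned on rank 1 rather than filled in place, and no MPI datatype is needed. The script is launched across two ranks with something like mpiexec -n 2 python test.py.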



Answer 3:


The uppercase functions of mpi4py (Send, Recv) accept buffer-like objects, such as NumPy arrays. The lowercase functions (send, recv) use pickle under the hood so that arbitrary Python objects can be sent.

http://mpi4py.readthedocs.org/en/latest/tutorial.html
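To make the contrast concrete, a minimal sketch of the pickle-based lowercase path carrying a generic Python object (the dict contents here are just an illustration):

#!/usr/bin/env python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Lowercase send pickles the object; no buffer or MPI datatype is needed
    comm.send({'done': False, 'count': 3}, dest=1, tag=0)
elif rank == 1:
    obj = comm.recv(source=0, tag=0)
    print(rank, obj)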



Source: https://stackoverflow.com/questions/19371239/mpi4py-causes-error-on-send-recv
