Send and Receive operations between communicators in MPI


Question


Following my previous question : Unable to implement MPI_Intercomm_create

The problem with MPI_INTERCOMM_CREATE has been solved. But when I try to implement a basic send/receive operation between process 0 of color 0 (global rank 0) and process 0 of color 1 (i.e. global rank 2), the code just hangs after printing the received buffer. The code:

program hello
include 'mpif.h'
implicit none 
integer tag,ierr,rank,numtasks,color,new_comm,inter1,inter2
integer sendbuf,recvbuf,stat(MPI_STATUS_SIZE)

tag = 22
sendbuf = 222

call MPI_Init(ierr)
call MPI_COMM_RANK(MPI_COMM_WORLD,rank,ierr)
call MPI_COMM_SIZE(MPI_COMM_WORLD,numtasks,ierr)

if (rank < 2) then
    color = 0
else
    color = 1
end if

call MPI_COMM_SPLIT(MPI_COMM_WORLD,color,rank,new_comm,ierr)

if (color .eq. 0) then
    if (rank == 0) print*,' 0 here'
    call MPI_INTERCOMM_CREATE(new_comm,0,MPI_Comm_world,2,tag,inter1,ierr)
    call mpi_send(sendbuf,1,MPI_INT,2,tag,inter1,ierr)

    !local_comm,local leader,peer_comm,remote leader,tag,new,ierr

else if (color .eq. 1) then
    if (rank == 2) print*,' 2 here'
    call MPI_INTERCOMM_CREATE(new_comm,2,MPI_COMM_WORLD,0,tag,inter2,ierr)
    call mpi_recv(recvbuf,1,MPI_INT,0,tag,inter2,stat,ierr)
    print*,recvbuf
end if
end

Answer 1:


Communication with inter-communicators is not well understood by most users, and there are far fewer examples of it than of other MPI operations. You can find a good explanation by following this link.

Now, there are two things to remember:

1) Communication on an inter-communicator always goes from one group to the other group. When sending, the rank of the destination is its local rank in the remote group. When receiving, the rank of the sender is its local rank in the remote group (a small sketch after these two points illustrates this).

2) Point-to-point communication (the MPI_SEND and MPI_RECV family) involves one sender and one receiver. In your case, everyone in color 0 is sending and everyone in color 1 is receiving; however, if I understood your problem correctly, you want process 0 of color 0 to send something to process 0 of color 1.
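To make point 1) concrete, here is a minimal sketch of how the two groups of an inter-communicator can be inspected. It assumes an already created inter-communicator inter1; the variable names lsize and rsize are mine, not from your code:

integer :: lsize, rsize, ierr
! size of my own (local) group of the inter-communicator
call MPI_COMM_SIZE(inter1, lsize, ierr)
! size of the other (remote) group: destination and source ranks passed to
! mpi_send/mpi_recv on inter1 are interpreted in this group, so they must
! lie between 0 and rsize-1
call MPI_COMM_REMOTE_SIZE(inter1, rsize, ierr)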

The sending code should be something like this:

call MPI_COMM_RANK(inter1,irank,ierr)
if(irank==0)then
    call mpi_send(sendbuf,1,MPI_INTEGER,0,tag,inter1,ierr)
end if

The receiving code should look like:

call MPI_COMM_RANK(inter2,irank,ierr)
if(irank==0)then
    call mpi_recv(recvbuf,1,MPI_INTEGER,0,tag,inter2,stat,ierr)
    print*,'rec buff = ', recvbuf
end if

In the sample code there is a new variable, irank, that I use to query the rank of each process in the inter-communicator; that is the rank of the process in its local group. So you will have two processes of rank 0, one in each group, and so on.
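A tiny illustration of that difference, reusing the rank, irank and ierr variables of the program below (shown for the color 0 side; on the color 1 side replace inter1 with inter2):

! rank in MPI_COMM_WORLD: 0..3 when the program is run on 4 processes
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
! rank inside the local group of the inter-communicator: 0..1 in each group
call MPI_COMM_RANK(inter1, irank, ierr)
print *, 'world rank ', rank, ' is local rank ', irank, ' in its group'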

It is important to emphasize what other commenters on your post are saying: when building a program these days, use modern constructs like use mpi instead of include 'mpif.h' (see the comment from Vladimir F). Another piece of advice from your previous question was to use rank 0 as the local leader in both cases (rank 2 in MPI_COMM_WORLD is rank 0 in its new_comm). If I combine those two ideas, your program can look like this:

program hello
use mpi !instead of include 'mpif.h'
implicit none

    integer :: tag,ierr,rank,numtasks,color,new_comm,inter1,inter2
    integer :: sendbuf,recvbuf,stat(MPI_STATUS_SIZE)
    integer :: irank
    !
    tag = 22
    sendbuf = 222
    !
    call MPI_Init(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD,rank,ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD,numtasks,ierr)
    !
    if (rank < 2) then
        color = 0
    else 
        color = 1
    end if
    !
    call MPI_COMM_SPLIT(MPI_COMM_WORLD,color,rank,new_comm,ierr)
    !
    if (color .eq. 0) then
        ! local leader is rank 0 of new_comm; the remote leader is rank 2 of MPI_COMM_WORLD
        call MPI_INTERCOMM_CREATE(new_comm,0,MPI_COMM_WORLD,2,tag,inter1,ierr)
        !
        call MPI_COMM_RANK(inter1,irank,ierr)
        if (irank == 0) then
            call mpi_send(sendbuf,1,MPI_INTEGER,0,tag,inter1,ierr)
        end if
        !
    else if (color .eq. 1) then
        ! local leader is again rank 0 of new_comm; the remote leader is rank 0 of MPI_COMM_WORLD
        call MPI_INTERCOMM_CREATE(new_comm,0,MPI_COMM_WORLD,0,tag,inter2,ierr)
        call MPI_COMM_RANK(inter2,irank,ierr)
        if (irank == 0) then
            call mpi_recv(recvbuf,1,MPI_INTEGER,0,tag,inter2,stat,ierr)
            if (ierr /= MPI_SUCCESS) print *,'Error in recv'
            print *,'rec buff = ', recvbuf
        end if
    end if
    !
    call MPI_Finalize(ierr)
end program hello
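One practical note: the split assumes the program is started with at least 3 MPI processes, so that color 1 is not empty (your question implies 4). If you want to make that assumption explicit, a small guard like the following sketch could be placed right after the MPI_COMM_SIZE call; the use of MPI_ABORT and the message text are my own choice:

    if (numtasks < 3) then
        if (rank == 0) print *, 'this example needs at least 3 processes (4 in the question)'
        call MPI_ABORT(MPI_COMM_WORLD, 1, ierr)
    end if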


Source: https://stackoverflow.com/questions/38411583/send-and-receive-operations-between-communicators-in-mpi
