MPI - Bsend usage

Submitted by 做~自己de王妃 on 2020-01-14 03:02:06

Question


Is MPI_Bsend a good choice when I want to free resources immediately after an asynchronous send? Does this:

MPI_Bsend(&array[0], ...);
delete[] array;

risk deleting memory that is still being sent? (The worry is that by the time the matching receive is posted, the array may already have been deleted.)

UPD:

void RectMPIAngleFiller::setglobalFillerbounds1() {
    int_t SIZE = getSolver()->getNumInterpolators() * procnums;
    int_t gridnums = getSolver()->getNumGrids();

    if (layer == 1) {
        if (local_rank == 0) {
            // MPI_Isend requires a request handle as its last argument.
            MPI_Isend(&rank_size, 1, MPI_INT, 0, gridnum, MPI_COMM_WORLD, &request);
        }
    } else if (layer == 0) {
        int_t fillernumber = getSolver()->getNumInterpolators();
        int_t local_fillernum = fillernum % fillernumber;

        if (local_rank == 0 && local_fillernum == 0) {
            int_t* incomeSizes = new int_t[gridnums];  // was: new incomeSizes[gridnums]
            incomeSizes[gridnum] = getSolver()->getNumInterpolators();

            for (int_t i = 0; i < gridnums; i++) {
                if (i != gridnum)
                    MPI_Irecv(&incomeSizes[i], 1, MPI_INT, MPI_ANY_SOURCE, i, MPI_COMM_WORLD, &request);
            }
        }
    }
}

For example, I now have a function like this (it may not be fully correct yet). It collects sizes from many processes, which may be running the same code on different class instances; that is why everything is done with sends.

This function runs in an outer loop for every instance, and I want it to be completed after that whole cycle.

At the moment it only receives the sizes. I also want to delete some inner arrays and resize them according to the received sizes within the same function. And if the arrays are very large, the internal buffer used by Isend is too small to hold all the data.


Answer 1:


The distinction between buffered and non-blocking sends is a bit subtle in MPI. In practice both can be used to avoid deadlock, because both routines return control to the user before the message has been delivered (more precisely, they always return, but at that point there is no guarantee the message has been delivered). This means neither needs to wait for a matching receive to be posted, which helps avoid deadlock.

However, MPI_Bsend guarantees that the data has been copied to a buffer. It is up to the user to ensure that they have provided enough memory, via MPI_Buffer_attach, for all outstanding messages to be buffered. Whether this is one message or many more depends on the logic of your program.

MPI_Isend does not guarantee that the message has been copied to a buffer. The mental model is that the send has been deferred until later - you've asked MPI to send the message some time in the future when it is convenient. You have to wait on the associated request to ensure that the send has completed.

  • When MPI_Bsend returns it is safe to deallocate the send buffer as it is guaranteed to have been copied to the user-supplied buffer.
  • When MPI_Isend returns it is not safe to deallocate the send buffer.

  • When MPI_Wait(&request, &status) returns it is safe to deallocate the send buffer. This is because either the data has been copied into a buffer (a system buffer, not the one you provide via Buffer_attach) or because it has been safely delivered to a matching MPI_Recv.

MPI is free to choose whether or not it buffers MPI_Send. In practice, small messages are buffered but large ones are not.

Although MPI_Ibsend exists for completeness, I can't think of a real use case. In principle, it could return before the message has been copied to the user-supplied buffer so you can't free the send buffer until after the wait. So you could overlap user code with the copy? Seems a bit pointless in practice.




Answer 2:


With MPI_Isend, any modification (or freeing) of the input buffer is prohibited by the documentation: https://www.open-mpi.org/doc/v2.0/man3/MPI_Isend.3.php

A nonblocking send call indicates that the system may start copying data out of the send buffer. The sender should not modify any part of the send buffer after a nonblocking send operation is called, until the send completes. A send request can be determined to be complete by calling MPI_Wait, MPI_Waitany, MPI_Test, or MPI_Testany.

The documentation of Bsend has no such prohibition (but MPI_Buffer_attach should be called beforehand with a large enough size): https://www.open-mpi.org/doc/v2.0/man3/MPI_Bsend.3.php

MPI_Bsend performs a buffered-mode, blocking send. ... buffer space is not available for reuse by subsequent MPI_Bsends unless you are certain that the message has been received (not just that it should have been received).

If you want to combine buffered and asynchronous sends, try MPI_Ibsend (https://www.open-mpi.org/doc/v2.0/man3/MPI_Ibsend.3.php), but its documentation carries the same prohibition as Isend's:

MPI_Ibsend - Starts a nonblocking buffered send. ... A nonblocking send call indicates that the system may start copying data out of the send buffer. The sender should not modify any part of the send buffer after a nonblocking send operation is called, until the send completes.

A full list of sending modes can be found, for example, at http://www.mcs.anl.gov/research/projects/mpi/sendmode.html; without the R* (ready-mode) variants, the list is:

  • MPI_Send - MPI_Send will not return until you can use the send buffer. It may or may not block (it is allowed to buffer, either on the sender or receiver side, or to wait for the matching receive).
  • MPI_Bsend - May buffer; returns immediately and you can use the send buffer. A late add-on to the MPI specification. Should be used only when absolutely necessary.
  • MPI_Ssend - Will not return until the matching receive has been posted.
  • MPI_Isend - Nonblocking send. But not necessarily asynchronous. You can NOT reuse the send buffer until either a successful wait/test or you KNOW that the message has been received (see MPI_Request_free). Note also that while the I refers to immediate, there is no performance requirement on MPI_Isend. An immediate send must return to the user without requiring a matching receive at the destination. An implementation is free to send the data to the destination before returning, as long as the send call does not block waiting for a matching receive. Different strategies of when to send the data offer different performance advantages and disadvantages that will depend on the application.
  • MPI_Ibsend - buffered nonblocking
  • MPI_Issend - Synchronous nonblocking. Note that a Wait/Test will complete only when the matching receive is posted.


Source: https://stackoverflow.com/questions/42606441/mpi-bsend-usage
