MPI Slave processes hang when there is no more work

慢半拍i 2021-01-03 10:51

I have a serial C++ program that I wish to parallelize. I know the basics of MPI: MPI_Send, MPI_Recv, etc. Basically, I have a data generation algo…

1 Answer
  • 2021-01-03 11:12

    The usual practice is to send every worker process an empty message with a special tag that signals it to exit the infinite processing loop. Let's say this tag is 42. You would do something like this in the worker loop:

    while (true) {
      int value;
      MPI_Status status;
      /* receive the next work item from the manager (rank 0), any tag */
      MPI_Recv(&value, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
      if (status.MPI_TAG == 42) {   /* termination tag */
        printf("Process %d exiting work loop.\n", rank);
        break;
      }
      printf("** Received %d from process %d.\n", value, status.MPI_SOURCE);
      printf("Process %d computed %d.\n", rank, fibonacci(2 * (value + 10)));
    }
    

    The manager process would do something like this after the generator loop:

    for (int i = 1; i < num_procs; i++)
      /* zero-count (empty) message; only the tag 42 carries information */
      MPI_Send(&i, 0, MPI_INT, i, 42, MPI_COMM_WORLD);
    

    Regarding your next question: using MPI_Isend() in the manager process would remove the serialisation of the sends and could increase performance. The truth, however, is that you are sending very small messages, and those are typically buffered internally (WARNING: implementation dependent!), so your MPI_Send() is effectively non-blocking and you already have non-serial execution.

    MPI_Isend() returns an MPI_Request handle that you need to take care of later. You could either wait for it to finish with MPI_Wait() or MPI_Waitall(), or you could simply call MPI_Request_free() on it and it will be freed automatically once the operation completes. The latter is usually done when you want to send many messages asynchronously and do not care when the sends complete, but it is bad practice nevertheless, since a large number of outstanding requests can consume lots of precious memory.

    As for the worker processes: they need the data in order to proceed with the computation, so using MPI_Irecv() is not necessary.
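    As a sketch, the non-blocking shutdown could look like this, assuming the same rank-0 manager, `num_procs`, and termination tag 42 as above (`shutdown_workers` is a hypothetical helper name, not part of MPI):

    ```c
    #include <mpi.h>
    #include <stdlib.h>

    /* Sketch: the manager (rank 0) tells all num_procs - 1 workers to exit
       without blocking on each individual send. Tag 42 is the termination
       tag used in the worker loop above. */
    void shutdown_workers(int num_procs)
    {
        int n = num_procs - 1;  /* one request per worker */
        MPI_Request *reqs = malloc(n * sizeof(MPI_Request));

        for (int i = 1; i < num_procs; i++)
            /* zero-count message: only the tag matters, so no buffer is needed */
            MPI_Isend(NULL, 0, MPI_INT, i, 42, MPI_COMM_WORLD, &reqs[i - 1]);

        /* complete every outstanding send before the manager finalises MPI */
        MPI_Waitall(n, reqs, MPI_STATUSES_IGNORE);
        free(reqs);
    }
    ```

    Waiting with MPI_Waitall() rather than calling MPI_Request_free() keeps the number of outstanding requests bounded, which is the memory concern mentioned above.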

    Welcome to the wonderful world of MPI programming!
