MPI Slave processes hang when there is no more work


Question


I have a serial C++ program that I wish to parallelize. I know the basics of MPI: MPI_Send, MPI_Recv, and so on. Basically, I have a data generation algorithm that runs significantly faster than the data processing algorithm. Currently they run in series, but I was thinking of running the data generation in the root process, doing the data processing on the slave processes, and sending a message from the root to a slave containing the data to be processed. This way, each slave processes a data set and then waits for its next data set.

The problem is that, once the root process is done generating data, the program hangs because the slaves are waiting for more.

This is an example of the problem:

#include "mpi.h"

#include <cassert>
#include <cstdio>

class Generator {
  public:
    Generator(int min, int max) : value(min - 1), max(max) {}
    bool NextValue() {
      ++value;
      return value < max;
    }
    int Value() { return value; }
  private:
    int value, max;

    Generator() {}
    Generator(const Generator &other) {}
    Generator &operator=(const Generator &other) { return *this; }
};

long fibonnaci(int n) {
  assert(n > 0);
  if (n == 1 || n == 2) return 1;
  return fibonnaci(n-1) + fibonnaci(n-2);
}

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);

  int rank, num_procs;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &num_procs);

  if (rank == 0) {
    Generator generator(1, 2 * num_procs);
    int proc = 1;
    while (generator.NextValue()) {
      int value = generator.Value();
      MPI_Send(&value, 1, MPI_INT, proc, 73, MPI_COMM_WORLD);
      printf("** Sent %d to process %d.\n", value, proc);
      proc = proc % (num_procs - 1) + 1;
    }
  } else {
    while (true) {
      int value;
      MPI_Status status;
      MPI_Recv(&value, 1, MPI_INT, 0, 73, MPI_COMM_WORLD, &status);
      printf("** Received %d from process %d.\n", value, status.MPI_SOURCE);
      printf("Process %d computed %d.\n", rank, fibonnaci(2 * (value + 10)));
    }
  }

  MPI_Finalize();
  return 0;
}

Obviously not everything above is "good practice", but it is sufficient to get the point across.

If I remove the while(true) from the slave processes, the program exits once each of the slaves has exited. I would like the program to exit only after the root process has done its job AND all of the slaves have processed everything that has been sent.

If I knew how many data sets would be generated, I could have that many processes running and everything would exit nicely, but that isn't the case here.

Any suggestions? Is there anything in the API that will do this? Could this be solved better with a better topology? Would MPI_Isend or MPI_Irecv do this better? I am fairly new to MPI, so bear with me.

Thanks


Answer 1:


The usual practice is to send all worker processes an empty message with a special tag that signals them to exit the infinite processing loop. Let's say this tag is 42. You would do something like this in the worker loop:

while (true) {
  int value;
  MPI_Status status;
  MPI_Recv(&value, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
  if (status.MPI_TAG == 42) {
    printf("Process %d exiting work loop.\n", rank);
    break;
  }
  printf("** Received %d from process %d.\n", value, status.MPI_SOURCE);
  printf("Process %d computed %d.\n", rank, fibonnaci(2 * (value + 10)));
}

The manager process would do something like this after the generator loop:

for (int i = 1; i < num_procs; i++)
  MPI_Send(&i, 0, MPI_INT, i, 42, MPI_COMM_WORLD);

Regarding your next question: using MPI_Isend() in the master process would de-serialise the execution and increase performance. The truth, however, is that you are sending very small messages, and those are typically buffered internally (WARNING - implementation dependent!), so your MPI_Send() is effectively non-blocking and you already have non-serial execution.

MPI_Isend() returns an MPI_Request handle that you need to take care of later. You could wait for it to complete with MPI_Wait() or MPI_Waitall(), but you could also just call MPI_Request_free() on it and it will be freed automatically once the operation is over. The latter is usually done when you want to send many messages asynchronously and do not care when the sends complete, but it is bad practice nevertheless, since a large number of outstanding requests can consume lots of precious memory.

As for the worker processes: they need the data in order to proceed with the computation, so using MPI_Irecv() is not necessary.
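To make the MPI_Isend() bookkeeping concrete, here is a minimal sketch (not from the original answer; the container choices and variable names are illustrative assumptions) of how the master's generator loop could post non-blocking sends and then wait for all of them with MPI_Waitall() before sending the tag-42 termination messages:

#include <deque>
#include <vector>

// ... inside main(), after the generator is constructed as in the question ...
std::deque<int> values;             // deque keeps element addresses stable on push_back,
                                    // so each send buffer stays valid while its send is in flight
std::vector<MPI_Request> requests;

int proc = 1;
while (generator.NextValue()) {
  values.push_back(generator.Value());
  MPI_Request request;
  MPI_Isend(&values.back(), 1, MPI_INT, proc, 73, MPI_COMM_WORLD, &request);
  requests.push_back(request);
  proc = proc % (num_procs - 1) + 1;
}

// Block until every outstanding send has completed; only then may the buffers be destroyed.
MPI_Waitall(static_cast<int>(requests.size()), requests.data(), MPI_STATUSES_IGNORE);

After MPI_Waitall() returns, the master proceeds to the tag-42 loop shown above, so the workers can leave their receive loops and reach MPI_Finalize().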

Welcome to the wonderful world of MPI programming!



Source: https://stackoverflow.com/questions/10490983/mpi-slave-processes-hang-when-there-is-no-more-work
