MPI_Isend and MPI_Irecv seem to be causing a deadlock


When using MPI_Isend and MPI_Irecv you have to be sure not to modify the buffers before the requests complete, and you are definitely violating this. What if you had the receives all go into a second matrix instead of doing them in place? A minimal sketch of that idea is below.
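For illustration only, here is a hedged sketch of the "second matrix" approach; the function name, buffer layout, and partner rank are assumptions, not the code from the question. The key point is that the received data lands in a separate buffer and nothing touches either buffer until the wait returns:

    #include <mpi.h>
    #include <string.h>

    /* Sketch: receive into recv_buf rather than in place, and wait for
     * both requests before reading or writing either buffer. */
    void exchange(double *send_buf, double *recv_buf, int count,
                  int partner, MPI_Comm comm)
    {
        MPI_Request reqs[2];

        MPI_Isend(send_buf, count, MPI_DOUBLE, partner, 0, comm, &reqs[0]);
        MPI_Irecv(recv_buf, count, MPI_DOUBLE, partner, 0, comm, &reqs[1]);

        /* Neither buffer may be modified until both requests complete. */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        /* Only now is it safe to copy the received data back, if needed. */
        memcpy(send_buf, recv_buf, count * sizeof(double));
    }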

Also, global_x2 * global_y2 is your tag, but I'm not sure it will be unique for every send-receive pair, which could be messing things up. What happens if you switch to a send tag of (global_y2 * global_columns) + global_x2 and a receive tag of (global_x2 * global_columns) + global_y2?
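For concreteness, a fragment of what that tagging scheme might look like; the names global_x2, global_y2, and global_columns are taken from the question, so their exact roles here are an assumption on my part:

    /* Encoding (row, column) as row * global_columns + column gives each
     * coordinate pair a distinct tag, unlike the product, which collides
     * (e.g. 1*6 and 2*3 both give 6). */
    int send_tag = (global_y2 * global_columns) + global_x2;
    int recv_tag = (global_x2 * global_columns) + global_y2;

    MPI_Isend(send_buf, count, MPI_DOUBLE, dest,   send_tag, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(recv_buf, count, MPI_DOUBLE, source, recv_tag, MPI_COMM_WORLD, &reqs[1]);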

Edit: As for your question about output, I'm assuming you are testing this by running all your processes on the same machine and just looking at the standard output. When you do it this way, your output gets buffered oddly by the terminal, even though the printf calls all execute before the barrier. There are two ways I get around this: either print to a separate file for each process, or send your output as messages to process 0 and let it do all the actual printing.
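Here is a minimal, self-contained sketch of the second approach, with each rank producing one hypothetical line of output and rank 0 receiving and printing everything in rank order, so the lines cannot be interleaved by the terminal:

    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        char line[256];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Placeholder output; in real code this would be your results. */
        snprintf(line, sizeof line, "hello from rank %d", rank);

        if (rank == 0) {
            printf("%s\n", line);
            for (int src = 1; src < size; src++) {
                MPI_Recv(line, sizeof line, MPI_CHAR, src, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("%s\n", line);
            }
        } else {
            MPI_Send(line, (int)strlen(line) + 1, MPI_CHAR, 0, 0,
                     MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }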
