I am working with the MPI interface. I want to split a matrix (by rows) and distribute the parts among every process.
For example, I have this 7x7 square matrix M.
MPI_Scatterv needs a pointer to the data, and the data should be contiguous in memory. Your program is fine on the second part, but in your call MPI_Scatterv receives a pointer to an array of pointers rather than a pointer to the data itself. So it would be better to change the call to:
MPI_Scatterv(&m[0][0], sendcounts, displs, MPI_DOUBLE, &mParts[0][0], sendcounts[myrank], MPI_DOUBLE, root, MPI_COMM_WORLD);
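To see why &m[0][0] is the right send buffer, the rows have to sit back to back in a single allocation. A minimal sketch of such a layout (the name storage is only for illustration, not from your program):

/* Sketch: one contiguous block of n*n doubles plus a table of row pointers,
   so that &m[0][0] addresses the whole matrix at once. */
double  *storage = malloc(n * n * sizeof(double));   /* contiguous data    */
double **m       = malloc(n * sizeof(double *));     /* row pointer table  */
for (int i = 0; i < n; i++)
    m[i] = storage + i * n;                          /* m[i][j] is storage[i*n + j] */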
There are also a couple of things to change for sendcounts and displs: to go 2D, these counts must be given in elements, i.e. multiplied by n (the number of columns per row). And the receive count in MPI_Scatterv is no longer rows, but sendcounts[myrank].
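Concretely, using the BLOCK_SIZE and BLOCK_LOW macros from the code below, the counts and displacements for p processes and an n x n matrix can be filled like this (a sketch; sendcounts and displs are assumed to be int arrays of length p):

/* Sketch: per-process element counts and displacements for a row-block split. */
for (int i = 0; i < p; i++) {
    sendcounts[i] = BLOCK_SIZE(i, p, n) * n;   /* rows owned by rank i, times n columns */
    displs[i]     = BLOCK_LOW(i, p, n)  * n;   /* offset of rank i's first element      */
}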
Here is the final code:
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define BLOCK_LOW(id,p,n) ((id)*(n)/(p))
#define BLOCK_HIGH(id,p,n) ((id+1)*(n)/(p) - 1)
#define BLOCK_SIZE(id,p,n) ((id+1)*(n)/(p) - (id)*(n)/(p))
#define BLOCK_OWNER(index,p,n) (((p)*((index)+1)-1)/(n))

/* Allocates an m x n matrix as a single malloc: m row pointers followed by
   one contiguous block of m*n*size bytes that the pointers index into. */
void **matrix_create(size_t m, size_t n, size_t size) {
    size_t i;
    void **p = (void **) malloc(m*n*size + m*sizeof(void *));
    char  *c = (char *) (p + m);
    for (i = 0; i < m; i++)
        p[i] = c + i*n*size;
    return p;
}
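To show how this fits together with the corrected call, here is a minimal sketch of a main() that reuses matrix_create and the macros above. The 7x7 size, the fill values, and names such as nrows are illustrative assumptions rather than your original code, and it assumes the number of processes does not exceed n, so every rank owns at least one row.

/* Usage sketch (illustrative, not the original program): scatter the rows
   of an n x n matrix of doubles from the root to all ranks. */
int main(int argc, char *argv[]) {
    const int n = 7, root = 0;
    int p, myrank;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    /* Element counts and displacements, multiplied by n as explained above. */
    int *sendcounts = (int *) malloc(p * sizeof(int));
    int *displs     = (int *) malloc(p * sizeof(int));
    for (int i = 0; i < p; i++) {
        sendcounts[i] = BLOCK_SIZE(i, p, n) * n;
        displs[i]     = BLOCK_LOW(i, p, n)  * n;
    }

    /* Full matrix on the root (allocated on every rank for simplicity). */
    double **m = (double **) matrix_create(n, n, sizeof(double));
    if (myrank == root)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                m[i][j] = i * n + j;

    /* Local block of rows, also contiguous, so &mParts[0][0] is valid. */
    int nrows = BLOCK_SIZE(myrank, p, n);
    double **mParts = (double **) matrix_create(nrows, n, sizeof(double));

    MPI_Scatterv(&m[0][0], sendcounts, displs, MPI_DOUBLE,
                 &mParts[0][0], sendcounts[myrank], MPI_DOUBLE,
                 root, MPI_COMM_WORLD);

    printf("rank %d received %d rows starting at global row %d\n",
           myrank, nrows, BLOCK_LOW(myrank, p, n));

    free(mParts); free(m); free(displs); free(sendcounts);
    MPI_Finalize();
    return 0;
}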
If you want to know more about 2D arrays and MPI, look here.
Look also at the DMDA structure of the PETSc library, here and there.