I don't get why MPI_Reduce() gives a segmentation fault as soon as I use a custom MPI datatype that contains a dynamically allocated array. Does anyone know?
Let's look at your struct:
typedef struct mytype_s
{
int c[2];
double a;
double b;
double *d;
} MyType;
...
MyType mt;
mt.d = calloc(10,sizeof *mt.d);
And your description of this struct as an MPI type:
displacements[0] = 0;
MPI_Address(&mt->c, &start_address);
MPI_Address(&mt->a, &address);
displacements[1] = address - start_address;
MPI_Address(&mt->b, &address);
displacements[2] = address - start_address;
MPI_Address(&mt->d, &address);
displacements[3] = address - start_address;
MPI_Type_struct(4, block_lengths, displacements, typelist, MyTypeMPI);
The problem is that this MPI struct will only ever apply to the one instance of the structure you used in its definition. You have no control at all over where calloc() decides to grab memory from; it could be anywhere in virtual memory. For the next instance of this type you create, the displacement of the d array will be completely different; and even for the same instance, if you change the size of the array with realloc(), it could end up at a different displacement.
So when you send, receive, reduce, or anything else with one of these types, the MPI library will dutifully go to a probably meaningless displacement, and try to read or write from there, and that'll likely cause a segfault.
Note that this isn't an MPI thing; in using any low-level communications library, or for that matter trying to write out/read in from disk, you'd have the same problem.
Your options are to manually "marshal" the array into a message, either together with the other fields or separately; or to make d's location predictable, for example by defining it as an array of some fixed maximum size.