Create and communicate an “array of structs” using MPI Derived datatypes


Question


I am trying to program an MPI_Alltoallv using an MPI derived datatype created with MPI_Type_create_struct. I could not find any examples solving this particular problem. Most examples perform communication (Send/Recv) using a single struct member, whereas I am targeting an array of structs. Below is a simpler test code that attempts an MPI_Sendrecv operation on an array of structs described by such a derived datatype:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <stddef.h>
#include <string.h>   /* strcpy */

typedef struct sample{
  char str[12];
  int  count;
}my_struct;

int main(int argc, char **argv)
{
    int rank, count;
    my_struct *sbuf = (my_struct *) calloc (sizeof(my_struct),5);
    my_struct *rbuf = (my_struct *) calloc (sizeof(my_struct),5);
    int blens[2];
    MPI_Aint displs[2];
    MPI_Aint baseaddr, addr1, addr2;
    MPI_Datatype types[2];
    MPI_Datatype contigs[5];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    strcpy(sbuf[0].str,"ACTGCCAATTCG");
    sbuf[0].count = 10;
    strcpy(sbuf[1].str,"ACTGCCCATACG");
    sbuf[1].count = 5;
    strcpy(sbuf[2].str,"ACTGCCAATTTT");
    sbuf[2].count = 6;
    strcpy(sbuf[3].str,"CCTCCCAATTCG");
    sbuf[3].count = 12;
    strcpy(sbuf[4].str,"ACTATGAATTCG");
    sbuf[4].count = 8;

    blens[0] = 12; blens[1] = 1;
    types[0]  = MPI_CHAR; types[1]  = MPI_INT;
    for (int i=0; i<5; i++)
    {
       MPI_Get_address ( &sbuf[i], &baseaddr);
       MPI_Get_address ( &sbuf[i].str, &addr1);
       MPI_Get_address ( &sbuf[i].count, &addr2);
       displs[0] = addr1 - baseaddr;
       displs[1] = addr2 - baseaddr;

       MPI_Type_create_struct(2, blens, displs, types, &contigs[i]);
       MPI_Type_commit(&contigs[i]);
      }

    /* send to ourself */
     MPI_Sendrecv(sbuf, 5, contigs, 0, 0,
             rbuf, 5, contigs, 0, 0,
             MPI_COMM_SELF, &status);

     for (int i=0; i<5; i++)
          MPI_Type_free(&contigs[i]);

     MPI_Finalize();

     return 0;
 }

I get the following warnings at compile time:

    coll.c(53): warning #810: conversion from "MPI_Datatype={int} *" to "MPI_Datatype={int}" may lose significant bits
       MPI_Sendrecv(sbuf, 5, contigs, 0, 0,
                             ^

    coll.c(54): warning #810: conversion from "MPI_Datatype={int} *" to "MPI_Datatype={int}" may lose significant bits
               rbuf, 5, contigs, 0, 0,

And I observe the following runtime error on all processes:

    Rank 0 [Thu Jun 16 16:19:24 2016] [c0-0c2s9n1] Fatal error in MPI_Sendrecv: Invalid datatype, error stack:
    MPI_Sendrecv(232): MPI_Sendrecv(sbuf=0x9ac440, scount=5, INVALID DATATYPE, dest=0, stag=0, rbuf=0x9ac4a0, rcount=5, INVALID DATATYPE, src=0, rtag=0, MPI_COMM_SELF, status=0x7fffffff6780) failed

I am not sure what I am doing wrong. Do I need to additionally use MPI_Type_create_resized to register the extent? If so, an example based on the above scenario would really help.

Also, my main goal is to perform an MPI_Alltoallv using a similar array of structs (of size ~several thousand). Hopefully, once I get the Sendrecv to work, I can move on to MPI_Alltoallv.

Any help would be highly appreciated.


Answer 1:


The sendtype and recvtype parameters expect an argument of type MPI_Datatype. What you are passing in is an array of these, i.e. an MPI_Datatype *.

You can pass only one of these array elements at a time to this function.
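
To make that concrete, below is a minimal sketch of one corrected version (an illustrative rewrite, not the asker's exact program). Since the five datatypes built in the original loop all describe the same layout, a single handle is enough. The sketch builds one struct datatype, resizes its extent to sizeof(my_struct) with MPI_Type_create_resized so MPI strides through the array exactly as the compiler laid it out (this addresses the extent question; the resize is harmless even if there is no trailing padding), and passes that single handle to MPI_Sendrecv with a count of 5. It also widens str to 13 bytes so the terminating '\0' written by strcpy fits.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stddef.h>
#include <mpi.h>

typedef struct sample {
    char str[13];   /* 12 characters plus the terminating '\0' */
    int  count;
} my_struct;

int main(int argc, char **argv)
{
    my_struct sbuf[5], rbuf[5];
    int blens[2]          = { 13, 1 };
    MPI_Aint displs[2]    = { offsetof(my_struct, str), offsetof(my_struct, count) };
    MPI_Datatype types[2] = { MPI_CHAR, MPI_INT };
    MPI_Datatype tmp_type, struct_type;
    MPI_Status status;

    MPI_Init(&argc, &argv);

    strcpy(sbuf[0].str, "ACTGCCAATTCG"); sbuf[0].count = 10;
    strcpy(sbuf[1].str, "ACTGCCCATACG"); sbuf[1].count = 5;
    strcpy(sbuf[2].str, "ACTGCCAATTTT"); sbuf[2].count = 6;
    strcpy(sbuf[3].str, "CCTCCCAATTCG"); sbuf[3].count = 12;
    strcpy(sbuf[4].str, "ACTATGAATTCG"); sbuf[4].count = 8;

    /* One datatype describes the layout of a single struct element. */
    MPI_Type_create_struct(2, blens, displs, types, &tmp_type);
    /* Resize the extent to sizeof(my_struct) so MPI steps through the
       array exactly as the compiler laid it out. */
    MPI_Type_create_resized(tmp_type, 0, sizeof(my_struct), &struct_type);
    MPI_Type_commit(&struct_type);
    MPI_Type_free(&tmp_type);

    /* Send to ourself: one datatype handle, count 5. */
    MPI_Sendrecv(sbuf, 5, struct_type, 0, 0,
                 rbuf, 5, struct_type, 0, 0,
                 MPI_COMM_SELF, &status);

    printf("rbuf[3] = %s %d\n", rbuf[3].str, rbuf[3].count);

    MPI_Type_free(&struct_type);
    MPI_Finalize();
    return 0;
}

The same committed struct_type then carries over directly to the eventual MPI_Alltoallv: every count and displacement is given in units of that datatype (i.e. in numbers of my_struct elements), not in bytes. A rough fragment, assuming struct_type from the sketch above, my_struct arrays sendbuf/recvbuf, and a partitioning computed by the application:

/* Fragment only: sendbuf/recvbuf are my_struct arrays sized by the
   application; struct_type is the committed datatype from above. */
int nprocs;
MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

int *sendcounts = calloc(nprocs, sizeof(int));
int *sdispls    = calloc(nprocs, sizeof(int));
int *recvcounts = calloc(nprocs, sizeof(int));
int *rdispls    = calloc(nprocs, sizeof(int));

/* ... fill sendcounts/sdispls from the partitioning, exchange the counts
   (e.g. with MPI_Alltoall) to obtain recvcounts, and build rdispls as a
   prefix sum of recvcounts ... */

MPI_Alltoallv(sendbuf, sendcounts, sdispls, struct_type,
              recvbuf, recvcounts, rdispls, struct_type,
              MPI_COMM_WORLD);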



Source: https://stackoverflow.com/questions/37871068/create-and-communicate-an-array-of-structs-using-mpi-derived-datatypes
