mpi

Python “print” not working when embedded into MPI program

核能气质少年 submitted on 2021-02-04 13:51:22

Question: I have a Python 3 interpreter embedded into a C++ MPI application. This application loads a script and passes it to the interpreter. When I execute the program on 1 process without the MPI launcher (simply calling ./myprogram), the script is executed properly and its "print" statements output to the terminal. When the script has an error, I print it on the C++ side using PyErr_Print(). However, when I launch the program through mpirun (even on a single process), I don't get any output from
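For context (an inference, since the question above is truncated): the usual culprit is stdio buffering. When mpirun redirects a rank's stdout through a pipe, it becomes block-buffered rather than line-buffered, so printed text sits in the buffer until the process exits cleanly. A minimal sketch of the standard workaround, flushing both the Python-level and C-level buffers after each script run (run_user_script is a hypothetical wrapper name, not from the question):

    #include <Python.h>
    #include <stdio.h>

    /* Run a user script, report errors, and force buffered output to
       appear even when stdout is a pipe (as it is under mpirun). */
    static void run_user_script(const char *code)
    {
        if (PyRun_SimpleString(code) != 0)
            PyErr_Print();  /* print the Python traceback on the C side */

        /* Flush Python's buffers first, then the C stdio buffers. */
        PyRun_SimpleString("import sys; sys.stdout.flush(); sys.stderr.flush()");
        fflush(stdout);
        fflush(stderr);
    }

Setting PYTHONUNBUFFERED=1 in the environment that mpirun passes to the ranks has the same effect without code changes.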

Array specification at (1) has more than 7 dimensions in mpif-sizeof.h

我的未来我决定 submitted on 2021-01-29 15:38:09

Question: I have the following makefile: BIN_SUFFIX = cpu OMP_FLAGS = -p OPT_FLAGS = -ofast compiler = mpifort compiler = mpif90 #used in gfortran MISC_FLAGS = -ffree-line-length-300 # CFLAGS += $(OPT_FLAGS) CFLAGS += $(OMP_FLAGS) CFLAGS += $(MISC_FLAGS) LFLAGS = $(CFLAGS) COMPILE = ${compiler} -c LINK = ${compiler} objects = Main_multiphase.o Main_singlephase.o Module.o Init_multiphase.o Init_singlephase.o Misc.o IO_multiphase.o IO_singlephase.o Kernel_multiphase.o Kernel_singlephase.o Mpi_misc.o

Is MPI_Allreduce on a structure with fields of the same type portable?

可紊 submitted on 2021-01-29 08:21:11

Question: Consider something like this: typedef struct TS { double a,b,c; } S; ... S x,y; ... MPI_Allreduce(&x, &y, 3, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD); Is the above code completely portable (without using MPI_Type_struct and all; all variables in the structure are assumed to be of the same type)? Also in the case when different hardware is used on the various nodes? Thanks in advance, Jac Answer 1: Hristo Iliev is completely right; the C standard allows arbitrary padding between the fields. So there's no
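A note on the portable alternative (context added here, not part of the truncated answer): predefined reduction operations such as MPI_SUM are only guaranteed to work with predefined datatypes, so the cheapest defensive fix is to verify at compile time that the struct really is three packed doubles before counting it as 3 x MPI_DOUBLE. A minimal sketch, assuming a C11 compiler for _Static_assert:

    #include <mpi.h>

    typedef struct TS { double a, b, c; } S;

    /* If this holds, S is laid out exactly like double[3], so sending
       it as 3 x MPI_DOUBLE is safe on this platform. */
    _Static_assert(sizeof(S) == 3 * sizeof(double),
                   "struct S contains padding; use a derived datatype");

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        S x = {1.0, 2.0, 3.0}, y;
        MPI_Allreduce(&x, &y, 3, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        MPI_Finalize();
        return 0;
    }

If the assertion ever fails, the fully portable route is a derived datatype built with MPI_Type_create_struct plus a user-defined MPI_Op for the reduction.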

How to send pointer in struct in MPI

谁说胖子不能爱 submitted on 2021-01-29 08:07:17

Question: I have a struct like this: typedef struct { int x; double *y; int **z; } ind; How could I send pointers like *y and **z via MPI to other processes? I know that many answers say never to send pointers via MPI. But if I cannot change *y to an array because it is used in other parts of the main program, what should I do to transfer them between processes via MPI? Especially for **z, what should I do? Thanks in advance! Answer 1: Just following the code from the second example here, I did the following.
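For context (not part of the truncated answer): a pointer value is meaningless in another process's address space, so the usual approach is to send the pointed-to data itself and rebuild the pointers on the receiving side. A sketch under the assumption that both sides know the buffer sizes; ny, nz1, and nz2 are hypothetical parameters, not from the original question:

    #include <stdlib.h>
    #include <mpi.h>

    typedef struct { int x; double *y; int **z; } ind;

    /* Send the scalar, then each buffer the pointers refer to. */
    void send_ind(const ind *s, int ny, int nz1, int nz2,
                  int dest, MPI_Comm comm)
    {
        MPI_Send(&s->x, 1, MPI_INT, dest, 0, comm);
        MPI_Send(s->y, ny, MPI_DOUBLE, dest, 1, comm);
        for (int i = 0; i < nz1; ++i)      /* z: nz1 rows of nz2 ints */
            MPI_Send(s->z[i], nz2, MPI_INT, dest, 2, comm);
    }

    /* Receive into freshly allocated local buffers, rebuilding the
       pointer structure on this side. */
    void recv_ind(ind *r, int ny, int nz1, int nz2,
                  int src, MPI_Comm comm)
    {
        MPI_Recv(&r->x, 1, MPI_INT, src, 0, comm, MPI_STATUS_IGNORE);
        r->y = malloc(ny * sizeof *r->y);
        MPI_Recv(r->y, ny, MPI_DOUBLE, src, 1, comm, MPI_STATUS_IGNORE);
        r->z = malloc(nz1 * sizeof *r->z);
        for (int i = 0; i < nz1; ++i) {
            r->z[i] = malloc(nz2 * sizeof **r->z);
            MPI_Recv(r->z[i], nz2, MPI_INT, src, 2, comm, MPI_STATUS_IGNORE);
        }
    }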

MPI Fortran support through the mpi_f08 module with gfortran

拈花ヽ惹草 submitted on 2021-01-29 06:09:00

Question: I have some Fortran code I would like to parallelize with MPI. Apparently, the recommended way to use MPI (MPICH, in my case) with Fortran is through the mpi_f08 module (mpi-forum entry on the matter), but I have trouble making it work, since the corresponding mod file is simply not created (unlike mpi.mod, which works fine but is not up to date with the Fortran standard). This discussion left me under the impression that it's because gfortran can't build the F08 bindings. Below you can see my configuration,

C/C++ MPI speedup is not as expected

此生再无相见时 submitted on 2021-01-28 15:14:26

Question: I am trying to write an MPI application to speed up a math algorithm with a computer cluster. But before that I am doing some benchmarking, and the first results are not as good as expected. The test application has linear speedup with 4 cores, but with 5 or 6 cores it does not speed up any further. I am doing a test on the Odroid N2 platform. It has 6 cores. Nproc says there are 6 cores available. Am I missing some kind of configuration? Or is my code not prepared well enough (it is based
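One plausible explanation, inferred rather than taken from the truncated question: the Odroid N2's six cores are not identical. Its Amlogic S922X pairs four fast Cortex-A73 cores with two slower Cortex-A53 cores, so ranks scheduled onto the slow cores would cap the overall speedup at roughly the four-big-core level. A minimal per-rank timing sketch to check this:

    #include <stdio.h>
    #include <mpi.h>

    /* Each rank does the same fixed amount of floating-point work;
       on a big.LITTLE CPU, ranks running on the slow cores will
       report visibly longer times. */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        volatile double x = 0.0;
        double t0 = MPI_Wtime();
        for (long i = 0; i < 100000000L; ++i)
            x += 1e-9;
        double dt = MPI_Wtime() - t0;

        printf("rank %d: %.3f s\n", rank, dt);
        MPI_Finalize();
        return 0;
    }

Run it with mpirun -np 6 and compare the per-rank times; a clear fast/slow split would confirm the heterogeneous-core explanation.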
