mpi

C/C++ MPI speedup is not as expected

若如初见. Submitted on 2021-01-28 15:11:17
Question: I am trying to write an MPI application to speed up a math algorithm on a computer cluster, but before that I am doing some benchmarking. The first results are not as good as expected: the test application scales linearly up to 4 cores, but 5 or 6 cores do not speed it up any further. I am testing on an Odroid N2 platform, which has 6 cores, and nproc says 6 cores are available. Am I missing some kind of configuration? Or is my code not prepared well enough (it is based …
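
A language-neutral way to sanity-check such a benchmark is to time a fixed amount of work under MPI itself, with a barrier before and after the timed region so every rank measures the same interval. The sketch below is not the asker's program; it is a minimal mpi4py timing harness with a made-up kernel, problem size, and variable names.

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n = 10_000_000                                      # hypothetical total problem size
    local = np.arange(rank, n, size, dtype=np.float64)  # this rank's strided share

    comm.Barrier()                       # start everyone together
    t0 = MPI.Wtime()
    partial = np.sum(np.sqrt(local))     # stand-in for the real math kernel
    total = comm.reduce(partial, op=MPI.SUM, root=0)
    comm.Barrier()                       # wait for the slowest rank
    t1 = MPI.Wtime()

    if rank == 0:
        print(f"{size} ranks: {t1 - t0:.3f} s, result {total:.6g}")

Run with e.g. mpirun -np 4 python bench.py and vary the rank count. Note that a plateau beyond 4 ranks is plausible on this board regardless of the code, since the N2's six cores are not identical (four fast cores plus two slower ones).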

Segmentation Fault using MPI_Sendrecv with a 2D contiguous array

六月ゝ 毕业季﹏ Submitted on 2021-01-28 12:23:12
Question: My problem is quite simple: MPI_Sendrecv systematically generates a segfault. I had the same problem earlier with a 2D array and a basic MPI_Send, but eventually solved it; trying the same solution here did not change anything, so I am asking for help. Basically, I allocate all my matrices with this code: double** allocateMatrix(int rows, int cols) { double **M; // Row pointer double *Mdata; // Where the data will actually be stored M = calloc …
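
The excerpt is cut off, but a common cause of this segfault is handing MPI the array of row pointers (M) instead of the single contiguous block of rows*cols doubles that Mdata points to. The mpi4py sketch below (hypothetical ranks, sizes, and tags, not the asker's code) shows the same idea with a contiguous NumPy matrix passed to Sendrecv as one buffer.

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    partner = 1 - rank                         # assumes exactly two ranks

    rows, cols = 4, 5
    send = np.full((rows, cols), float(rank))  # one contiguous rows*cols buffer
    recv = np.empty((rows, cols))

    # One contiguous buffer in, one out; the analogue in C is to pass
    # &M[0][0] (or Mdata) with count rows*cols, never the double** itself.
    comm.Sendrecv(sendbuf=send, dest=partner, sendtag=0,
                  recvbuf=recv, source=partner, recvtag=0)
    print(rank, recv[0, 0])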

MPI send-receive issue in Fortran

一世执手 Submitted on 2021-01-28 09:41:52
Question: I am starting to develop a parallel code for scientific applications. I have to exchange some buffers from p0 to p1 and from p1 to p0 (I am creating ghost points at the processor boundaries). The error can be summarized by this sample code: program test use mpi implicit none integer id, ids, idr, ierr, tag, istat(MPI_STATUS_SIZE) real sbuf, rbuf call mpi_init(ierr) call MPI_COMM_RANK(MPI_COMM_WORLD,id,ierr) if(id.eq.0) then ids=0 idr=1 sbuf=1.5 tag=id else ids=1 idr=0 sbuf=3.5 tag …
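
The sample is cut off right where the tags are set, but two things commonly bite in this exchange pattern: the tag used by the sender must match the tag the receiver waits for, and two blocking sends posted first on both ranks can deadlock for large buffers. A small mpi4py sketch of a safe two-rank exchange (names and values are illustrative, not the asker's Fortran code):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    TAG = 7                                  # one tag, shared by sender and receiver

    if rank == 0:
        comm.send(1.5, dest=1, tag=TAG)      # rank 0 sends first, then receives
        rbuf = comm.recv(source=1, tag=TAG)
    else:
        rbuf = comm.recv(source=0, tag=TAG)  # rank 1 receives first, then sends
        comm.send(3.5, dest=0, tag=TAG)

    print(f"rank {rank} received {rbuf}")

Run with exactly two ranks, e.g. mpirun -np 2 python exchange.py.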

Make use of all CPUs on SLURM

空扰寡人 Submitted on 2021-01-27 19:52:00
Question: I would like to run a job on the cluster. Different nodes have different numbers of CPUs, and I have no idea which nodes will be assigned to me. What are the proper options so that the job can create as many tasks as there are CPUs on all the nodes? #!/bin/bash -l #SBATCH -p normal #SBATCH -N 4 #SBATCH -t 96:00:00 srun -n 128 ./run Answer 1: One dirty hack to achieve the objective is to use the environment variables provided by SLURM. For a sample sbatch file: #!/bin/bash #SBATCH --job-name=test …
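
The answer is truncated, but the "dirty hack" it refers to is reading the allocation geometry that SLURM exports into the job's environment. A minimal Python sketch of reading a few of the standard variables from inside an sbatch job (how the full answer combines them is not shown in the excerpt):

    import os

    nnodes = int(os.environ.get("SLURM_JOB_NUM_NODES", "1"))    # nodes in the allocation
    cpus_here = int(os.environ.get("SLURM_CPUS_ON_NODE", "1"))  # CPUs on this node
    ntasks = os.environ.get("SLURM_NTASKS")                     # set once the task count is known

    print(f"nodes={nnodes} cpus_on_this_node={cpus_here} ntasks={ntasks}")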

Kill an mpi process

狂风中的少年 Submitted on 2021-01-27 14:51:37
Question: I would like to know whether there is a way for an MPI process to send a kill signal to another MPI process. Put differently, is there a way to exit from an MPI environment gracefully while one of the processes is still active? (mpi_abort() prints an error message.) Thanks. Answer 1: No, this is not possible within an MPI application using the MPI library. Individual processes would not be aware of the location of the other processes, nor of the process IDs of the other processes - and there is nothing …
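
The rest of the answer is cut off, but the portable options really are limited: MPI has no call to signal-kill a single remote rank, so codes either agree on a shutdown message or tear the whole job down with MPI_Abort. A minimal mpi4py sketch of the latter (the error condition is hypothetical):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    fatal = (rank == 0)   # hypothetical: rank 0 hits an unrecoverable error
    if fatal:
        # Aborts every process attached to the communicator; most launchers
        # print an error message when this happens, as the question notes.
        comm.Abort(1)

    print(f"rank {rank} continues")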

Receive multiple send commands using mpi4py

≯℡__Kan透↙ Submitted on 2021-01-27 07:07:29
Question: How can I modify the following code (adapted from http://materials.jeremybejarano.com/MPIwithPython/pointToPoint.html) so that every comm.Send instance is received by root = 0 and its output printed? At the moment, only the first send command is received. #passRandomDraw.py import numpy from mpi4py import MPI from mpi4py.MPI import ANY_SOURCE import numpy as np comm = MPI.COMM_WORLD rank = comm.Get_rank() if rank == 0: randNum = numpy.zeros(1) print "Process before receiving random numbers" …
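
A single Recv consumes exactly one message, so the usual fix for "only the first send is received" is to post one receive per sender on the root, typically looping on MPI.ANY_SOURCE. A minimal sketch, independent of the linked tutorial's code:

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    buf = np.zeros(1)
    if rank == 0:
        # One Recv per sending rank; each call consumes a single message.
        for _ in range(size - 1):
            status = MPI.Status()
            comm.Recv(buf, source=MPI.ANY_SOURCE, status=status)
            print(f"received {buf[0]} from rank {status.Get_source()}")
    else:
        buf[0] = float(rank)
        comm.Send(buf, dest=0)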

parallel write to different groups with h5py

拟墨画扇 Submitted on 2020-12-15 06:18:41
Question: I'm trying to use parallel h5py to create an independent group for each process and fill each group with some data. What happens is that only one group gets created and filled with data. This is the program: from mpi4py import MPI import h5py rank = MPI.COMM_WORLD.Get_rank() f = h5py.File('parallel_test.hdf5', 'w', driver='mpio', comm=MPI.COMM_WORLD) data = range(1000) dset = f.create_dataset(str(rank), data=data) f.close() Any thoughts on what is going wrong here? Thanks a lot. Answer 1: Ok, so as …
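
The answer is cut off, but the key point with parallel HDF5 is that structural operations (creating groups and datasets) are collective: every rank must create every dataset, and only the writes can then be restricted to a rank's own piece. A minimal sketch, assuming h5py was built with MPI ('mpio') support:

    from mpi4py import MPI
    import h5py
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    with h5py.File('parallel_test.hdf5', 'w', driver='mpio', comm=comm) as f:
        # Collective: every rank creates the dataset for every rank.
        dsets = [f.create_dataset(str(r), (1000,), dtype='i8') for r in range(size)]
        # Independent: each rank fills only its own dataset.
        dsets[rank][:] = np.arange(1000)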