mpi

How to set environment variables on compute nodes in an MPI job

Submitted by 你。 on 2020-07-18 10:31:37
Question: I don't understand how the environment is set up on compute nodes when running with MPI under a scheduler. I run:

mpirun -np 1 --hostfile ./hostfile foo.sh

with foo.sh:

#!/usr/bin/env zsh
echo $LD_LIBRARY_PATH

I then do not get the LD_LIBRARY_PATH I have in an interactive shell. What are the initialization files that are executed/sourced on connection with MPI? Note: I am under zsh, and I tried to put things in .zprofile or .zshenv instead of .zshrc, but it doesn't seem to make a difference.
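
Two things are worth checking here. Remote ranks are usually launched through a non-interactive shell, so the set of zsh startup files that run can differ from an interactive login; and with Open MPI, variables can be forwarded explicitly with mpirun's -x flag (e.g. mpirun -x LD_LIBRARY_PATH ...), while MPICH's mpiexec offers -genvlist. A minimal mpi4py sketch (assuming mpi4py is available on the nodes) to see what each rank actually inherits:

import os
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
host = MPI.Get_processor_name()
# Print the environment exactly as seen on the compute node, per rank.
print(f"rank {rank} on {host}: LD_LIBRARY_PATH={os.environ.get('LD_LIBRARY_PATH')}")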

mpi4py: substantial slowdown by idle cores

Submitted by 半城伤御伤魂 on 2020-07-10 03:50:51
Question: I have a Python script that uses MPI for parallel calculations. The scheme of the calculations is the following: data processing round 1 - data exchange between processes - data processing round 2. I have a 16-logical-core machine (2 x Intel Xeon E5520 @ 2.27 GHz). For a reason, round 1 cannot be run in parallel; therefore, 15 cores stay idle. Despite this, the calculations experience a more than 2-fold slowdown. The problem is illustrated by this script (saved as test.py):

from mpi4py …
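
The excerpt cuts off before the script body, but the symptom has a common explanation (an assumption here, not a confirmed diagnosis): ranks blocked in MPI calls typically spin-wait at 100% CPU, so the 15 "idle" ranks still compete with the working rank for shared resources such as hyperthreads, caches, and turbo headroom. A minimal mpi4py sketch of the round-1 / exchange / round-2 pattern being described:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Round 1: serial work on rank 0 only. The other ranks block in bcast,
# and with most MPI implementations they spin-wait while blocked.
data = None
if rank == 0:
    data = sum(i * i for i in range(10_000_000))  # stand-in for real work

# Data exchange between processes.
data = comm.bcast(data, root=0)

# Round 2: parallel work on every rank.
local = data % (rank + 1)
results = comm.gather(local, root=0)
if rank == 0:
    print(results)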

Running unit tests with mpirun using ant

Submitted by 陌路散爱 on 2020-06-26 13:57:32
Question: I'm trying to run my unit tests through mpirun using ant. I have specified the task as:

<target name="unitTest" depends="buildUnitTest">
  <mkdir dir="reports"/>
  <junit fork="yes" jvm="mpirun java" printsummary="yes" haltonfailure="yes">
    <classpath>
      <pathelement location="./bin"/>
      <pathelement location="/usr/share/java/junit4.jar"/>
    </classpath>
    <jvmarg value="-DDIM=3"/>
    <jvmarg value="-ea"/>
    <formatter type="plain"/>
    <batchtest todir="reports">
      <fileset dir="test">
        <include name="haparanda…
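
The excerpt ends before any answer. One workaround sometimes suggested for this kind of setup (an assumption here, not a fix verified against this build) is that the junit task's jvm attribute expects a single executable, so "mpirun java" can be hidden behind a wrapper and jvm pointed at that. A hypothetical wrapper in Python (the path, rank count, and name mpijava are all invented for illustration):

#!/usr/bin/env python3
# Save as e.g. /usr/local/bin/mpijava and mark it executable, then use
# <junit fork="yes" jvm="/usr/local/bin/mpijava" ...> in the build file.
import os
import sys

# Replace this process with "mpirun -np 4 java <args ant passes to the JVM>".
os.execvp("mpirun", ["mpirun", "-np", "4", "java"] + sys.argv[1:])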

MPI, python, Scatterv, and overlapping data

Submitted by 跟風遠走 on 2020-05-29 10:20:07
Question: The MPI standard (version 3.0) says about MPI_Scatterv: "The specification of counts, types, and displacements should not cause any location on the root to be read more than once." However, my testing of mpi4py in Python with the code below does not indicate that there is a problem with reading data from the root more than once:

import numpy as np
from sharethewealth import sharethewealth

comm = MPI.COMM_WORLD
nprocs = comm.Get_size()
rank = comm.Get_rank()

counts = [16, 17, 16, 16, 16, 16, 15]
displs = …
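
Note the standard's wording is "should not", i.e. advice to the programmer rather than a requirement on the implementation: overlapping reads may well work on a given MPI library, but a portable program must not rely on them. A minimal mpi4py sketch (buffer contents and sizes invented for illustration; run with exactly 2 ranks) of a Scatterv whose displacements deliberately overlap on the root:

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# rank 0 reads sendbuf[0:3], rank 1 reads sendbuf[2:5]: element 2 of the
# root buffer is read twice, which the standard says "should not" be done.
counts = (3, 3)
displs = (0, 2)
sendbuf = np.arange(5, dtype='d') if rank == 0 else None
recvbuf = np.empty(3, dtype='d')
comm.Scatterv([sendbuf, counts, displs, MPI.DOUBLE], recvbuf, root=0)
print(rank, recvbuf)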

Why don't files open when I add MPI?

Submitted by 大城市里の小女人 on 2020-05-17 07:46:26
Question: When my program works without MPI, everything is fine with opening files, but when I add MPI, the files do not open. Why is that? My code:

void fileEntry(string path, int n) {
    ofstream fout;
    fout.open(path);
    if (!fout.is_open()) {
        cout << "File open error";
    } else {
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                fout << rand() % 100 << " ";
            }
            fout << "\n";
        }
    }
    fout.close();
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &WORLD_RANK); …
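
The excerpt stops at MPI_Comm_rank, so the real cause is not visible. Two frequent culprits in this situation (assumptions, not a confirmed diagnosis) are that mpirun starts the ranks in a different working directory, so a relative path no longer points where expected, and that every rank opens the same path at once and the ranks clobber each other. A small mpi4py sketch of the usual defensive pattern, absolute paths plus one file per rank:

import os
from mpi4py import MPI

rank = MPI.COMM_WORLD.Get_rank()

# Show the directory this rank actually runs in, then write to a
# per-rank file using an absolute path so ranks cannot collide.
print(f"rank {rank} cwd: {os.getcwd()}")
path = os.path.join(os.getcwd(), f"output_rank{rank}.txt")
with open(path, "w") as fout:
    fout.write(f"hello from rank {rank}\n")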

Inconsistent rows allocation in scalapack

Submitted by 半城伤御伤魂 on 2020-05-17 07:07:21
Question: Consider the following simple Fortran program:

program test_vec_allocation
    use mpi
    implicit none
    integer(kind=8) :: N
    ! =========================BLACS and MPI=======================
    integer :: ierr, size, rank, dims(2)
    ! -------------------------------------------------------------
    integer, parameter :: block_size = 100
    integer :: context, nprow, npcol, local_nprow, local_npcol
    integer :: numroc, indxl2g, descmat(9), descvec(9)
    integer :: mloc_mat, nloc_mat, mloc_vec, nloc_vec

    call blacs_pinfo …
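
For background on where the local row counts come from: ScaLAPACK's NUMROC routine computes how many entries of a block-cyclically distributed dimension land on a given process, and seemingly "inconsistent" allocations usually follow directly from it. A Python transcription of NUMROC's standard logic, as a sketch for building intuition rather than a replacement for the library routine:

def numroc(n, nb, iproc, isrcproc, nprocs):
    """Local count for an n-long dimension split into blocks of nb and
    dealt cyclically over nprocs processes, as seen by process iproc
    (process isrcproc owns the first block)."""
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb                 # number of complete blocks
    num = (nblocks // nprocs) * nb    # full rounds of block dealing
    extrablocks = nblocks % nprocs    # leftover complete blocks
    if mydist < extrablocks:
        num += nb                     # this process gets one of them
    elif mydist == extrablocks:
        num += n % nb                 # this process gets the partial block
    return num

# 250 rows in blocks of 100 over 2 process rows: process 0 owns blocks
# 0 and 2 (100 + 50 = 150 rows), process 1 owns block 1 (100 rows).
print(numroc(250, 100, 0, 0, 2), numroc(250, 100, 1, 0, 2))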

Efficient way to find norm of distributed vector in SCALAPACK

Submitted by 本秂侑毒 on 2020-05-17 06:12:05
Question: Consider the following piece of code using ScaLAPACK:

! if (norm2(h-x0) < tol) then
tmp_vec = h - x0
call pdnrm2(N, norm, tmp_vec, 1, 1, descvec, 1)
if (norm < tol) then
    x = h
    converged = .true.
    exit
endif
s = r0 - alpha*v
call pdgemv('N', N, N, 1.0, A, 1, 1, descmat, s, 1, 1, descvec, 1, 0.0, t, 1, 1, descvec, 1)

It is part of an iterative solver that I was trying. The problem is that if my processor grid is two-dimensional, my vectors do not have any elements on some of the procs, hence dnrm2 yields zero or the …
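
The excerpt is truncated, but the described symptom matches how PDNRM2 behaves: the result is only meaningful on the processes that actually hold vector elements, so ranks outside the vector's process column see a stale or zero value unless the norm is broadcast afterwards. A hedged alternative pattern in plain mpi4py (not ScaLAPACK): each rank sums the squares of whatever elements it owns, possibly none, and an allreduce gives every rank the same norm:

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# local_part: this rank's slice of the distributed vector. Ranks that
# own no elements just contribute an empty array (sum of squares 0.0).
local_part = np.array([3.0, 4.0]) if rank == 0 else np.array([], dtype='d')

local_sumsq = float(np.dot(local_part, local_part))
global_sumsq = comm.allreduce(local_sumsq, op=MPI.SUM)
norm = np.sqrt(global_sumsq)
print(rank, norm)  # every rank prints the same value (5.0 here)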

MPI_Comm_split explanation

Submitted by 不想你离开。 on 2020-05-13 07:36:57
Question: Can someone explain MPI_Comm_split and tell me more about the communicators it creates?

MPI_Comm_split(MPI_COMM_WORLD, my_row, my_rank, &my_row_comm);

This is just an example I met while reading some basic documentation. Maybe someone could tell me how this communicator works?

Answer 1: Just to begin with, let's have a look at the man page:

MPI_Comm_split(3)                     MPI                     MPI_Comm_split(3)

NAME
       MPI_Comm_split - Creates new communicators based on colors and keys

SYNOPSIS
       int MPI_Comm_split(MPI_Comm comm, int color, int key, …
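
A minimal mpi4py sketch of the same call (assuming, for illustration, a grid with 2 ranks per row): every rank passing the same color lands in the same new communicator, and key sets the rank ordering inside it, so the snippet above groups MPI_COMM_WORLD into per-row communicators ordered by world rank:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Ranks sharing a color land in the same sub-communicator; the key
# (here the world rank) orders them within it. Two ranks per row.
my_row = rank // 2
my_row_comm = comm.Split(color=my_row, key=rank)
print(f"world rank {rank} -> row {my_row}, row rank {my_row_comm.Get_rank()}")
my_row_comm.Free()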