openmpi

Error: libtool - while compiling an MPI program

Submitted by 旧街凉风 on 2019-12-07 14:46:01
Question: I'm using openSUSE Leap and I installed OpenMPI through YaST. Running which mpirun I get /usr/lib64/mpi/gcc/openmpi/bin/mpirun, and running which mpicc I get /usr/bin/mpicc. First, how can I make sure that OpenMPI is correctly installed? Second, I have a simple "hello world, I am process X" program, and running mpicc hello.c I get this output:
gcc: error: libtool:: No such file or directory
gcc: error: link:: No such file or directory
mpicc: No such file or directory
Also, I installed
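
One plausible reading of the two which results is that /usr/bin/mpicc is a different (or broken) wrapper from the OpenMPI tree that owns /usr/lib64/mpi/gcc/openmpi/bin/mpirun, which could explain the libtool errors. As a sanity check rather than a definitive diagnosis, and assuming a matching mpicc exists alongside that mpirun, a minimal program can confirm whether the installation itself works:

```cpp
/* Minimal MPI hello world, used only to check that the OpenMPI install works. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank = 0, size = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello, I am process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```

If compiling with the full-path wrapper next to that mpirun (/usr/lib64/mpi/gcc/openmpi/bin/mpicc hello.c -o hello) and running /usr/lib64/mpi/gcc/openmpi/bin/mpirun -n 2 ./hello prints two lines, the OpenMPI installation is likely fine and the problem is confined to the /usr/bin/mpicc wrapper.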

An “atomic” call to cout in MPI

Submitted by 被刻印的时光 ゝ on 2019-12-07 06:24:34
Question: I am interested in whether there is a command or a technique within OpenMPI for making an atomic call that writes to stdout (or, for that matter, any stream). What I have noticed is that during the execution of MPI programs, output written to cout (or other streams) can become jumbled, as each process writes whenever it reaches a given section of code. When reporting results, a single line can end up written to by several processes, confusing the output. So two different processes might do something like this: /
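
MPI itself does not guarantee atomic writes to a shared stdout; the launcher simply merges each rank's output. A common workaround, sketched below under the assumption that each rank wants to report one line, is to funnel the strings to rank 0 and let only that rank print, so lines can never interleave (and come out in rank order as a side effect):

```cpp
// Sketch: funnel per-rank output through rank 0 so that lines from different
// ranks can never interleave on stdout.
#include <mpi.h>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    std::ostringstream line;
    line << "result from rank " << rank;       // whatever this rank wants to report
    std::string msg = line.str();

    if (rank == 0) {
        std::cout << msg << std::endl;          // rank 0 prints its own line first
        for (int src = 1; src < size; ++src) {  // then exactly one line per rank
            // Length first, then the characters, so the buffer can be sized.
            int len = 0;
            MPI_Recv(&len, 1, MPI_INT, src, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            std::vector<char> buf(len);
            MPI_Recv(buf.data(), len, MPI_CHAR, src, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            std::cout << std::string(buf.begin(), buf.end()) << std::endl;
        }
    } else {
        int len = static_cast<int>(msg.size());
        MPI_Send(&len, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        MPI_Send(msg.c_str(), len, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```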

Can you transpose array when sending using MPI_Type_create_subarray?

Submitted by 江枫思渺然 on 2019-12-07 04:01:24
Question: I'm trying to transpose a matrix using MPI in C. Each process has a square submatrix, and I want to send it to the right process (the 'opposite' one on the grid), transposing it as part of the communication. I'm using MPI_Type_create_subarray, which takes an order argument, either MPI_ORDER_C or MPI_ORDER_FORTRAN for row-major and column-major respectively. I thought that if I sent as one of these, and received as the other, then my matrix would be transposed as part of the
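
Whether flipping the order argument alone achieves this depends on how both endpoints describe the data. An alternative that does produce a transpose as part of the communication is to send the block as-is and describe columns on the receiving side with MPI_Type_vector. A minimal sketch, assuming a square N x N block of doubles and at least two ranks:

```cpp
// Sketch: send a row-major N x N block unchanged and let the *receive*
// datatype scatter the incoming rows into columns, producing the transpose.
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int N = 4;                      // assumed square block size
    double a[N * N], b[N * N];
    for (int i = 0; i < N * N; ++i) { a[i] = i; b[i] = 0.0; }

    // One column of a row-major N x N array: N elements with stride N.
    // Resizing the extent to one double lets N of these types tile the block.
    MPI_Datatype col, col_resized;
    MPI_Type_vector(N, 1, N, MPI_DOUBLE, &col);
    MPI_Type_create_resized(col, 0, (MPI_Aint)sizeof(double), &col_resized);
    MPI_Type_commit(&col_resized);

    if (rank == 0) {
        MPI_Send(a, N * N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);   // rows, as stored
    } else if (rank == 1) {
        // Row i of the incoming data lands in column i of b, so b = transpose(a).
        MPI_Recv(b, N, col_resized, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Type_free(&col_resized);
    MPI_Type_free(&col);
    MPI_Finalize();
    return 0;
}
```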

MPI_Isend and MPI_Irecv seem to be causing a deadlock

Submitted by 為{幸葍}努か on 2019-12-06 14:11:37
I'm using non-blocking communication in MPI to send various messages between processes. However, I appear to be getting a deadlock. I have used PADB (see here) to look at the message queues and have got the following output:
1:msg12: Operation 1 (pending_receive) status 0 (pending)
1:msg12: Rank local 4 global 4
1:msg12: Size desired 4
1:msg12: tag_wild 0
1:msg12: Tag desired 16
1:msg12: system_buffer 0
1:msg12: Buffer 0xcaad32c
1:msg12: 'Receive: 0xcac3c80'
1:msg12: 'Data: 4 * MPI_FLOAT'
--
1:msg32: Operation 0 (pending_send) status 2 (complete)
1:msg32: Rank local 4 global 4
1:msg32:
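
The dump shows a receive for 4 MPI_FLOATs with tag 16 still pending while a matching-looking send reports complete. One frequent cause of this pattern is several non-blocking calls sharing a single MPI_Request variable, so only the last one is ever waited on. Below is a hedged sketch of the safer bookkeeping; the exchange function, neighbour list, and buffer layout are hypothetical, with only the tag and datatype taken from the output above:

```cpp
// Sketch of the bookkeeping that avoids losing requests: one MPI_Request per
// non-blocking call, all completed together with MPI_Waitall.
#include <mpi.h>
#include <cstddef>
#include <vector>

void exchange(float *send_buf, float *recv_buf, int count,
              const std::vector<int> &neighbours, MPI_Comm comm)
{
    std::vector<MPI_Request> reqs;
    reqs.reserve(2 * neighbours.size());

    for (std::size_t i = 0; i < neighbours.size(); ++i) {
        MPI_Request r;
        MPI_Irecv(recv_buf + i * count, count, MPI_FLOAT,
                  neighbours[i], 16, comm, &r);          // tag 16 as in the dump
        reqs.push_back(r);
    }
    for (std::size_t i = 0; i < neighbours.size(); ++i) {
        MPI_Request r;
        MPI_Isend(send_buf + i * count, count, MPI_FLOAT,
                  neighbours[i], 16, comm, &r);
        reqs.push_back(r);
    }

    // Every Isend/Irecv must be completed; a request that is overwritten and
    // never waited on leaves the matching receive pending forever, which is
    // exactly what a stuck pending_receive entry looks like in PADB.
    MPI_Waitall(static_cast<int>(reqs.size()), reqs.data(), MPI_STATUSES_IGNORE);
}
```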

How to use mpirun to use different CPU cores for different programs?

Submitted by 左心房为你撑大大i on 2019-12-06 12:46:14
Question: I have a virtual machine with 32 cores. I am running some simulations for which I need to utilize 16 cores at a time. I use the command below to run a job on 16 cores:
mpirun -n 16 program_name args > log.out 2>&1
This program runs on 16 cores. Now if I want to run the same program on the remaining cores, with different arguments, I use a similar command:
mpirun -n 8 program_name diff_args > log_1.out 2>&1
The second run utilizes the same 16 cores that were used earlier. How

MPI BCast (broadcast) of a std::vector of structs

Submitted by 强颜欢笑 on 2019-12-06 08:07:44
Question: I've got a question regarding passing a std::vector of structs via MPI. First off, the details: I'm using OpenMPI 1.4.3 (MPI-2 compliant) with gcc. Note that I can't use Boost.MPI or OOMPI -- I'm bound to this version. I've got a struct to aggregate some data:
struct Delta {
    Delta() : dX(0.0), dY(0.0), dZ(0.0) {};
    Delta(double dx, double dy, double dz) : dX(dx), dY(dy), dZ(dz) {};
    Delta(const Delta& rhs) : dX(rhs.dX), dY(rhs.dY), dZ(rhs.dZ) {};
    double dX;
    double dY;
    double dZ;
};
typedef
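
A pattern that is often used for this, sketched below with a simplified plain-old-data stand-in for Delta rather than the exact struct above, is to describe one element with MPI_Type_contiguous and broadcast the vector length before the payload, since the non-root ranks need to resize their vectors first:

```cpp
// Sketch: broadcast a std::vector of three-double structs in two steps.
// The struct here is a simplified stand-in for the Delta shown above.
#include <mpi.h>
#include <vector>

struct Delta { double dX, dY, dZ; };

void bcast_deltas(std::vector<Delta> &deltas, int root, MPI_Comm comm)
{
    // Describe one Delta as three contiguous doubles (no padding assumed;
    // MPI_Type_create_struct is the more general choice if that ever changes).
    MPI_Datatype delta_type;
    MPI_Type_contiguous(3, MPI_DOUBLE, &delta_type);
    MPI_Type_commit(&delta_type);

    // Non-root ranks do not know the length yet, so broadcast it first...
    int n = static_cast<int>(deltas.size());
    MPI_Bcast(&n, 1, MPI_INT, root, comm);
    deltas.resize(n);

    // ...then broadcast the payload into the correctly sized buffers.
    if (n > 0)
        MPI_Bcast(&deltas[0], n, delta_type, root, comm);

    MPI_Type_free(&delta_type);
}
```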

Installing Lightgbm on Mac with OpenMP dependency

Submitted by 随声附和 on 2019-12-06 06:25:21
I'm new to Python and would like to install LightGBM on my MacBook. I did a pip install lightgbm and it said the installation was successful. However, when I try to import it in my notebook I get the following error message:
../anaconda/envs/python3/lib/python3.6/ctypes/__init__.py in __init__(self, name, mode, handle, use_errno, use_last_error)
    342
    343         if handle is None:
--> 344             self._handle = _dlopen(self._name, mode)
    345         else:
    346             self._handle = handle
OSError: dlopen(../anaconda/envs/python3/lib/python3.6/site-packages/lightgbm/lib_lightgbm.so, 6): Library not loaded: /usr/local/opt/gcc/lib/gcc

MPI Non-blocking Irecv didn't receive data?

Submitted by 落爺英雄遲暮 on 2019-12-06 04:38:42
I use MPI non-blocking communication (MPI_Irecv, MPI_Isend) to monitor the slaves' idle states; the code is like below. Rank 0:
int dest = -1;
while (dest <= 0) {
    int i;
    for (i = 1; i <= slaves_num; i++) {
        printf("slave %d, now is %d \n", i, idle_node[i]);
        if (idle_node[i] == 1) {
            idle_node[i] = 0;
            dest = i;
            break;
        }
    }
    if (dest <= 0) {
        MPI_Irecv(&idle_node[1], 1, MPI_INT, 1, MSG_IDLE, MPI_COMM_WORLD, &request);
        MPI_Irecv(&idle_node[2], 1, MPI_INT, 2, MSG_IDLE, MPI_COMM_WORLD, &request);
        MPI_Irecv(&idle_node[3], 1, MPI_INT, 3, MSG_IDLE, MPI_COMM_WORLD, &request);
        // MPI_Wait(&request,&status);
    }
    usleep(100000);
}
idle_node
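
One thing that stands out in the rank 0 loop is that all three MPI_Irecv calls share a single request variable and the MPI_Wait is commented out, so the receives are never completed and idle_node may never be updated. A hedged sketch of an alternative bookkeeping is below; the function name, vectors, and the i + 1 rank mapping are assumptions based on the snippet:

```cpp
// Sketch: keep one outstanding MPI_Irecv per worker in a persistent request
// array and complete them with MPI_Waitany, instead of re-posting receives in
// a loop and overwriting the single request handle.
#include <mpi.h>
#include <vector>

// reqs and idle_flag live across calls (one slot per worker); initialise every
// entry of reqs to MPI_REQUEST_NULL before the first call.
int wait_for_idle_worker(std::vector<MPI_Request> &reqs,
                         std::vector<int> &idle_flag, int msg_idle)
{
    const int slaves_num = static_cast<int>(reqs.size());

    // (Re)post a receive only for slots that are not already pending.
    for (int i = 0; i < slaves_num; ++i)
        if (reqs[i] == MPI_REQUEST_NULL)
            MPI_Irecv(&idle_flag[i], 1, MPI_INT, i + 1, msg_idle,
                      MPI_COMM_WORLD, &reqs[i]);

    // Block until some worker reports idle. Waitany resets that slot to
    // MPI_REQUEST_NULL, so it is re-posted automatically on the next call.
    int index = MPI_UNDEFINED;
    MPI_Waitany(slaves_num, reqs.data(), &index, MPI_STATUS_IGNORE);
    return index + 1;   // worker ranks start at 1, as in the code above
}
```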

Unable to use all cores with mpirun

Submitted by 僤鯓⒐⒋嵵緔 on 2019-12-06 04:17:46
Question: I'm testing a simple MPI program on my desktop (Ubuntu LTS 16.04 / Intel® Core™ i3-6100U CPU @ 2.30GHz × 4 / gcc 4.8.5 / OpenMPI 3.0.0) and mpirun won't let me use all of the cores on my machine (4). When I run:
$ mpirun -n 4 ./test2
I get the following error:
--------------------------------------------------------------------------
There are not enough slots available in the system to satisfy the 4 slots
that were requested by the application:
  ./test2
Either request fewer slots for your

mpi4py: close MPI Spawn?

Submitted by 北城以北 on 2019-12-06 03:24:56
I have some Python code in which I Spawn multiple processes very often. I get an error:
ORTE_ERROR_LOG: The system limit on number of pipes a process can open was reached in file odls_default_module.c at line 809
My code roughly looks like this:
import mpi4py
comm = MPI.COMM_WORLD
...
icomm = MPI.COMM_SELF.Spawn(sys.executable, args=["front_process.py", str(rank)], maxprocs=no_fronts)
...
message = icomm.recv(source=MPI.ANY_SOURCE, tag=21)
...
icomm.Free()
The Spawn calls happen very often, and I think the spawned communicators remain "open" after I am finished, despite the icomm.Free() call. How do I