openmpi

How to force OpenMPI to use GCC instead of ICC? Is recompiling OpenMPI necessary?

与世无争的帅哥 submitted on 2019-12-06 01:30:04
Question: I have C code for parallel computing written for gcc, and I want to compile it on a cluster which apparently uses icc via mpicc. Correcting the code to be icc-friendly looks too time-consuming, so I wonder whether I can ask OpenMPI to use gcc instead. I don't have admin rights on that cluster, and I would actually prefer not to mess with the original configuration. If it cannot be set in e.g. a Makefile, then I could hopefully compile OpenMPI in my home directory, but I need
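A minimal sketch of the usual way out, assuming the cluster's mpicc really is an Open MPI wrapper: the wrappers honor the OMPI_* environment variables, so the underlying compiler can be swapped without rebuilding Open MPI (linking gcc-built C code against an icc-built libmpi generally works; Fortran is less forgiving because of compiler-specific module files):

    export OMPI_CC=gcc        # used by mpicc
    export OMPI_CXX=g++       # used by mpicxx / mpiCC
    mpicc -o prog prog.c
    mpicc -showme             # prints the full underlying compiler command line

If mpicc turns out to be an MPICH wrapper instead, the equivalent knob is MPICH_CC.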

OpenMPI MPI_Barrier problems

狂风中的少年 submitted on 2019-12-05 23:34:10
Question: I am having some synchronization issues using the OpenMPI implementation of MPI_Barrier:

    int rank;
    int nprocs;
    int rc = MPI_Init(&argc, &argv);
    if (rc != MPI_SUCCESS) {
        fprintf(stderr, "Unable to set up MPI");
        MPI_Abort(MPI_COMM_WORLD, rc);
    }
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("P%d\n", rank);
    fflush(stdout);
    MPI_Barrier(MPI_COMM_WORLD);
    printf("P%d again\n", rank);
    MPI_Finalize();

For mpirun -n 2 ./a.out, output should be: P0 P1 ... output is
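A likely explanation (a general fact about how mpirun works, not from the excerpt above): MPI_Barrier synchronizes the processes themselves, but each rank's stdout is forwarded to mpirun asynchronously, so lines can still interleave across the barrier. A minimal sketch of the common workaround, funneling all output through rank 0 so the print order is explicit:

    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    /* Barriers order computation, not the runtime's stdout forwarding.
       To force an output order, send each rank's text to rank 0. */
    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        char msg[64];
        snprintf(msg, sizeof msg, "P%d", rank);

        if (rank == 0) {
            printf("%s\n", msg);
            for (int src = 1; src < nprocs; src++) {
                MPI_Recv(msg, sizeof msg, MPI_CHAR, src, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("%s\n", msg);   /* printed in rank order */
            }
        } else {
            MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 0, 0,
                     MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }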

Error: libtool - while compiling an MPI program

柔情痞子 submitted on 2019-12-05 18:30:01
I'm using openSUSE Leap and I installed OpenMPI through YaST. Running which mpirun I get /usr/lib64/mpi/gcc/openmpi/bin/mpirun, and running which mpicc I get /usr/bin/mpicc. First, how do I make sure that OpenMPI is correctly installed? Second, I have a simple "hello world, I am process X" program, and running mpicc hello.c gives this output: gcc: error: libtool:: No such file or directory gcc: error: link:: No such file or directory mpicc: No such file or directory Also, I installed Eclipse for Parallel Application Developers and used a built-in example, which gives me this output at build: make all
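One thing worth checking (an assumption based on the two paths above, not stated in the question): mpirun and mpicc resolve to different locations, which suggests two MPI installs whose pieces don't match. A quick sketch:

    # Call the wrapper that lives next to the working mpirun:
    /usr/lib64/mpi/gcc/openmpi/bin/mpicc hello.c -o hello
    # or put that directory first in PATH for the whole session:
    export PATH=/usr/lib64/mpi/gcc/openmpi/bin:$PATH
    # A healthy Open MPI wrapper prints a plain gcc command line here:
    mpicc -showme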

Can you transpose array when sending using MPI_Type_create_subarray?

て烟熏妆下的殇ゞ submitted on 2019-12-05 08:00:06
I'm trying to transpose a matrix using MPI in C. Each process has a square submatrix, and I want to send it to the right process (the 'opposite' one on the grid), transposing it as part of the communication. I'm using MPI_Type_create_subarray, which takes an order argument, either MPI_ORDER_C or MPI_ORDER_FORTRAN for row-major and column-major respectively. I thought that if I sent as one of these and received as the other, then my matrix would be transposed as part of the communication. However, this doesn't seem to happen; it just stays non-transposed. The important part of the code
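Why that is (a property of how MPI defines subarray types, not stated in the excerpt): the order flag only tells MPI how to map the sizes/subsizes/starts arrays onto memory; with either flag the elements are packed and unpacked in increasing memory order, so a C-order send received as a Fortran-order type moves the same bytes to the same places and transposes nothing. A transpose can instead be expressed on the receive side with a strided column type. A minimal self-contained sketch (N, the self-send, and the variable names are illustrative, not from the question):

    #include <mpi.h>
    #include <stdio.h>

    #define N 4

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double a[N][N], b[N][N];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                a[i][j] = i * N + j;

        /* One column of a row-major N x N array, then shrink its extent
           to a single double so N of them sit side by side. */
        MPI_Datatype col, coltype;
        MPI_Type_vector(N, 1, N, MPI_DOUBLE, &col);
        MPI_Type_create_resized(col, 0, sizeof(double), &coltype);
        MPI_Type_commit(&coltype);

        /* Send the block contiguously, receive it column by column:
           incoming row i lands in column i, i.e. b = transpose(a).
           The partner here is the rank itself just to keep it short. */
        MPI_Sendrecv(&a[0][0], N * N, MPI_DOUBLE, rank, 0,
                     &b[0][0], N, coltype, rank, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        if (rank == 0) {
            for (int i = 0; i < N; i++) {
                for (int j = 0; j < N; j++)
                    printf("%5.0f", b[i][j]);
                printf("\n");
            }
        }

        MPI_Type_free(&col);
        MPI_Type_free(&coltype);
        MPI_Finalize();
        return 0;
    }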

How to use mpirun to use different CPU cores for different programs?

一笑奈何 submitted on 2019-12-04 19:18:59
I have a virtual machine with 32 cores. I am running some simulations for which I need to utilize 16 cores at a time. I use the command below to run a job on 16 cores: mpirun -n 16 program_name args > log.out 2>&1 This program runs on 16 cores. Now if I want to run the same program on the remaining cores, with different arguments, I use a similar command: mpirun -n 8 program_name diff_args > log_1.out 2>&1 The second process utilizes the same 16 cores that were utilized earlier. How can I use mpirun to run this process on 8 different cores, not the 16 that the first job was using?
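A sketch of one way to pin the two jobs to disjoint cores, assuming a reasonably recent Open MPI (these option names vary across versions, so check mpirun --help or the man page on the machine):

    # First job on cores 0-15, second on cores 16-23:
    mpirun -n 16 --bind-to core --cpu-set 0-15  program_name args      > log.out   2>&1 &
    mpirun -n 8  --bind-to core --cpu-set 16-23 program_name diff_args > log_1.out 2>&1 &

On Linux, prefixing the launch with taskset -c 16-23 is a launcher-agnostic alternative.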

Did I compile with OpenMPI or MPICH?

会有一股神秘感。 submitted on 2019-12-04 17:04:12
I have an executable on my Linux box which I know has been compiled with either the Open MPI or the MPICH libraries. Question: how do I determine which one? The following diagnostic procedure assumes that MPICH/MPICH2 and Open MPI are the only possible MPI implementations you may have linked with. Other (especially commercial) MPI implementations do exist and may have different library names and/or library symbols. First determine whether you linked dynamically:

    % ldd my_executable
        linux-vdso.so.1 =>  (0x00007ffff972c000)
        libm.so.6 => /lib/libm.so.6 (0x00007f1f3c6cd000)
        librt.so.1 => /lib/librt.so.1
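A condensed sketch of the same check (the grep patterns are the usual library names; the symbol prefixes are implementation-internal conventions, so treat them as a heuristic):

    # Dynamically linked: look at which MPI library ldd reports.
    ldd my_executable | grep -E 'libmpi|libmpich'
    #   libmpi.so*   -> Open MPI
    #   libmpich.so* -> MPICH/MPICH2

    # Statically linked: look for implementation-specific symbol prefixes.
    nm my_executable | grep -c ' ompi_'   # nonzero count suggests Open MPI
    nm my_executable | grep -ci mpid      # nonzero count suggests MPICH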

MPI BCast (broadcast) of a std::vector of structs

点点圈 submitted on 2019-12-04 16:24:52
I've got a question regarding passing a std::vector of structs via MPI. First off, the details: I'm using OpenMPI 1.4.3 (MPI-2 compliant) with gcc. Note that I can't use Boost.MPI or OOMPI; I'm bound to this version. I've got a struct to aggregate some data:

    struct Delta {
        Delta() : dX(0.0), dY(0.0), dZ(0.0) {};
        Delta(double dx, double dy, double dz) : dX(dx), dY(dy), dZ(dz) {};
        Delta(const Delta& rhs) : dX(rhs.dX), dY(rhs.dY), dZ(rhs.dZ) {};
        double dX;
        double dY;
        double dZ;
    };
    typedef std::vector<Delta> DeltaLine;

and I have a DeltaLine that I'd like to broadcast, via MPI, to all the
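A minimal sketch of the usual approach for this layout, assuming a Delta is exactly three contiguous doubles (which holds for the struct above: plain members, no virtuals); bcastDeltaLine is a made-up helper name, not from the question:

    #include <mpi.h>
    #include <vector>

    struct Delta { double dX, dY, dZ; };   // same layout as in the question
    typedef std::vector<Delta> DeltaLine;

    // Broadcast 'line' from 'root' to every rank in 'comm': first the
    // length, so receivers can allocate, then the payload as raw doubles
    // (three per Delta).
    void bcastDeltaLine(DeltaLine& line, int root, MPI_Comm comm) {
        int rank;
        MPI_Comm_rank(comm, &rank);
        unsigned long n = line.size();
        MPI_Bcast(&n, 1, MPI_UNSIGNED_LONG, root, comm);
        if (rank != root) line.resize(n);
        if (n > 0)
            MPI_Bcast(&line[0], 3 * (int)n, MPI_DOUBLE, root, comm);
    }

A named datatype built with MPI_Type_contiguous(3, MPI_DOUBLE, ...) would express the same thing and reads better in larger codes.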

Eclipse PTP: Running parallel (MPI) applications on the local machine?

China☆狼群 submitted on 2019-12-04 15:43:37
How must Eclipse PTP be configured to run MPI applications using OpenMPI on the local machine? Using "Add Resource Manager", I can choose OpenMPI and set the "Connection name" to Localhost. But I'm still asked for a username and password. Is this the right way? EmSs: Do this: sudo apt-get install openssh-server openssh-client Then follow the instructions in the PTP documentation. Source: https://stackoverflow.com/questions/12051266/eclipse-ptp-running-parallel-mpi-applications-on-the-local-machine

Dynamic nodes in OpenMPI

这一生的挚爱 submitted on 2019-12-04 11:13:25
In MPI, is it possible to add new nodes after the application has started? For example, I have two computers already running a parallel MPI application. I start another instance of this application on a third computer and add it to the existing communicator. All computers are on a local network. Steve Blackwell: No, it's not currently possible to add new nodes to a running MPI application. MPI is designed to know the total number of nodes when the program starts. Work is being done (in MPI-3, for example) on handling nodes that go down. Maybe if you can add faulty nodes back, then you can add new ones, but
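Worth noting alongside that answer (a standard MPI-2 feature, not mentioned in the excerpt): new processes, as opposed to new hosts, can be created at run time with MPI_Comm_spawn and merged into one communicator with MPI_Intercomm_merge; whether they may land on a machine outside the original mpirun allocation depends on the runtime. A minimal sketch, where "worker" is a hypothetical executable:

    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        /* Launch 2 new processes running "worker" (hypothetical binary);
           they get their own MPI_COMM_WORLD and talk to the parents
           through the intercommunicator. */
        MPI_Comm inter, merged;
        MPI_Comm_spawn("worker", MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                       0, MPI_COMM_WORLD, &inter, MPI_ERRCODES_IGNORE);

        /* Merge parents and children into a single intracommunicator. */
        MPI_Intercomm_merge(inter, 0, &merged);

        MPI_Comm_free(&merged);
        MPI_Comm_free(&inter);
        MPI_Finalize();
        return 0;
    }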

What is easier to learn and debug, OpenMP or MPI?

拈花ヽ惹草 submitted on 2019-12-04 09:53:44
I have a number-crunching C/C++ application. It is basically a main loop over different data sets. We have access to a 100-node cluster with OpenMP and MPI available. I would like to speed up the application, but I am an absolute newbie with both MPI and OpenMP. I just wonder which is the easiest to learn and to debug, even if the performance is not the best. I also wonder which is the most adequate for my main-loop application. Thanks. If your program is just one big loop, using OpenMP can be as simple as writing: #pragma omp parallel for OpenMP is only useful for shared-memory programming, which
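A self-contained sketch of that one-pragma case (work is a hypothetical stand-in for the per-data-set computation; compile with gcc -fopenmp):

    #include <stdio.h>

    /* Stand-in for the real per-data-set computation. */
    static double work(int i) { return i * 0.5; }

    int main(void) {
        double result[100];
        /* Iterations are independent, so one pragma splits them
           across the available threads. */
        #pragma omp parallel for
        for (int i = 0; i < 100; i++)
            result[i] = work(i);
        printf("result[99] = %f\n", result[99]);
        return 0;
    }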