mpi

MPI_ERR_BUFFER when performing MPI I/O

时光毁灭记忆、已成空白 · submitted 2020-01-05 18:25:49
Question: I am testing MPI I/O.

    subroutine save_vtk
      integer :: filetype, fh, unit
      integer(MPI_OFFSET_KIND) :: pos
      real(RP), allocatable :: buffer(:,:,:)
      integer :: ie

      if (master) then
        open(newunit=unit, file="out.vtk", &
             access='stream', status='replace', form="unformatted", action="write")
        ! write the header
        close(unit)
      end if

      call MPI_Barrier(mpi_comm, ie)
      call MPI_File_open(mpi_comm, "out.vtk", MPI_MODE_APPEND + MPI_MODE_WRONLY, MPI_INFO_NULL, fh, ie)
      call MPI_Type_create_subarray(3, int(ng), int(nxyz),

Installing Rmpi on Ubuntu 16.04 VirtualBox

杀马特。学长 韩版系。学妹 · submitted 2020-01-05 05:32:04
Question: I created a new ubuntu-16.04.2-desktop-amd64 machine on VM VirtualBox and I want to be able to use the R environment with Rmpi. Both of the approaches below give a similar error. Updated, simplified pre-built binary approach: when Ubuntu had been installed I opened a terminal and executed the following commands:

    ~$ sudo apt-get update
    ~$ sudo apt-get install openmpi-bin
    ~$ sudo apt-get install r-base
    ~$ sudo apt-get install r-cran-rmpi
    ~$ R

This opened the R terminal and I invoked the follow

Sending large std::vector using MPI_Send and MPI_Recv doesn't complete

╄→尐↘猪︶ㄣ · submitted 2020-01-05 04:17:07
Question: I'm trying to send a std::vector using MPI. This works fine when the vector is small, but just doesn't work when the vector is large (more than ~15k doubles in the vector). When trying to send a vector with 20k doubles, the program just sits there with the CPU at 100%. Here is a minimal example:

    #include <vector>
    #include <mpi.h>

    using namespace std;

    vector<double> send_and_receive(vector<double> &local_data, int n, int numprocs, int my_rank) {
        MPI_Send(&local_data[0], n, MPI_DOUBLE, 0, 0,

How do I free a boost::mpi::request?

时光总嘲笑我的痴心妄想 · submitted 2020-01-05 03:35:28
Question: I'm trying to get MPI to disconnect a communicator, which is a tetchy business - I've put together a demo below. I've got two versions of the same idea, listening for an int: one using MPI_Irecv, and one using a boost::mpi::request. You'll note when using mpiexec -n 2 on this program that version A will happily disconnect and exit, but version B will not. Is there some trick to MPI_Request_free-ing a boost::mpi::request? That seems to be the difference here. If it matters, I'm using MSVC and

Different MPI_Datatypes for Sender and Receiver

眉间皱痕 · submitted 2020-01-04 19:53:19
Question: With MPI, can data be sent and received with different MPI_Datatypes that are derived from the same base type and have the same total length? Consider two MPI processes A and B. A has an array double a[n] and B has an array double b[m]. Both processes know that A wants to send k doubles to B that are somehow distributed in a (only A has knowledge about this distribution). B (and only B) knows how it wants to arrange the k doubles in b. So both create (via MPI_Type_indexed and MPI_Type_commit

Difference between multi-process programming with fork and MPI

隐身守侯 · submitted 2020-01-04 11:10:13
Question: Is there a difference in performance or otherwise between creating a multi-process program using the Linux fork and the functions available in the MPI library? Or is it just easier to do it in MPI because of the ready-to-use functions?

Answer 1: They don't solve the same problem. Note the difference between parallel programming and distributed-memory parallel programming. Using the fork/join model you mentioned usually is for parallel programming on the same physical machine. You generally don't

Proper use of MPI_THREAD_SERIALIZED with pthreads

笑着哭i · submitted 2020-01-04 09:06:52
Question: After reading some MPI specs I'm led to understand that, when initializing with MPI_THREAD_SERIALIZED, a program must ensure that MPI_Send/Recv calls that occur in separate threads must not overlap. In other words, you need a mutex to protect MPI calls. Consider this situation:

    Mutex mpi_lock = MUTEX_INITIALIZER;

    void thread1_function() {
        while (true) {
            /* things happen */
            lock(mpi_lock);
            MPI_Send(/* some message */);
            unlock(mpi_lock);
            /* eventually break out of loop */
        }
    }

    void thread2

MPI - one function for MPI_Init and MPI_Init_thread

只愿长相守 · submitted 2020-01-04 07:10:32
Question: Is it possible to have one function wrap both MPI_Init and MPI_Init_thread? The purpose of this is to have a cleaner API while maintaining backward compatibility. What happens to a call to MPI_Init_thread when it is not supported by the MPI runtime? How do I keep my wrapper function working for MPI implementations where MPI_Init_thread is not supported?

Answer 1: MPI_INIT_THREAD is part of the MPI-2.0 specification, which was released 15 years ago. Virtually all existing MPI implementations
