I am working on a tool to model wave energy converters, for which I need to couple two software packages: one written in Fortran, the other in C++.
If you start both the Fortran program and the Python one in the same MPI job, you have to use something like:
mpiexec -n 1 fortran_program : -n 1 python main.py
The Fortran program will become MPI rank 0 and the Python program will be MPI rank 1. You can also start more than one instance of each executable, for example:
mpiexec -n 2 fortran_program : -n 4 python main.py
Ranks 0 and 1 will belong to the Fortran program, and ranks 2 to 5 to the Python one.
Also note that comm.recv() and the other mpi4py communication methods that start with a lowercase letter (comm.send(), comm.irecv(), etc.) use pickle under the hood and actually operate on serialised Python objects. This is not compatible with the raw character array sent by the Fortran code. You have to use the communication methods that start with a capital letter (comm.Send(), comm.Recv(), etc.), which operate on buffer-like objects such as NumPy arrays and take explicit type information. Unfortunately, my Python fu is weak and I cannot provide a complete working example right now, but the MPI part should be something like this (unverified code):
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
# Create an MPI status object
status = MPI.Status()
# Wait for a message without receiving it
comm.Probe(source=0, tag=22, status=status)
# Check the length of the message
nchars = status.Get_count(MPI.CHARACTER)
# Allocate a big enough array of single characters
data = np.empty(nchars, dtype='S1')
# Receive the message
comm.Recv([data, MPI.CHARACTER], source=0, tag=22)
# Construct somehow the string out of the individual chars in "data"
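For that last step, the character buffer can be assembled into a Python string with no MPI involved at all; a minimal sketch, where the b'hello' payload merely stands in for whatever the Fortran side actually sends:

```python
import numpy as np

# Stand-in for the buffer filled by comm.Recv(): an array of single bytes
data = np.frombuffer(b'hello', dtype='S1').copy()

# Join the individual characters and decode them into a Python string
message = data.tobytes().decode('ascii')
print(message)  # -> hello
```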
In the Fortran code you have to specify a destination rank of 1 (in the case where you run one Fortran executable and one Python one).
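To see why the lowercase methods cannot receive that raw character array, note that pickle wraps the payload in its own framing bytes, so the wire format of comm.send() is not the plain byte sequence the Fortran side transmits. A quick check with nothing but the standard library:

```python
import pickle

payload = b'hello'
pickled = pickle.dumps(payload)

# The pickled message is longer than the raw bytes and starts with a
# pickle protocol header, not with the payload itself
print(len(payload), len(pickled))
print(pickled[:2])
assert pickled != payload
```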
I would not use MPI for that purpose (unless parallel execution of the code is explicitly required). If your goal is to connect routines written in Fortran, C++ and Python then I suggest writing the (main) connecting part in Python while creating adaptors for your Fortran and C++ routines in order to import them in Python. Then you can manage all function calls in the main Python program and send data around as you wish.
Check out the following links:
f2py now ships with NumPy, allowing you to compile Fortran source code into extension modules that can be imported from Python.

You certainly cannot have both source and destination 0 when the two are different programs. You say "from process 0 to process 0", but you clearly have two different processes! One of them has a different rank number, but you don't show your actual mpirun command, so it is hard to say which one is which.
To clarify: MPI_COMM_WORLD is the communicator for all processes started by your mpirun or equivalent. You must abandon the simple mental picture in which the first Python process is rank 0, the first Fortran process is rank 0, the first C++ process is rank 0, and so on.
If you do
mpirun -n 1 python main.py : -n 1 ./fortran_main : -n 1 ./c++_main
then in MPI_COMM_WORLD the Python program will be rank 0, the Fortran process will be rank 1, and the C++ process will be rank 2. You can create communicators local to the Python subset, the Fortran subset, or the C++ subset, and each of them will have its own rank 0, but that numbering lives in a different communicator, not in MPI_COMM_WORLD.
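The world-rank numbering implied by such an MPMD command line can be illustrated with a small pure-Python helper; the function name and the (program, local rank) output format are my own invention for illustration, not part of MPI:

```python
def world_rank_map(programs):
    """Map MPI_COMM_WORLD ranks to (program name, local rank) for an
    MPMD launch given as a list of (name, nprocs) pairs."""
    mapping = {}
    world_rank = 0
    for name, nprocs in programs:
        for local_rank in range(nprocs):
            mapping[world_rank] = (name, local_rank)
            world_rank += 1
    return mapping

# mpirun -n 1 python main.py : -n 1 ./fortran_main : -n 1 ./c++_main
print(world_rank_map([('python', 1), ('fortran', 1), ('c++', 1)]))
# -> {0: ('python', 0), 1: ('fortran', 0), 2: ('c++', 0)}

# mpiexec -n 2 fortran_program : -n 4 python main.py
ranks = world_rank_map([('fortran', 2), ('python', 4)])
print(ranks[2])  # first Python process: ('python', 0), i.e. world rank 2
```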
An MPI process can spawn other processes using the function MPI_Comm_spawn(). In a Python program, this function is a method of the communicator: comm.Spawn(). See the mpi4py tutorial for an example. The spawned process is run according to an executable, which could be another Python program, a C/C++/Fortran program, or whatever you want. Then, an intercommunicator can be merged to define an intracommunicator between the master process and the spawned ones, as performed in mpi4py: Communicating between spawned processes. As a result, the master process and the spawned processes can freely communicate without any restriction.
Let's introduce a Python / C example. The Python code spawns the process and receives a character:
from mpi4py import MPI
import sys
import numpy
'''
slavec is an executable built starting from slave.c
'''
# Spawning a process running an executable
# sub_comm is an MPI intercommunicator
sub_comm = MPI.COMM_SELF.Spawn('slavec', args=[], maxprocs=1)
# common_comm is an intracommunicator across the Python process and the spawned process.
# All kinds of collective communication (Bcast...) are now possible between the Python process and the C process
common_comm = sub_comm.Merge(False)
# print('parent in common_comm', common_comm.Get_rank(), 'of', common_comm.Get_size())
data = numpy.arange(1, dtype='int8')
common_comm.Recv([data, MPI.CHAR], source=1, tag=0)
print("Python received message from C:", data)
# disconnecting the shared communicators is required to finalize the spawned process
common_comm.Disconnect()
sub_comm.Disconnect()
The C code, compiled with mpicc slave.c -o slavec -Wall, sends the character using the merged communicator:
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Comm parentcomm, intracomm;
    MPI_Init(&argc, &argv);
    //MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_get_parent(&parentcomm);
    if (parentcomm == MPI_COMM_NULL) {
        fprintf(stderr, "slave.c: I am supposed to be the spawned process!\n");
        exit(1);
    }
    MPI_Intercomm_merge(parentcomm, 1, &intracomm);
    MPI_Comm_size(intracomm, &size);
    MPI_Comm_rank(intracomm, &rank);
    //printf("child has rank %d in communicator of size %d\n", rank, size);
    char s = 42;
    printf("sending message %d from C\n", s);
    MPI_Send(&s, 1, MPI_CHAR, 0, 0, intracomm);
    MPI_Comm_disconnect(&intracomm); // disconnect after all communications
    MPI_Comm_disconnect(&parentcomm);
    MPI_Finalize();
    return 0;
}
Let's receive a character from a C++ code and send an integer to a Fortran program:
'''
slavecpp is an executable built starting from slave.cpp
'''
# Spawning a process running an executable
# sub_comm is an MPI intercommunicator
sub_comm = MPI.COMM_SELF.Spawn('slavecpp', args=[], maxprocs=1)
# common_comm is an intracommunicator across the Python process and the spawned process.
# All kinds of collective communication (Bcast...) are now possible between the Python process and the C++ process
common_comm = sub_comm.Merge(False)
# print('parent in common_comm', common_comm.Get_rank(), 'of', common_comm.Get_size())
data = numpy.arange(1, dtype='int8')
common_comm.Recv([data, MPI.CHAR], source=1, tag=0)
print("Python received message from C++:", data)
# disconnecting the shared communicators is required to finalize the spawned process
common_comm.Disconnect()
sub_comm.Disconnect()
'''
slavef90 is an executable built starting from slave.f90
'''
# Spawning a process running an executable
# sub_comm is an MPI intercommunicator
sub_comm = MPI.COMM_SELF.Spawn('slavef90', args=[], maxprocs=1)
# common_comm is an intracommunicator across the Python process and the spawned process.
# All kinds of collective communication (Bcast...) are now possible between the Python process and the Fortran process
common_comm = sub_comm.Merge(False)
# print('parent in common_comm', common_comm.Get_rank(), 'of', common_comm.Get_size())
data = numpy.arange(1, dtype='int32')
data[0] = 42
print("Python sending message to Fortran:", data)
common_comm.Send([data, MPI.INT], dest=1, tag=0)
print("Python over")
# disconnecting the shared communicators is required to finalize the spawned process
common_comm.Disconnect()
sub_comm.Disconnect()
The C++ program, compiled with mpiCC slave.cpp -o slavecpp -Wall, is very close to the C one:
#include <iostream>
#include <mpi.h>
#include <stdlib.h>
using namespace std;
int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Comm parentcomm, intracomm;
    MPI_Init(&argc, &argv);
    //MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_get_parent(&parentcomm);
    if (parentcomm == MPI_COMM_NULL) {
        cerr << "slave.cpp: I am supposed to be the spawned process!" << endl;
        exit(1);
    }
    MPI_Intercomm_merge(parentcomm, 1, &intracomm);
    MPI_Comm_size(intracomm, &size);
    MPI_Comm_rank(intracomm, &rank);
    //cout << "child has rank " << rank << " in communicator of size " << size << endl;
    char s = 42;
    cout << "sending message " << (int)s << " from C++" << endl;
    MPI_Send(&s, 1, MPI_CHAR, 0, 0, intracomm);
    MPI_Comm_disconnect(&intracomm); // disconnect after all communications
    MPI_Comm_disconnect(&parentcomm);
    MPI_Finalize();
    return 0;
}
Finally, the Fortran program, compiled with mpif90 slave.f90 -o slavef90 -Wall, receives the integer:
program test
    !
    implicit none
    !
    include 'mpif.h'
    !
    integer :: ierr, s(1), stat(MPI_STATUS_SIZE)
    integer :: parentcomm, intracomm
    !
    call MPI_INIT(ierr)
    call MPI_COMM_GET_PARENT(parentcomm, ierr)
    call MPI_INTERCOMM_MERGE(parentcomm, 1, intracomm, ierr)
    call MPI_RECV(s, 1, MPI_INTEGER, 0, 0, intracomm, stat, ierr)
    print*, 'Fortran program received: ', s
    call MPI_COMM_DISCONNECT(intracomm, ierr)
    call MPI_COMM_DISCONNECT(parentcomm, ierr)
    call MPI_FINALIZE(ierr)
end program test
With a little more work on the communicators, the "C++ process" could send a message directly to the "Fortran process", without even involving the master process in the communication.
Lastly, mixing languages in this way may seem easy, but it may not be a good solution in the long term. Indeed, you may face performance issues, and maintaining a system spread over three languages may become difficult. For the C++ part, Cython and F2PY can be valuable alternatives. After all, Python is a little bit like glue...