Is it possible to run OpenMPI on a local computer AND a remote cluster?

忘了有多久 2021-02-10 16:00

I have a set of computational operations that need to be performed on a cluster (maybe like 512 MPI processes). Right now, I have the root node on the cluster open a socket and tr…

2 Answers
  •  一整个雨季
    2021-02-10 16:41

    Yes, it is possible, as long as there is a network path between the cluster node and your machine. The MPI standard provides the abstract mechanisms to do it, while Open MPI provides a really simple way to make things work. You have to look into the Process Creation and Management section of the standard (Chapter 10 of MPI-2.2), and specifically into the Establishing Communication subsection (§10.4 of MPI-2.2). Basically the steps are:

    1. You start both MPI jobs separately. This is obviously what you do, so nothing new here.
    2. One of the jobs creates a network port using MPI_Open_port(). This MPI call returns a unique port name that then has to be published as a well-known service name using MPI_Publish_name(). Once the port is opened, it can be used to accept client connections by calling the blocking routine MPI_Comm_accept(). The job has now become the server job.
    3. The other MPI job, referred to as the client job, first resolves the port name from the service name using MPI_Lookup_name(). Once it has the port name, it can call MPI_Comm_connect() in order to connect to the remote server.
    4. Once MPI_Comm_connect() is paired with the respective MPI_Comm_accept(), both jobs establish an intercommunicator between them, and messages can then be sent back and forth.
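    The steps above can be sketched in a single C source that plays either role depending on its first argument. This is only an illustrative sketch, not code from the question: the service name "ocean" is a placeholder, and each side would normally be launched as its own MPI job.

    ```c
    /* Sketch of the server/client handshake described above.
     * The service name "ocean" is an illustrative placeholder.
     * Build with mpicc; run one job with the argument "server"
     * and another with "client". */
    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        char port_name[MPI_MAX_PORT_NAME];
        MPI_Comm inter;          /* intercommunicator to the other job */
        int data;

        MPI_Init(&argc, &argv);

        if (argc > 1 && strcmp(argv[1], "server") == 0) {
            /* Step 2: open a port, publish it, and block until a
             * client connects */
            MPI_Open_port(MPI_INFO_NULL, port_name);
            MPI_Publish_name("ocean", MPI_INFO_NULL, port_name);
            MPI_Comm_accept(port_name, MPI_INFO_NULL, 0,
                            MPI_COMM_WORLD, &inter);

            /* Step 4: messages now flow over the intercommunicator */
            data = 42;
            MPI_Send(&data, 1, MPI_INT, 0, 0, inter);

            MPI_Unpublish_name("ocean", MPI_INFO_NULL, port_name);
            MPI_Close_port(port_name);
        } else {
            /* Step 3: resolve the service name, then connect */
            MPI_Lookup_name("ocean", MPI_INFO_NULL, port_name);
            MPI_Comm_connect(port_name, MPI_INFO_NULL, 0,
                             MPI_COMM_WORLD, &inter);

            MPI_Recv(&data, 1, MPI_INT, 0, 0, inter,
                     MPI_STATUS_IGNORE);
            printf("client received %d\n", data);
        }

        MPI_Comm_disconnect(&inter);
        MPI_Finalize();
        return 0;
    }
    ```

    Note that MPI_Comm_accept() blocks until a client arrives, so in a real code the server would typically accept in a loop or on a dedicated rank.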

    One intricate detail is how the client job looks up the port name given the service name. This is a less documented part of Open MPI, but it is quite easy: you have to provide the mpiexec command used to start the client job with the URI of the server job's mpiexec, which acts as a sort of directory service. To do that, launch the server job with the --report-uri - argument to make it print its URI to standard output:

    $ mpiexec --report-uri -  ./server ...
    

    It will give you a long URI of the form 1221656576.0;tcp://10.1.13.164:36351;tcp://192.168.221.41:36351. Now you have to supply this URI to the client mpiexec with the --ompi-server uri option:

    $ mpiexec --ompi-server "1221656576.0;tcp://10.1.13.164:36351..." ./client ...
    

    Note that the URI contains the addresses of all configured and enabled network interfaces present on the node where the server's mpiexec is started. You should ensure that the client is able to reach at least one of them. Since the URI contains semicolons, remember to quote it, or the shell will split it into separate commands. Also ensure that you have the TCP BTL component in the list of enabled BTL components, otherwise no messages can flow. The TCP BTL is usually enabled by default, but on some InfiniBand installations it is explicitly disabled, either by setting the corresponding value of the environment variable OMPI_MCA_btl or in the default Open MPI MCA configuration file. The MCA parameters can be overridden with the --mca option, for example:

    $ mpiexec --mca btl self,sm,openib,tcp --report-uri - ...
    
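    The environment-variable route mentioned above looks like this (a sketch; the BTL list mirrors the --mca example and should be adapted to your installation):

    ```shell
    # Equivalent to "--mca btl self,sm,openib,tcp": export the MCA
    # parameter before launching; processes started by mpiexec inherit it.
    export OMPI_MCA_btl=self,sm,openib,tcp
    # mpiexec --report-uri - ./server ...   (launched as before)
    echo "$OMPI_MCA_btl"
    # → self,sm,openib,tcp
    ```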

    Also see the answer that I gave to a similar question.
