Is it possible to run OpenMPI on a local computer AND a remote cluster?

Submitted by 早过忘川 on 2019-12-03 08:44:35
Hristo Iliev

Yes, it is possible, as long as there is a network path between the cluster nodes and your machine. The MPI standard provides the abstract mechanisms for this, and Open MPI makes them quite simple to use. Look into the Process Creation and Management chapter of the standard (Chapter 10 of MPI-2.2), and specifically into the Establishing Communication section (§10.4 of MPI-2.2). Basically, the steps are:

  1. You start both MPI jobs separately. This is obviously what you do, so nothing new here.
  2. One of the jobs creates a network port using MPI_Open_port(). This MPI call returns a unique port name that then has to be published as a well-known service name using MPI_Publish_name(). Once the port is opened, it can be used to accept client connections by calling the blocking routine MPI_Comm_accept(). The job has now become the server job.
  3. The other MPI job, referred to as the client job, first resolves the port name from the service name using MPI_Lookup_name(). Once it has the port name, it can call MPI_Comm_connect() in order to connect to the remote server.
  4. Once MPI_Comm_connect() is paired with the respective MPI_Comm_accept(), both jobs will establish an intercommunicator between them and messages could then be sent back and forth.
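The steps above can be sketched in C roughly as follows. This is a minimal illustration, not a complete program; the service name "ocean" is an arbitrary placeholder, and error checking is omitted:

```c
/* server.c -- opens a port, publishes it, and waits for one client */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char port_name[MPI_MAX_PORT_NAME];
    MPI_Comm client;
    int data = 42;

    MPI_Init(&argc, &argv);

    /* Step 2: open a port and publish it under a well-known service name */
    MPI_Open_port(MPI_INFO_NULL, port_name);
    MPI_Publish_name("ocean", MPI_INFO_NULL, port_name);

    /* Block until a client connects; the result is an intercommunicator */
    MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &client);

    /* Step 4: messages can now be sent over the intercommunicator */
    MPI_Send(&data, 1, MPI_INT, 0, 0, client);

    MPI_Unpublish_name("ocean", MPI_INFO_NULL, port_name);
    MPI_Close_port(port_name);
    MPI_Comm_disconnect(&client);
    MPI_Finalize();
    return 0;
}

/* client.c -- resolves the service name and connects to the server */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char port_name[MPI_MAX_PORT_NAME];
    MPI_Comm server;
    int data;

    MPI_Init(&argc, &argv);

    /* Step 3: resolve the port name from the service name, then connect */
    MPI_Lookup_name("ocean", MPI_INFO_NULL, port_name);
    MPI_Comm_connect(port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &server);

    MPI_Recv(&data, 1, MPI_INT, 0, 0, server, MPI_STATUS_IGNORE);
    printf("client received %d\n", data);

    MPI_Comm_disconnect(&server);
    MPI_Finalize();
    return 0;
}
```

Note that MPI_Comm_accept() and MPI_Comm_connect() are collective over the communicator passed as their fourth argument, so all ranks of each job take part in establishing the intercommunicator.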

One intricate detail is how the client job looks up the port name given the service name. This is a less documented part of Open MPI, but it is quite easy: the mpiexec command that starts the client job has to be given the URI of the server job's mpiexec, which acts as a kind of directory service. To do that, launch the server job with the --report-uri - argument so that it prints its URI to standard output:

$ mpiexec --report-uri - <other arguments like -np> ./server ...

It will give you a long URI of the form 1221656576.0;tcp://10.1.13.164:36351;tcp://192.168.221.41:36351. Now you have to supply this URI to the client's mpiexec with the --ompi-server uri option (quote it, since ; is a command separator in the shell):

$ mpiexec --ompi-server "1221656576.0;tcp://10.1.13.164:36351..." ./client ...

Note that the URI contains the addresses of all configured and enabled network interfaces present on the node where the server's mpiexec is started. You should ensure that the client is able to reach at least one of them. Also ensure that the TCP BTL component is in the list of enabled BTL components, otherwise no messages can flow. The TCP BTL is usually enabled by default, but on some InfiniBand installations it is explicitly disabled, either via the OMPI_MCA_btl environment variable or in the default Open MPI MCA configuration file. MCA parameters can be overridden with the --mca option, for example:

$ mpiexec --mca btl self,sm,openib,tcp --report-uri - ...

Also see the answer that I gave to a similar question.

Yes, it should just work out of the box if a TCP/IP connection is available (when TCP is used as the transport layer, MPI communicates over random high TCP ports). Try adding your machine to the hostfile that you supply to mpirun. If that doesn't work, you can connect to your machine directly using MPI_Open_port, which doesn't require mpirun.
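A hostfile for this setup might look like the following. The hostnames and slot counts are purely illustrative; replace them with your actual cluster nodes and local machine:

```shell
# hostfile -- hypothetical example
cluster-node-01 slots=8
cluster-node-02 slots=8
my-laptop.example.com slots=2
```

You would then pass it to mpirun with --hostfile hostfile, keeping in mind that all hosts must be able to reach each other over TCP and must have compatible Open MPI installations.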
