Restart an MPI slave from a checkpoint taken before failure, on ARMv6

Submitted by 我是研究僧i on 2019-12-02 07:12:04

Question


UPDATE

I have a university project in which I have to build a cluster of RPis. We now have a fully functional system with BLCR/MPICH on it. BLCR works very well with normal processes linked against the library. The demonstrations we have to show from our management web interface are:

  1. parallel execution of a job
  2. migration of processes across the nodes
  3. fault tolerance with MPI

We are allowed to use the simplest computations. The first point we got working easily, with MPI too. The second point we currently have working only with normal processes (without MPI). Regarding the third point, I have little idea how to implement a master-slave MPI scheme in which I can restart a slave process. This also affects point two, because we should/can/have to checkpoint the slave process, kill/stop it, and restart it on another node (see the sketch below). I know that I have to handle the MPI errors myself, but how do I restore the process? It would be nice if someone could at least post a link or paper (with explanations).
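For point three, here is a minimal sketch of the MPI error-handling side, assuming MPICH's standard MPI-2 API: the default MPI_ERRORS_ARE_FATAL handler is swapped for MPI_ERRORS_RETURN, so a dead slave shows up as a failed MPI_Recv in the master instead of aborting the whole job. The actual restore of the slave from its BLCR image still has to happen outside MPI (e.g. via BLCR's cr_restart on the target node); the comment in the code only marks where that hook would go.

/* fault-tolerant master-slave sketch - an assumption-laden example,
 * not tested on the cluster described in this post */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
  int rank, size, src, result, rc;
  MPI_Status st;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  /* make communication errors visible as return codes
   * instead of aborting the whole job */
  MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

  if (rank == 0) {                       /* master */
    for (src = 1; src < size; src++) {
      rc = MPI_Recv(&result, 1, MPI_INT, src, 0, MPI_COMM_WORLD, &st);
      if (rc != MPI_SUCCESS) {
        /* slave 'src' is gone: this is where the external
         * restart-from-BLCR-checkpoint logic would be triggered */
        fprintf(stderr, "master: lost slave %d (rc=%d)\n", src, rc);
      } else {
        printf("master: slave %d returned %d\n", src, result);
      }
    }
  } else {                               /* slave */
    result = rank * rank;                /* placeholder computation */
    MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
  }

  MPI_Finalize();
  return 0;
}

Note that, as the BAD TERMINATION output later in this post shows, the Hydra process manager may still clean up the remaining processes when a rank is killed, which is exactly why the checkpoint/restart path is needed on top of the error handler.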

Thanks in advance

UPDATE: As written earlier, our BLCR+MPICH setup works, or at least seems to: when I start MPI processes, checkpointing works well.

Here is the proof:

... snip ...
Benchmarking: dynamic_5: md5($s.$p.$s) [32/32 128x1 (MD5_Body)]... DONE
Many salts: 767744 c/s real, 767744 c/s virtual
Only one salt:  560896 c/s real, 560896 c/s virtual

Benchmarking: dynamic_5: md5($s.$p.$s) [32/32 128x1 (MD5_Body)]... [proxy:0:0@node2] requesting checkpoint
[proxy:0:0@node2] checkpoint completed
[proxy:0:1@node1] requesting checkpoint
[proxy:0:1@node1] checkpoint completed
[proxy:0:2@node3] requesting checkpoint
[proxy:0:2@node3] checkpoint completed
... snip ...

If I kill one slave process on any node, I get this:

... snip ...
===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   EXIT CODE: 9
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
... snip ...

That is OK, because we have a checkpoint and can therefore restart our application. But it doesn't work:

pi        7380  0.0  0.2   2984  1012 pts/4    S+   16:38   0:00 mpiexec -ckpointlib blcr -ckpoint-prefix /tmp -ckpoint-num 0 -f /tmp/machinefile -n 3
pi        7381  0.1  0.5   5712  2464 ?        Ss   16:38   0:00 /usr/bin/ssh -x 192.168.42.101 "/usr/local/bin/mpich/bin/hydra_pmi_proxy" --control-port masterpi:47698 --rmk user --launcher ssh --demux poll --pgid 0 --retries 10 --usize -2 --proxy-id 0
pi        7382  0.1  0.5   5712  2464 ?        Ss   16:38   0:00 /usr/bin/ssh -x 192.168.42.102 "/usr/local/bin/mpich/bin/hydra_pmi_proxy" --control-port masterpi:47698 --rmk user --launcher ssh --demux poll --pgid 0 --retries 10 --usize -2 --proxy-id 1
pi        7383  0.1  0.5   5712  2464 ?        Ss   16:38   0:00 /usr/bin/ssh -x 192.168.42.105 "/usr/local/bin/mpich/bin/hydra_pmi_proxy" --control-port masterpi:47698 --rmk user --launcher ssh --demux poll --pgid 0 --retries 10 --usize -2 --proxy-id 2
pi        7438  0.0  0.1   3548   868 pts/1    S+   16:40   0:00 grep --color=auto mpi

I don't know why, but the first time I restart the app, the process seems to be restarted on every node (I can tell from top or ps aux | grep "john"), yet no output appears on the management console/terminal. It just hangs after showing me:

mpiexec -ckpointlib blcr -ckpoint-prefix /tmp -ckpoint-num 0 -f /tmp/machinefile -n 3
Warning: Permanently added '192.168.42.102' (ECDSA) to the list of known hosts.
Warning: Permanently added '192.168.42.101' (ECDSA) to the list of known hosts.
Warning: Permanently added '192.168.42.105' (ECDSA) to the list of known hosts.

My plan B is simply to test with our own application whether the BLCR/MPICH stack really works. Maybe there is some trouble with john.

Thanks in advance

UPDATE

Next problem, with a simple hello world. I am slowly starting to despair. Maybe I am just too confused.

mpiexec -ckpointlib blcr -ckpoint-prefix /tmp/ -ckpoint-interval 3 -f /tmp/machinefile -n 4 ./hello
Warning: Permanently added '192.168.42.102' (ECDSA) to the list of known hosts.
Warning: Permanently added '192.168.42.105' (ECDSA) to the list of known hosts.
Warning: Permanently added '192.168.42.101' (ECDSA) to the list of known hosts.
[proxy:0:0@node2] requesting checkpoint
[proxy:0:0@node2] checkpoint completed
[proxy:0:1@node1] requesting checkpoint
[proxy:0:1@node1] checkpoint completed
[proxy:0:2@node3] requesting checkpoint
[proxy:0:2@node3] checkpoint completed
[proxy:0:0@node2] requesting checkpoint
[proxy:0:0@node2] HYDT_ckpoint_checkpoint (./tools/ckpoint/ckpoint.c:111): Previous checkpoint has not completed.[proxy:0:0@node2] HYD_pmcd_pmip_control_cmd_cb (./pm/pmiserv/pmip_cb.c:905): checkpoint suspend failed
[proxy:0:0@node2] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
[proxy:0:0@node2] main (./pm/pmiserv/pmip.c:206): demux engine error waiting for event
[proxy:0:1@node1] requesting checkpoint
[proxy:0:1@node1] HYDT_ckpoint_checkpoint (./tools/ckpoint/ckpoint.c:111): Previous checkpoint has not completed.[proxy:0:1@node1] HYD_pmcd_pmip_control_cmd_cb (./pm/pmiserv/pmip_cb.c:905): checkpoint suspend failed
[proxy:0:1@node1] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
[proxy:0:1@node1] main (./pm/pmiserv/pmip.c:206): demux engine error waiting for event
[proxy:0:2@node3] requesting checkpoint
[proxy:0:2@node3] HYDT_ckpoint_checkpoint (./tools/ckpoint/ckpoint.c:111): Previous checkpoint has not completed.[proxy:0:2@node3] HYD_pmcd_pmip_control_cmd_cb (./pm/pmiserv/pmip_cb.c:905): checkpoint suspend failed
[proxy:0:2@node3] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
[proxy:0:2@node3] main (./pm/pmiserv/pmip.c:206): demux engine error waiting for event
[mpiexec@masterpi] control_cb (./pm/pmiserv/pmiserv_cb.c:202): assert (!closed) failed
[mpiexec@masterpi] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
[mpiexec@masterpi] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:197): error waiting for event
[mpiexec@masterpi] main (./ui/mpich/mpiexec.c:331): process manager error waiting for completion
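One plausible explanation (an assumption, not verified here): with -ckpoint-interval 3, Hydra requests a fresh checkpoint every three seconds, and on a Raspberry Pi writing the BLCR image of every rank to /tmp can easily take longer than that, so the next request arrives while the previous one is still in progress, which is exactly what "Previous checkpoint has not completed" reports. A much longer interval should avoid the overlap, e.g.:

mpiexec -ckpointlib blcr -ckpoint-prefix /tmp/ -ckpoint-interval 60 -f /tmp/machinefile -n 4 ./hello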

hello.c

/* C Example */
#include <stdio.h>
#include <unistd.h>   /* gethostname(), getpid() */
#include <mpi.h>

int main (int argc, char *argv[])
{
  int rank, size, i, j;
  char hostname[1024];

  hostname[1023] = '\0';
  gethostname(hostname, 1023);

  MPI_Init (&argc, &argv);                /* starts MPI */
  MPI_Comm_rank (MPI_COMM_WORLD, &rank);  /* get current process id */
  MPI_Comm_size (MPI_COMM_WORLD, &size);  /* get number of processes */

  /* busy loop so the job lives long enough for -ckpoint-interval to
     fire; j was uninitialized in the original (undefined behavior),
     so both counters are initialized here, and since j is never
     reset, the inner loop only counts up once */
  for (i = 0, j = 0; i < 400000000; i++) {
    for ( ; j < 4000000; j++) {
    }
  }

  printf("%s done...", hostname);
  printf("%s: %d is alive\n", hostname, getpid());
  MPI_Finalize();
  return 0;
}
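For completeness: assuming a standard MPICH installation, the example is built with the mpicc compiler wrapper before being launched as above:

mpicc -o hello hello.c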

Source: https://stackoverflow.com/questions/20532769/restart-a-mpi-slave-after-checkpoint-before-failure-on-armv6
