Starting a process over ssh using bash and then killing it on SIGINT


I want to start a couple of jobs on different machines using ssh. If the user then interrupts the main script I want to shut down all the jobs gracefully.


5 Answers
  • 2021-02-02 15:52

    Referencing the answer by lhunath and https://unix.stackexchange.com/questions/71205/background-process-pipe-input I came up with this script

    run.sh:

    #!/bin/bash
    log="log"
    eval "$@" \&                        # start the given command in the background
    PID=$!
    echo "running" "$@" "in PID $PID" > "$log"
    # Watchdog: blocks reading the ssh connection's stdin; when the
    # connection closes, cat returns and the command is killed.
    { (cat <&3 3<&- >/dev/null; kill $PID; echo "killed" >> "$log") & } 3<&0
    trap "echo EXIT >> $log" EXIT
    wait $PID
    

    The difference is that this version kills the process when the connection is closed, but also returns the exit code of the command when it runs to completion.

     $ ssh localhost ./run.sh true; echo $?; cat log
     0
     running true in PID 19247
     EXIT
    
     $ ssh localhost ./run.sh false; echo $?; cat log
     1
     running false in PID 19298
     EXIT
    
     $ ssh localhost ./run.sh sleep 99; echo $?; cat log
     ^C130
     running sleep 99 in PID 20499
     killed
     EXIT
    
     $ ssh localhost ./run.sh sleep 2; echo $?; cat log
     0
     running sleep 2 in PID 20556
     EXIT
    

    For a one-liner:

     ssh localhost "sleep 99 & PID=\$!; { (cat <&3 3<&- >/dev/null; kill \$PID) & } 3<&0; wait \$PID"
    

    For convenience:

     HUP_KILL="& PID=\$!; { (cat <&3 3<&- >/dev/null; kill \$PID) & } 3<&0; wait \$PID"
     ssh localhost "sleep 99 $HUP_KILL"
    

    Note: kill 0 may be preferred to kill $PID depending on the behavior needed with regard to spawned child processes. You can also kill -HUP or kill -INT if you desire.
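
    For example, a hypothetical variant of the one-liner that sends SIGINT to the whole remote process group rather than just the immediate child:

     ssh localhost "sleep 99 & PID=\$!; { (cat <&3 3<&- >/dev/null; kill -INT 0) & } 3<&0; wait \$PID"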

    Update: A secondary job control channel is better than reading from stdin.

    ssh -n -R9002:localhost:8001 -L8001:localhost:9001 localhost ./test.sh sleep 2
    

    Set job control mode and monitor the job control channel. The -R/-L pair above loops a TCP channel through the ssh connection itself (remote port 9002 forwards to local 8001, which forwards back to remote 9001), so when the ssh connection dies the channel collapses and the remote side notices:

    set -m                                          # enable job control so %N job specs work
    trap "kill %1 %2 %3" EXIT                       # clean up all three jobs on exit
    (sleep infinity | netcat -l 127.0.0.1 9001) &   # %1: hold open the far end of the channel
    (netcat -d 127.0.0.1 9002; kill -INT $$) &      # %2: returns when the ssh link dies, then signals us
    "$@" &                                          # %3: the actual command
    wait %3
    

    Finally, here's another approach and a reference to a bug filed on openssh: https://bugzilla.mindrot.org/show_bug.cgi?id=396#c14

    This is the best way I have found to do this. You want something on the server side that attempts to read stdin and then kills the process group when that read fails, and you want a stdin on the client side that blocks until the server-side process is done and that will not leave lingering processes behind the way <(sleep infinity) might.

    ssh localhost "sleep 99 < <(cat; kill -INT 0)" <&1
    

    It doesn't actually seem to redirect stdout anywhere but it does function as a blocking input and avoids capturing keystrokes.
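
    The same pattern could be wrapped in a small helper for reuse (run_remote is a hypothetical name; this assumes the remote login shell is bash, since it relies on process substitution, and the naive quoting means arguments must be free of spaces and shell metacharacters):

    run_remote() {
        local host=$1; shift
        # When the ssh connection drops, cat hits EOF and the whole
        # remote process group gets SIGINT.
        ssh "$host" "$* < <(cat; kill -INT 0)" <&1
    }
    run_remote localhost sleep 99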

  • 2021-02-02 15:53

    It would definitely be preferable to keep your cleanup managed by the ssh that starts the process rather than moving in for the kill with a second ssh session later on.

    When ssh is attached to your terminal, it behaves quite well. However, detach it from your terminal and it becomes (as you've noticed) a pain to signal or manage remote processes. You can shut down the link, but not the remote processes.

    That leaves you with one option: use the link as a way for the remote process to get notified that it needs to shut down. The cleanest way to do this is with blocking I/O. Make the remote read input from ssh, and when you want the process to shut down, send it some data so that the remote's read unblocks and it can proceed with the cleanup:

    command & read; kill $!
    

    This is what we want to run on the remote. We invoke the command that we want to run remotely; we read a line of text (which blocks until one arrives); and when the read returns, we signal the command to terminate.

    To send the signal from our local script to the remote, all we need to do now is send it a line of text. Unfortunately, bash does not give you a lot of good options here. At least, not if you want to stay compatible with bash < 4.0.

    With bash 4 we can use co-processes:

    coproc ssh user@host 'command & read; kill $!'
    trap 'echo >&"${COPROC[1]}"' EXIT
    ...
    

    Now, when the local script exits (don't trap on INT, TERM, etc.; just EXIT) it sends a newline to the file descriptor in the second element of the COPROC array. That descriptor is the write end of a pipe connected to ssh's stdin, effectively routing our newline to ssh. The remote command reads the line, the read ends, and it kills the command.
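
    A minimal self-contained sketch of the idea (user@host and the sleep are placeholders; assumes bash >= 4):

    #!/bin/bash
    # Run the remote job through a coprocess; the remote side kills the
    # job as soon as it reads a line (or EOF) on stdin.
    coproc SSH { ssh user@host 'sleep 600 & read; kill $!'; }
    trap 'echo >&"${SSH[1]}"; wait' EXIT   # write a newline, then let ssh finish

    # ... local work goes here; on exit, the trap unblocks the remote
    # read and the remote job is killed ...
    sleep 5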

    Before bash 4 things get a bit harder since we don't have co-processes. In that case, we need to do the piping ourselves:

    mkfifo /tmp/mysshcommand
    ssh user@host 'command & read; kill $!' < /tmp/mysshcommand &
    trap 'echo > /tmp/mysshcommand; rm /tmp/mysshcommand' EXIT
    

    This should work in pretty much any bash version.

  • 2021-02-02 15:55

    Try this:

    ssh -tt host command </dev/null &
    

    When you kill the local ssh process, the remote pty will close and SIGHUP will be sent to the remote process.
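
    A minimal sketch of how that plays out (host and sleep 600 are placeholders):

    # Force a remote pty with -tt; detach local stdin so ssh can background.
    ssh -tt host 'sleep 600' </dev/null &
    SSH_PID=$!
    trap 'kill $SSH_PID 2>/dev/null' INT TERM EXIT
    # Killing the local ssh closes the remote pty, and the kernel sends
    # SIGHUP to the remote process group.
    wait $SSH_PID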

  • 2021-02-02 15:56

    The solution for bash 3.2:

    mkfifo /tmp/mysshcommand
    ssh user@host 'command & read; kill $!' < /tmp/mysshcommand &
    trap 'echo > /tmp/mysshcommand; rm /tmp/mysshcommand' EXIT
    

    doesn't work. The ssh command does not show up in the ps list on the "client" machine; only after I echo something into the pipe does it appear in the client's process list. The process that appears on the "server" machine is just the command itself, not the read/kill part.

    Writing again into the pipe does not terminate the process.

    So, summarizing: I need to write into the pipe once just to get the command to start up, and writing again does not kill the remote command as expected.
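
    A likely explanation (an assumption on my part, not verified here) is that opening a FIFO blocks until both a reader and a writer have it open, so the backgrounded ssh sits in open() until the first echo arrives. One possible fix is to open and hold the write end right after starting ssh:

    mkfifo /tmp/mysshcommand
    ssh user@host 'command & read; kill $!' < /tmp/mysshcommand &
    exec 3> /tmp/mysshcommand    # rendezvous with ssh's open(); keep the write end open
    trap 'echo >&3; exec 3>&-; rm /tmp/mysshcommand' EXIT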

  • 2021-02-02 16:05

    You may want to consider mounting the remote file system and running the script from the master box. For instance, you can check whether your kernel has fuse support with the following:

    /sbin/lsmod | grep -i fuse
    

    You can then mount the remote file system with the following command:

    sshfs user@remote_system: mount_point
    

    Now just run your script on the file located in mount_point.
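
    For example (the mount point and script path are placeholders):

    mkdir -p ~/mnt/remote
    sshfs user@remote_system: ~/mnt/remote
    bash ~/mnt/remote/path/to/script.sh    # runs locally, so Ctrl-C works as usual
    fusermount -u ~/mnt/remote             # unmount when done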
