Exit when one process in pipe fails

Tags: backend · 2 answers · 418 views
Asked by 失恋的感觉 on 2020-11-29 06:31

The goal was to write a simple, unobtrusive wrapper that copies both its stdin and the script's stdout to stderr:

#!/bin/bash

tee /dev/stderr | ./script.sh | tee /dev/stderr

exit
2 Answers
  • 2020-11-29 07:08

    I think that you're looking for the pipefail option. From the bash man page:

    pipefail

    If set, the return value of a pipeline is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands in the pipeline exit successfully. This option is disabled by default.

    So if you start your wrapper script with

    #!/bin/bash
    
    set -e
    set -o pipefail
    

    Then the wrapper will exit when any error occurs (set -e) and will set the status of the pipeline in the way that you want.
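    A minimal sketch of the difference, using `false | true` as a stand-in for the real pipeline:

    ```shell
    #!/bin/bash

    # Default behaviour: the pipeline's status is that of the last command,
    # so the failure of `false` is hidden.
    false | true
    echo "without pipefail: $?"    # prints 0

    # With pipefail, a failure anywhere in the pipeline is reported.
    set -o pipefail
    false | true
    echo "with pipefail: $?"       # prints 1
    ```

    Combined with `set -e`, that non-zero pipeline status then aborts the wrapper immediately.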

  • 2020-11-29 07:14

    The main issue at hand here is clearly the pipe. In bash, when executing a command of the form

    command1 | command2
    

    and command2 dies or otherwise terminates, the pipe into which command1 writes its output becomes broken. A broken pipe does not terminate command1 immediately, however; that only happens the next time command1 tries to write to it, at which point it is killed by SIGPIPE. A simple demonstration of this can be seen in this question.
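    A small illustration of this (not from the original answer): `yes` keeps writing after `head` has exited, so its next write hits the broken pipe and it dies with SIGPIPE, which bash reports as status 128 + 13 = 141:

    ```shell
    #!/bin/bash

    # head exits after one line; yes is then killed by SIGPIPE on its next write.
    yes | head -n 1 > /dev/null
    echo "yes exit status: ${PIPESTATUS[0]}"    # prints 141 (128 + SIGPIPE)
    ```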

    If you want to avoid this problem, you should make use of process substitution in combination with input redirection. This way, you avoid pipes. The above pipeline is then written as:

    command2 < <(command1)
    

    In the case of the OP, this would become:

    ./script.sh < <(tee /dev/stderr) | tee /dev/stderr
    

    which can also be written as:

    ./script.sh < <(tee /dev/stderr) > >(tee /dev/stderr)
    
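    With this form, the exit status of ./script.sh is the status of the command line itself, so the wrapper can propagate it directly. A quick check, using a hypothetical failing command in place of ./script.sh:

    ```shell
    #!/bin/bash

    # The main command's status is the overall status; the process
    # substitution feeding its stdin does not mask it.
    bash -c 'exit 3' < <(echo "some input")
    echo "status: $?"    # prints 3
    ```

    Note the trade-off: a failure inside the process substitution itself (the `<(...)` part) is not reflected in `$?`.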