Hudson: “yes: standard output: Broken pipe”

一生所求 2021-02-18 19:02

I need to run a shell script in Hudson. That script needs an answer from the user. To give an automatic answer I used the following command line:

    yes | ./MyScript.sh

This works fine when I run it locally, but when the Hudson job runs the command remotely I get:

    yes: standard output: Broken pipe

5 Answers
  • 2021-02-18 19:14

    You ask: "But how would you explain that I don't get this error while running the script locally, but I do get it when running it remotely from a Hudson job?"

    When you run it in a terminal (locally), yes is killed by the SIGPIPE signal, which is generated when it tries to write to the pipe after MyScript.sh has already exited.

    Whatever runs the command (remotely) in Hudson traps that signal, i.e., sets its handler to SIG_IGN (you can test this by running the trap command and searching for SIGPIPE in the output), and it does not restore the signal for new child processes (yes and whatever runs MyScript.sh, e.g., sh in your case). That leads to a write error (EPIPE) instead of the signal; yes detects the write error and reports it.
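
    A quick way to see the same behaviour locally is to ignore SIGPIPE in a parent shell yourself; this is a minimal sketch (head -n 1 stands in for ./MyScript.sh), not anything Hudson-specific:

    # locally, yes dies silently from SIGPIPE:
    yes | head -n 1                       # prints "y", no error message
    # with SIGPIPE ignored in the parent (as in Hudson), yes gets EPIPE instead:
    bash -c 'trap "" PIPE; yes | head -n 1'
    # prints "y", then: yes: standard output: Broken pipe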

    You can simply ignore the error message:

    yes 2>/dev/null | ./MyScript.sh
    

    You could also report a bug against the component that runs the pipeline. The bug is that it does not restore SIGPIPE to the default handler after the child is forked, which is what programs expect when they are run in a terminal on POSIX systems. Though I don't know whether there is a standard way to do that from a Java-based program; the JVM probably raises an exception for every write error, so not dying on SIGPIPE is not a problem for a Java program.

    It is common for daemons such as the Hudson process to ignore the SIGPIPE signal: you don't want your daemon to die just because the process it is communicating with dies, and you would be checking for write errors anyway.

    Ordinary programs that are written to be run in a terminal do not check the status of every printf() for errors, but you do want them to die if programs further down the pipeline die; e.g., in a source | sink pipeline, you usually want the source process to exit as soon as possible once sink exits.

    The EPIPE write error is returned if the SIGPIPE signal is disabled (as appears to be the case in Hudson) or if a program does not die on receiving it (yes does not define any handler for SIGPIPE, so it should die on receiving the signal).
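
    If you want to verify whether SIGPIPE really is ignored in the environment Hudson gives you, here is a Linux-only sketch (it reads /proc, so it assumes a Linux build node) that you could drop into an "Execute shell" step; it checks bit 12 of the SigIgn mask, since SIGPIPE is signal 13:

    sig_ign=$(awk '/^SigIgn/ {print $2}' /proc/self/status)   # hex mask of ignored signals
    if (( (0x$sig_ign >> 12) & 1 )); then
        echo "SIGPIPE is ignored in this environment"
    else
        echo "SIGPIPE has its default disposition"
    fi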

    You say: "I don't want to ignore the error, I want to do the right command or fix to get rid of the error."

    The only way the yes process stops is if it is killed or if it encounters a write error. If the SIGPIPE signal is set to be ignored (by the parent) and no other signal kills the process, then yes gets a write error when ./MyScript.sh exits. There are no other options if you use the yes program.

    The SIGPIPE signal and the EPIPE error communicate exactly the same information: the pipe is broken. If SIGPIPE were enabled for the yes process, you simply wouldn't see the error; nothing new happens just because you do see it. It only means that ./MyScript.sh has exited (successfully or unsuccessfully, it doesn't matter).
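
    To see that the message is cosmetic, here is a minimal sketch (head -n 1 stands in for ./MyScript.sh): the pipeline's exit status comes from its last command, not from yes, so a plain Hudson shell step still succeeds unless pipefail is enabled:

    bash -c 'trap "" PIPE; yes | head -n 1 >/dev/null; echo "pipeline status: $?"'
    # stderr: yes: standard output: Broken pipe
    # stdout: pipeline status: 0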

  • 2021-02-18 19:17

    Since yes and ./MyScript.sh can each be run in an explicit subshell, it is possible to background the yes command, send its PID (yespid) to the ./MyScript.sh subshell, and then set a trap on EXIT there to terminate the yes command manually. (The trap on EXIT should always be set in the subshell of the last command of a piped command sequence.)

    # avoid hangup or "broken pipe" error message when parent process set SIGPIPE to be ignored
    # sleep 0 or cat /dev/null: do nothing but with external command (for a shell builtin command see: help :)
    (
    trap "" PIPE
    ( (sleep 0; exec yes) & echo ${!}; wait ${!} ) | 
       ( 
         trap 'trap - EXIT; kill "$yespid"; exit 0' EXIT
         yespid="$(head -n 1)"
         head -n 10  # replacement for ./MyScript.sh
       )
    echo ${PIPESTATUS[*]}
    )
    

    If you want to exit the yes subshell with exit code 0 you can do this as well:

    # avoid hangup or "broken pipe" error message when parent process set SIGPIPE to be ignored
    # set exit code of yes subshell to 0
    (
    trap "" PIPE
       (
          trap 'trap - TERM; echo "kill from yes subshell ..." 1>&2; kill "${!}"; exit 0' TERM 
          subshell_pid="$(bash -c 'echo "$PPID"')"
          (sleep 0; exec yes) & echo "${subshell_pid}"; wait ${!} 
       ) | 
       ( 
          trap 'trap - EXIT; kill -s TERM "$subshell_pid"; exit' EXIT
          subshell_pid="$(head -n 1)"
          head -n 10  # replacement for ./MyScript.sh
       )
    echo ${PIPESTATUS[*]}
    )
    
  • 2021-02-18 19:24

    Since the yes command runs in an infinite loop, I supposed that this might be the solution:

    yes | head -1 | ./MyScript.sh  # only one "y" from yes is passed to the script
    

    But I got the same error.

    We can redirect the error to /dev/null as suggested by @J.F. Sebastian, or force the overall command to report success like this:

    yes | head -1 | ./MyScript.sh || yes
    

    But these suggestions were less appreciated, so I had to create my own named pipe, as follows:

    mkfifo /tmp/my_fifo           # create the named pipe
    exec 3<>/tmp/my_fifo          # open it read/write on file descriptor 3 so the next write does not block
    echo "Y" >/tmp/my_fifo        # write the automatic answer into the named pipe (note: yes itself prints "y" by default)
    ./MyScript.sh </tmp/my_fifo   # the script reads its answer from the named pipe
    rm /tmp/my_fifo               # remove the named pipe
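
    The exec 3<>/tmp/my_fifo step is what keeps the echo from blocking: a plain write-open on a FIFO blocks until some reader opens it, but holding the FIFO open read/write on fd 3 lets the write return immediately and keeps the answer buffered until the script reads it. A minimal sketch of that behaviour (hypothetical /tmp/demo_fifo, with head -n 1 standing in for ./MyScript.sh):

    mkfifo /tmp/demo_fifo
    exec 3<>/tmp/demo_fifo       # hold the FIFO open read/write
    echo "y" >/tmp/demo_fifo     # returns at once; would block without fd 3 held open
    head -n 1 </tmp/demo_fifo    # reads the buffered "y"
    exec 3>&-                    # close the helper descriptor
    rm /tmp/demo_fifo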
    

    I'm expecting more valuable solutions with better explanations.

    Here is an explanation of file descriptors in Linux.

    Thanks

  • 2021-02-18 19:26

    I had this error, and my problem with it is not that it outputs yes: standard output: Broken pipe but rather that it returns an error code.

    Because I run my script in bash strict mode, including -o pipefail, the "error" from yes causes my whole script to fail.
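
    A minimal sketch of that failure mode (head -n 1 stands in for my-script.sh; the outer trap simulates a CI parent that ignores SIGPIPE):

    bash -c '
      trap "" PIPE
      set -o pipefail
      yes 2>/dev/null | head -n 1 >/dev/null
      echo "with pipefail, status: $?"    # prints a non-zero status, coming from yes
    '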

    How to avoid an error

    The way I avoided this is like so:

    bash -c "yes || true" | my-script.sh
    
  • 2021-02-18 19:37

    Are you trying to use the yes program to pipe to the script, or to echo "yes" to the script? If the process runs through Jenkins, add "; true" to the end of your shell command.
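
    For example, a hypothetical "Execute shell" build step using that suggestion might look like this; note that the trailing true also hides any real failure of the script itself, so the step always reports success:

    yes | ./MyScript.sh ; true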
