Inform right-hand side of pipeline of left-side failure?


Question


I've grown fond of using a generator-like pattern between functions in my shell scripts. Something like this:

parse_commands /da/cmd/file | process_commands

However, the basic problem with this pattern is that if parse_commands encounters an error, the only way I have found to notify process_commands of the failure is by explicitly telling it (e.g. echo "FILE_NOT_FOUND"). This means that every potentially faulting operation in parse_commands would have to be fenced.

Is there no way process_commands can detect that the left side exited with a non-zero exit code?
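
To illustrate, the fencing ends up looking something like this (a sketch, with hypothetical stand-in bodies for the real work):

parse_commands() {
    [ -r "$1" ] || { echo "FILE_NOT_FOUND"; return 1; }
    cat "$1"                      # stand-in for the real parsing
}

process_commands() {
    while IFS= read -r line; do
        if [ "$line" = "FILE_NOT_FOUND" ]; then
            echo "upstream failed" >&2
            return 1
        fi
        echo "processing: $line"  # stand-in for the real processing
    done
}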


Answer 1:


Does the pipe process continue even if the first process has ended, or is the issue that you have no way of knowing that the first process failed?

If it's the latter, you can look at the PIPESTATUS variable (which is actually a Bash array). That will give you the exit code of the first command:

parse_commands /da/cmd/file | process_commands
temp=("${PIPESTATUS[@]}")
if [ ${temp[0]} -ne 0 ]
then
    echo 'parse_commands failed'
elif [ ${temp[1]} -ne 0 ]
then
    echo 'parse_commands worked, but process_commands failed'
fi

Otherwise, you'll have to use co-processes.




Answer 2:


Use set -o pipefail at the top of your bash script. With pipefail, the pipeline's exit status is the last non-zero exit status of any command in it (zero only if every command succeeds), so a failure on the left side is no longer masked by a successful right side. Note that this changes what $? reports, not the execution model: both sides of the pipe still run concurrently.
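
A minimal sketch of how that looks in practice, reusing the names from the question:

#!/bin/bash
set -o pipefail

# With pipefail, the pipeline's $? is non-zero if parse_commands fails,
# even when process_commands itself exits 0. Both still run concurrently.
if ! parse_commands /da/cmd/file | process_commands; then
    echo "pipeline failed" >&2
    exit 1
fi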




Answer 3:


Unlike the AND operator (&&), the pipe operator (|) spawns both processes simultaneously, so the first process can stream its output to the second without buffering the intermediate data. This allows large amounts of data to be processed with little memory or disk usage.

Therefore, the exit status of the first process isn't available to the second one until the first has finished.
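
This is easy to demonstrate with contrived commands:

# The right side starts immediately; roughly two seconds later the left
# side finishes and fails, far too late for the right side to have
# consulted its exit status.
( sleep 2; echo "left done"; exit 1 ) | ( echo "right started"; cat )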




Answer 4:


You could try a workaround using a FIFO:

mkfifo /tmp/a
cat /tmp/a | process_commands &
processor=$!

parse_commands /da/cmd/file > /tmp/a || { echo "error" >&2; kill "$processor"; }



Answer 5:


I don't have enough reputation to comment, but the accepted answer was missing a closing } on line 5.

After fixing this, the code throws a -ne: unary operator expected error, which points to the real problem: PIPESTATUS is overwritten by the test in the if condition, so the return value of process_commands is never checked!

This is because [ ${PIPESTATUS[0]} -ne 0 ] is equivalent to test ${PIPESTATUS[0]} -ne 0, which changes $PIPESTATUS just like any other command. For example:

return0() { return 0; }
return3() { return 3; }

return0 | return3
echo "PIPESTATUS: ${PIPESTATUS[@]}"

This returns PIPESTATUS: 0 3 as expected. But what if we introduce conditionals?

return0 | return3
if [ ${PIPESTATUS[0]} -ne 0 ]; then
    echo "1st command error: ${PIPESTATUS[0]}"
elif [ ${PIPESTATUS[1]} -ne 0 ]; then
    echo "2nd command error: ${PIPESTATUS[1]}"
else
    echo "PIPESTATUS: ${PIPESTATUS[@]}"
    echo "Both return codes = 0."
fi

We get the [: -ne: unary operator expected error, and this:

PIPESTATUS: 2
Both return codes = 0.

To fix this, $PIPESTATUS should be stored in a different array variable, like so:

return0 | return3
TEMP=("${PIPESTATUS[@]}")
echo "TEMP: ${TEMP[@]}"
if [ ${TEMP[0]} -ne 0 ]; then
    echo "1st command error: ${TEMP[0]}"
elif [ ${TEMP[1]} -ne 0 ]; then
    echo "2nd command error: ${TEMP[1]}"
else
    echo "TEMP: ${TEMP[@]}"
    echo "All return codes = 0."
fi

Which prints:

TEMP: 0 3
2nd command error: 3

as intended.

Edit: I fixed the accepted answer, but I'm leaving this explanation for posterity.




Answer 6:


If you have command1 && command2, then command2 will only be executed when the first command succeeds - otherwise boolean short-circuiting kicks in. One way of using this is to have the first command (your parse_commands...) dump to a temporary file, and then have the second command read from that file.

Edit: By judicious use of ; you can tidy up the temporary file, e.g.

(command1 && command2) ; rm temporaryfile
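
Putting the whole pattern together, a sketch that uses mktemp rather than a fixed file name (names taken from the question):

tmpfile=$(mktemp) || exit 1

# Short-circuiting: process_commands runs only if parse_commands succeeded.
# The temporary file is removed either way.
( parse_commands /da/cmd/file > "$tmpfile" && process_commands < "$tmpfile" )
status=$?
rm -f "$tmpfile"
exit $status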



Answer 7:


There is a way to do this in bash 4.0, which adds a coproc builtin. The coprocess facility is borrowed from ksh, which uses a different syntax. The only shell I have access to on my system that supports coprocesses is ksh, so here is a solution written in ksh:

parse_commands  /da/cmd/file |&
parser=$!

process_commands <&p &
processor=$!

if wait $parser
then
    wait $processor
    exit $?
else
    kill $processor
    exit 1
fi

The idea is to start parse_commands in the background with pipes connecting it to the main shell. The pid is saved in parser. Then process_commands is started with the output of parse_commands as its input. (That is what <&p does.) This is also put in the background with its pid saved in processor.

With both of those in the background connected by a pipe, our main shell is free to wait for the parser to terminate. If it terminates without an error, we wait for the processor to finish and exit with its return code. If it terminates with an error, we kill the processor and exit with non-zero status.

It should be fairly straightforward to translate this to use the bash 4.0 coproc builtin, but I don't have good documentation for it, nor a way to test it.
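
For what it's worth, here is an untested sketch of that translation (bash 4.0+). One caveat assumed here: bash does not make coproc file descriptors available in subshells, so the read end is duplicated onto a plain fd before backgrounding:

#!/bin/bash
coproc PARSER { parse_commands /da/cmd/file; }
parser=$PARSER_PID

# Duplicate the coprocess's read end onto fd 3 so the background job
# inherits it, then feed it to process_commands.
exec 3<&"${PARSER[0]}"
process_commands <&3 &
processor=$!
exec 3<&-   # the parent shell no longer needs its copy

if wait $parser
then
    wait $processor
    exit $?
else
    kill $processor
    exit 1
fi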




Answer 8:


You may run parse_commands /da/cmd/file in an explicit subshell and echo that subshell's exit status through the pipe to process_commands, which also runs in an explicit subshell and reads the piped data from /dev/stdin.

Far from elegant, but it seems to get the job done :)

A simple example:

(
  # Run the producer and append its exit status as the last line of its output.
  ( ls -l ~/.bashrcxyz; echo $? ) |
  (
    # Slurp everything from the pipe, then inspect the last line: if the
    # producer exited 0, print all but the status line; otherwise exit 77.
    piped="$(</dev/stdin)"
    [[ "$(tail -n 1 <<<"$piped")" -eq 0 ]] && printf '%s\n' "$piped" | sed '$d' || exit 77
  )
  echo $?
)



Answer 9:


What about:

parse_commands /da/cmd/file > >(process_commands)
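
One note on this approach: with process substitution, $? in the parent shell is the exit status of parse_commands itself, so the left side's failure becomes directly visible to the caller, although process_commands is still spawned either way. A hypothetical usage sketch:

# React to the left side's failure directly.
if ! parse_commands /da/cmd/file > >(process_commands); then
    echo "parse_commands failed" >&2
    exit 1
fi
wait $!   # bash 4.4+: $! holds the process substitution's pid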


Source: https://stackoverflow.com/questions/6565694/inform-right-hand-side-of-pipeline-of-left-side-failure
