I am working on a pipeline that has a few branch points that subsequently merge -- they look something like this:

           command2
          /        \
command1            command4
          \        /
           command3
Please see also https://unix.stackexchange.com/questions/28503/how-can-i-send-stdout-to-multiple-commands. Among all the answers there, I found this one fits my need particularly well.
Expanding a little on @Soren's answer:
$ ((date | tee >( wc >&3) | wc) 3>&1) | cat -n
1 1 6 29
2 1 6 29
You can do the same without tee, using a shell variable instead:
$ (z=$(date); (echo "$z"| wc ); (echo "$z"| wc) ) | cat -n
1 1 6 29
2 1 6 29
In my case, I applied this technique and wrote a much more complex script that runs under busybox.
You can play around with file descriptors like this:
((date | tee >( wc >&3) | wc) 3>&1) | wc
or
((command1 | tee >( command2 >&3) | command3) 3>&1) | command4
To explain: tee >( wc >&3) outputs the original data on stdout, while the inner wc writes its result to file descriptor 3. The outer 3>&1 then merges FD 3 back into stdout, so the output from both wc invocations is sent to the trailing command.
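As a concrete sketch of that FD-3 trick (bash/zsh only, since >( ) is a process substitution; printf and the wc flags here are just stand-ins chosen to make the numbers easy to check):

```shell
#!/usr/bin/env bash
# wc -w counts words on tee's stdout; wc -l counts lines and writes
# to FD 3 inside the process substitution. The outer 3>&1 merges
# FD 3 back into stdout, so cat -n sees both results.
( ( printf 'a b\nc d\n' | tee >(wc -l >&3) | wc -w ) 3>&1 ) | cat -n
```

Note that the relative order of the two counts in the output is not guaranteed -- which is exactly the mangling caveat discussed below.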
HOWEVER, there is nothing in this pipeline (or the one in your own solution) which will guarantee that the output is not mangled -- that is, that incomplete lines from command2 will not be mixed up with lines from command3. If that is a concern, you will need to do one of two things:

- write your own tee-like program which internally uses popen and reads each line back, sending only complete lines to stdout for command4 to read, or
- use cat to merge the data as input to command4.

I believe your solution is good, and it uses tee as documented. If you read the manpage of tee, it says:
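A minimal sketch of the second option (the file name and the wc commands are illustrative stand-ins): save the data once, run each branch to completion in sequence, and concatenate their finished outputs, so the lines reaching the final command can never interleave mid-line:

```shell
#!/usr/bin/env bash
# Run each branch sequentially over a saved copy of the data, so the
# final command receives complete, unmingled lines in a fixed order.
tmp=$(mktemp)
printf 'a b\nc d\n' > "$tmp"                  # stand-in for command1's output
{ wc -l < "$tmp"; wc -w < "$tmp"; } | cat -n  # branches in order, then "command4"
rm -f "$tmp"
```

The price is that the branches no longer run concurrently, and the data must fit on disk (or in a variable, as in the earlier example).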
Copy standard input to each FILE, and also to standard output
Your files are process substitutions.
And the standard output is what you need to get rid of, because you don't want it -- and that's what you did by redirecting it to /dev/null.
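In other words, the pattern described here can be sketched as follows (bash/zsh only; the wc invocations stand in for the real commands):

```shell
#!/usr/bin/env bash
# tee treats each >(...) process substitution as a FILE, so the input
# is copied to both wc invocations; tee's own stdout -- a third copy
# of the data -- is discarded by redirecting it to /dev/null.
printf 'x\ny\nz\n' | tee >(wc -l) >(wc -c) > /dev/null
```

As with the FD-3 variant, the order in which the two counts appear is not guaranteed.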