I need to write the output of a command to a file. Let's say my command is `zip -r zip.zip directory`; I need to append or overwrite (either of these options would work) its output to a file.
Redirections are immediate -- when you run `somecommand | tee -a out.txt`, `somecommand` is set up with its stdout sent directly to the `tee` command, which is defined by its documentation to be unbuffered, and thus to write anything available on its input to its specified output sinks as quickly as possible. Similarly, `somecommand >out.txt` sets up `somecommand` to write to `out.txt` literally before it's even started.
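A quick way to see that the redirection happens before the command runs is to redirect from a command that doesn't exist (the command name below is a deliberate placeholder): the shell still creates the output file.

```shell
# The shell opens (and creates/truncates) the target file before it
# tries to exec the command, so the file exists even though the
# command itself could never be run.
rm -f redir.txt
this-command-does-not-exist > redir.txt 2>/dev/null || true
test -e redir.txt && echo "redir.txt exists"
ls -l redir.txt   # zero bytes: created by the shell, never written to
```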
What's not immediate is flushing of buffered output.
That is to say: The standard C library, and most other tools/languages, buffer output on stdout, combining small writes into big ones. This is generally desirable, inasmuch as it decreases the number of calls to and from kernel space ("context switches") in favor of doing a smaller number of more efficient, larger writes.
So your program isn't really waiting until it exits to write its output -- but it is waiting until its buffer (of maybe 32KB, or 64KB, or whatever) is full. If it never generates that much output, then the buffer only gets flushed when the output stream is closed, typically at exit.
If you're on a GNU platform, and your program is leaving its file descriptors the way it found them rather than trying to configure buffering explicitly, you can use the `stdbuf` command to configure buffering like so:

stdbuf -oL somecommand | tee -a out.txt

...which defines stdout (`-o`) to be line-buffered (`L`) when running `somecommand`.
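For a concrete, runnable sketch (assuming GNU coreutils' `stdbuf` and `seq` are available):

```shell
# Line-buffer seq's stdout (-oL) so each number is flushed into the
# pipe as soon as it is printed, rather than in one block when seq
# exits; tee -a appends everything it receives to out.txt.
rm -f out.txt
stdbuf -oL seq 1 3 | tee -a out.txt
```

With `seq`'s tiny output you won't notice a difference, but for a long-running command each line shows up in `out.txt` as it's produced instead of all at once at exit.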
Alternately, if you have `expect` installed, you can use the `unbuffer` helper it includes:

unbuffer somecommand | tee -a out.txt

...which will actually simulate a TTY (as `expect` does), getting the same non-buffered behavior you have when `somecommand` is connected directly to a console.
Did you try `command > out.log 2>&1`? This logs everything to the file without displaying anything on the terminal; both stdout and stderr go straight to the file.
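As a sketch of what that redirection does (the two `echo` calls below stand in for a real command's stdout and stderr output):

```shell
# > out.log sends stdout to the file; 2>&1 then points stderr at the
# same destination, so nothing from either stream reaches the terminal.
{ echo "to stdout"; echo "to stderr" >&2; } > out.log 2>&1
cat out.log
```

Order matters here: `command 2>&1 > out.log` would instead send stderr to the terminal, because stderr is duplicated from stdout *before* stdout is redirected to the file.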