Redirect stderr and stdout in Bash

日久生厌 2020-11-22 08:18

I want to redirect both stdout and stderr of a process to a single file. How do I do that in Bash?

15 Answers
  • 2020-11-22 08:47
    bash your_script.sh 1>file.log 2>&1
    

    1>file.log instructs the shell to send STDOUT to the file file.log, and 2>&1 tells it to redirect STDERR (file descriptor 2) to STDOUT (file descriptor 1).

    Note: the order matters, as liw.fi pointed out. 2>&1 1>file.log does not work, because stderr is duplicated onto the original stdout (the terminal) before stdout is pointed at the file.
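
    A minimal sketch of the pitfall (check.sh is a hypothetical script that writes one line to stdout and one to stderr):

    # correct order: stdout goes to the file first, then stderr follows it
    bash check.sh 1>file.log 2>&1      # file.log receives both lines

    # wrong order: 2>&1 copies the *original* stdout (the terminal),
    # so stderr still shows up on screen and only stdout lands in the file
    bash check.sh 2>&1 1>file.log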

  • 2020-11-22 08:47

    The following functions can be used to automate the process of toggling output between stdout/stderr and a logfile.

    #!/bin/bash

    #set -x

    # global vars
    OUTPUTS_REDIRECTED="false"
    LOGFILE=/dev/stdout

    # "private" function used by redirect_outputs_to_logfile()
    function save_standard_outputs {
        if [ "$OUTPUTS_REDIRECTED" == "true" ]; then
            echo "[ERROR]: ${FUNCNAME[0]}: Cannot save standard outputs because they have been redirected before"
            exit 1
        fi
        # duplicate stdout to FD 3 and stderr to FD 4 so they can be restored later
        exec 3>&1
        exec 4>&2

        trap restore_standard_outputs EXIT
    }

    # Params: $1 => logfile to write to
    function redirect_outputs_to_logfile {
        if [ "$OUTPUTS_REDIRECTED" == "true" ]; then
            echo "[ERROR]: ${FUNCNAME[0]}: Cannot redirect standard outputs because they have been redirected before"
            exit 1
        fi
        LOGFILE=$1
        if [ -z "$LOGFILE" ]; then
            echo "[ERROR]: ${FUNCNAME[0]}: logfile empty [$LOGFILE]"
            exit 1
        fi
        if [ ! -f "$LOGFILE" ]; then
            touch "$LOGFILE"
        fi
        if [ ! -f "$LOGFILE" ]; then
            echo "[ERROR]: ${FUNCNAME[0]}: creating logfile [$LOGFILE] failed"
            exit 1
        fi

        save_standard_outputs

        # append both stdout and stderr to the logfile
        exec 1>>"$LOGFILE"
        exec 2>&1
        OUTPUTS_REDIRECTED="true"
    }

    # "private" function registered as EXIT trap by save_standard_outputs()
    function restore_standard_outputs {
        if [ "$OUTPUTS_REDIRECTED" == "false" ]; then
            echo "[ERROR]: ${FUNCNAME[0]}: Cannot restore standard outputs because they have NOT been redirected"
            exit 1
        fi
        exec 1>&-   # close FD 1 (logfile)
        exec 2>&-   # close FD 2 (logfile)
        exec 2>&4   # restore stderr
        exec 1>&3   # restore stdout

        OUTPUTS_REDIRECTED="false"
    }
    

    Example of usage inside script:

    echo "this goes to stdout"
    redirect_outputs_to_logfile /tmp/one.log
    echo "this goes to logfile"
    restore_standard_outputs 
    echo "this goes to stdout"
    
  • 2020-11-22 08:48

    You can redirect stdout into a file and then redirect stderr to stdout:

    some_command >file.log 2>&1 
    

    See http://tldp.org/LDP/abs/html/io-redirection.html

    This format is preferred over the more popular &> format, which only works in Bash. In a Bourne shell it could be interpreted as running the command in the background. The format is also more readable: 2 (STDERR) is redirected to 1 (STDOUT).
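
    A quick way to see the difference (ls /nonexistent is just a stand-in for any command that writes to stderr):

    # portable form: works in POSIX sh, dash, bash, ...
    ls /nonexistent >all.log 2>&1

    # bash-only shorthand; a plain Bourne/POSIX shell parses this as
    # "ls /nonexistent &" (background job) followed by "> all.log"
    ls /nonexistent &> all.log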

    EDIT: changed the order as pointed out in the comments

  • 2020-11-22 08:49

    For situations where "piping" is necessary, you can use:

    |&

    For example:

    echo -ne "15\n100\n" | sort -c |& tee sort_result.txt
    

    or

    TIMEFORMAT=%R; for i in $(seq 1 20); do time kubectl get pods | grep node >> js.log; done |& sort -h
    

    These Bash-based examples let STDERR flow through the pipe along with STDOUT: the disorder message that sort -c writes to STDERR in the first case, and the timings that time writes to STDERR, sorted by sort -h, in the second. The equivalence is shown below.
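
    As a side note, |& is simply Bash (4.0+) shorthand for 2>&1 |, so the two lines below behave the same (some_command is a placeholder):

    some_command |& tee output.log
    some_command 2>&1 | tee output.log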

  • 2020-11-22 08:54
    # Close STDOUT file descriptor
    exec 1<&-
    # Close STDERR FD
    exec 2<&-
    
    # Open STDOUT as $LOG_FILE file for read and write.
    exec 1<>"$LOG_FILE"
    
    # Redirect STDERR to STDOUT
    exec 2>&1
    
    echo "This line will appear in $LOG_FILE, not 'on screen'"
    

    Now, a simple echo will write to $LOG_FILE. This is useful for daemonizing.

    To the author of the original post:

    It depends on what you need to achieve. If you just need to redirect in/out of a command you call from your script, the answers are already given. Mine is about redirecting within the current script, which affects all commands/built-ins (including forks) after the mentioned code snippet.
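
    The contrast in one sketch (do_stuff and script.log are placeholder names):

    do_stuff > script.log 2>&1      # affects only this one command

    exec > script.log 2>&1          # affects every command from this point on
    do_stuff
    echo "this line also ends up in script.log"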


    Another cool solution is redirecting to both std-err/out AND to a logger or log file at once, which involves splitting "a stream" into two. This functionality is provided by the 'tee' command, which can write/append to several file descriptors (files, sockets, pipes, etc.) at once: tee FILE1 FILE2 ... >(cmd1) >(cmd2) ...

    exec 3>&1 4>&2 1> >(tee >(logger -i -t 'my_script_tag') >&3) 2> >(tee >(logger -i -t 'my_script_tag') >&4)
    trap 'cleanup' INT QUIT TERM EXIT
    
    
    get_pids_of_ppid() {
        local ppid="$1"
    
        RETVAL=''
        local pids=$(ps x -o pid,ppid | awk -v ppid="$ppid" '$2 == ppid { print $1 }')
        RETVAL="$pids"
    }
    
    
    # Needed to kill processes running in background
    cleanup() {
        local current_pid element
        local pids=( "$$" )
    
        running_pids=("${pids[@]}")
    
        while :; do
            current_pid="${running_pids[0]}"
            [ -z "$current_pid" ] && break
    
            running_pids=("${running_pids[@]:1}")
            get_pids_of_ppid $current_pid
            local new_pids="$RETVAL"
            [ -z "$new_pids" ] && continue
    
            for element in $new_pids; do
                running_pids+=("$element")
                pids=("$element" "${pids[@]}")
            done
        done
    
        kill ${pids[@]} 2>/dev/null
    }
    

    So, from the beginning. Let's assume we have a terminal connected to /dev/stdout (FD #1) and /dev/stderr (FD #2). In practice, it could be a pipe, a socket or whatever.

    • Create FDs #3 and #4 and point them to the same "location" as #1 and #2 respectively. Changing FD #1 doesn't affect FD #3 from now on. Now, FDs #3 and #4 point to STDOUT and STDERR respectively. These will be used as the real terminal STDOUT and STDERR.
    • 1> >(...) redirects STDOUT to the command in parens
    • the parens (sub-shell) run 'tee', which reads exec's STDOUT (a pipe) and forwards it to the 'logger' command via another pipe to the sub-shell in parens; at the same time it copies the same input to FD #3 (the terminal)
    • the second part, very similar, does the same trick for STDERR and FDs #2 and #4 (a stripped-down sketch of the whole pattern follows this list)
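
    A stripped-down variant of the same pattern, writing to a plain file instead of logger (my_script.log is a placeholder name):

    exec 3>&1 4>&2                          # keep copies of the terminal's STDOUT/STDERR
    exec 1> >(tee -a my_script.log >&3)     # STDOUT -> logfile AND terminal (via FD #3)
    exec 2> >(tee -a my_script.log >&4)     # STDERR -> logfile AND terminal (via FD #4)

    echo "shows up on the terminal and in my_script.log"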

    The result of running a script having the above line and additionally this one:

    echo "Will end up in STDOUT(terminal) and /var/log/messages"
    

    ...is as follows:

    $ ./my_script
    Will end up in STDOUT(terminal) and /var/log/messages
    
    $ tail -n1 /var/log/messages
    Sep 23 15:54:03 wks056 my_script_tag[11644]: Will end up in STDOUT(terminal) and /var/log/messages
    

    If you want to see clearer picture, add these 2 lines to the script:

    ls -l /proc/self/fd/
    ps xf
    
  • 2020-11-22 08:57
    LOG_FACILITY="local7.notice"
    LOG_TOPIC="my-prog-name"
    LOG_TOPIC_OUT="$LOG_TOPIC-out[$$]"
    LOG_TOPIC_ERR="$LOG_TOPIC-err[$$]"
    
    exec 3>&1 > >(tee -a /dev/fd/3 | logger -p "$LOG_FACILITY" -t "$LOG_TOPIC_OUT" )
    exec 2> >(logger -p "$LOG_FACILITY" -t "$LOG_TOPIC_ERR" )
    

    Related: Writing stdout & stderr to syslog.

    It almost works, but not from xinetd ;(
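
    A quick sanity check after those two exec lines (the tags come from the variables above; where syslog stores local7.notice messages depends on your syslog configuration):

    echo "hello stdout"        # shows on the terminal via tee and reaches syslog tagged my-prog-name-out[PID]
    echo "hello stderr" >&2    # goes to syslog only, tagged my-prog-name-err[PID]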
