live output from subprocess command

爱一瞬间的悲伤 2020-11-22 08:16

I'm using a python script as a driver for a hydrodynamics code. When it comes time to run the simulation, I use subprocess.Popen to run the code, collect the
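
A minimal sketch of that pattern (the simulation binary name ./hydro_sim is only a placeholder):

    import subprocess
    
    # Minimal sketch: run a (placeholder) simulation binary and stream its output
    # line by line as it is produced.
    with subprocess.Popen(["./hydro_sim"], stdout=subprocess.PIPE, text=True) as proc:
        for line in proc.stdout:      # yields each line as soon as the child flushes it
            print(line, end="")       # echo to the terminal in real time
    print("exit status:", proc.returncode)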

16 answers
  • 2020-11-22 09:07
    import os
    
    def execute(cmd, callback):
        # Read the command's output line by line until EOF (readline() returns '')
        for line in iter(os.popen(cmd).readline, ''):
            callback(line[:-1])  # strip the trailing newline
    
    execute('ls -a', print)
    
  • 2020-11-22 09:11

    Solution 1: Log stdout AND stderr concurrently in real time

    A simple solution that logs both stdout and stderr concurrently, line by line and in real time, into a single log file.

    import subprocess as sp
    import threading
    from concurrent.futures import ThreadPoolExecutor
    
    
    def log_popen_pipe(p, stdfile, f, lock):
        # Stream one pipe into the shared log file until the process exits
        while p.poll() is None:
            line = stdfile.readline()
            with lock:
                f.write(line)
                f.flush()
    
        # Write the rest from the pipe's buffer
        with lock:
            f.write(stdfile.read())
    
    
    with open("mylog.txt", "w") as f, \
         sp.Popen(["ls"], stdout=sp.PIPE, stderr=sp.PIPE, text=True) as p:
    
        # The lock keeps the two writer threads from interleaving partial lines
        lock = threading.Lock()
        with ThreadPoolExecutor(2) as pool:
            r1 = pool.submit(log_popen_pipe, p, p.stdout, f, lock)
            r2 = pool.submit(log_popen_pipe, p, p.stderr, f, lock)
            r1.result()
            r2.result()

    Solution 2: A function read_popen_pipes() that lets you iterate over both pipes (stdout/stderr) concurrently and in real time

    import subprocess as sp
    from queue import Queue, Empty
    from concurrent.futures import ThreadPoolExecutor
    
    
    def enqueue_output(file, queue):
        for line in iter(file.readline, ''):
            queue.put(line)
        file.close()
    
    
    def read_popen_pipes(p):
    
        with ThreadPoolExecutor(2) as pool:
            q_stdout, q_stderr = Queue(), Queue()
    
            pool.submit(enqueue_output, p.stdout, q_stdout)
            pool.submit(enqueue_output, p.stderr, q_stderr)
    
            while True:
    
                if p.poll() is not None and q_stdout.empty() and q_stderr.empty():
                    break
    
                out_line = err_line = ''

                # Fetch each queue independently, so an empty stdout queue
                # does not prevent stderr lines from being read (and vice versa)
                try:
                    out_line = q_stdout.get_nowait()
                except Empty:
                    pass
                try:
                    err_line = q_stderr.get_nowait()
                except Empty:
                    pass
    
                yield (out_line, err_line)
    
    # The function in use:
    
    with sp.Popen(["ls"], stdout=sp.PIPE, stderr=sp.PIPE, text=True) as p:
    
        for out_line, err_line in read_popen_pipes(p):
            print(out_line, end='')
            print(err_line, end='')
    
        p.poll()  # the exit status is available here once the loop has finished
    
    
  • 2020-11-22 09:13

    None of the Pythonic solutions worked for me. It turned out that proc.stdout.read() or similar may block forever.

    Therefore, I use tee like this:

    subprocess.run('./my_long_running_binary 2>&1 | tee -a my_log_file.txt && exit ${PIPESTATUS}', shell=True, check=True, executable='/bin/bash')
    

    This solution is convenient if you are already using shell=True.

    ${PIPESTATUS} (a Bash-only feature) expands to the exit status of the first command in the pipeline, i.e. the binary itself rather than tee. If I omitted the && exit ${PIPESTATUS}, the whole command would always return zero, since tee itself practically never fails.

    Prepending unbuffer (from the expect package) might be necessary to print each line to the terminal immediately, instead of waiting far too long until the "pipe buffer" fills up. Be aware, however, that unbuffer swallows the exit status of an assert failure (SIGABRT)...

    2>&1 also logs stderr to the file.
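
    For example, a sketch of the unbuffer variant (assuming unbuffer from the expect package is installed; the binary and log file names are placeholders, and the caveat above about unbuffer and the exit status still applies):

    import subprocess
    
    cmd = 'unbuffer ./my_long_running_binary 2>&1 | tee -a my_log_file.txt && exit ${PIPESTATUS}'
    # check=False so we can inspect the exit status ourselves
    result = subprocess.run(cmd, shell=True, check=False, executable='/bin/bash')
    print('exit status:', result.returncode)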

  • 2020-11-22 09:19

    We can also iterate over stdout directly with the default file iterator, instead of using the iter() construct with readline().

    import subprocess
    import sys
    
    # text=True makes iteration over stdout yield str instead of bytes
    process = subprocess.Popen(your_command, stdout=subprocess.PIPE, text=True)
    for line in process.stdout:
        sys.stdout.write(line)
    
  • 2020-11-22 09:19

    All of the solutions above that I tried failed either to separate stderr from stdout output (multiple pipes), or blocked forever when the OS pipe buffer was full, which happens when the command you are running produces output too fast (there is a warning about this in the Python subprocess documentation for poll()). The only reliable way I found was through select, but this is a POSIX-only solution:

    import subprocess
    import sys
    import os
    import select
    import errno
    
    # returns command exit status, stdout text, stderr text
    # rtoutput: show realtime output while running
    def run_script(cmd, rtoutput=0):
        p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        poller = select.poll()
        poller.register(p.stdout, select.POLLIN)
        poller.register(p.stderr, select.POLLIN)
    
        coutput = b''
        cerror = b''
        # Track which pipes the child has closed (hung up)
        fdhup = {p.stdout.fileno(): 0, p.stderr.fileno(): 0}
        while sum(fdhup.values()) < len(fdhup):
            try:
                r = poller.poll(1)
            except OSError as err:
                # select.error is an alias of OSError in Python 3
                if err.errno != errno.EINTR:
                    raise
                r = []
            for fd, flags in r:
                if flags & (select.POLLIN | select.POLLPRI):
                    c = os.read(fd, 1024)
                    if rtoutput:
                        sys.stdout.write(c.decode(errors='replace'))
                        sys.stdout.flush()
                    if fd == p.stderr.fileno():
                        cerror += c
                    else:
                        coutput += c
                else:
                    # POLLHUP/POLLERR: this end of the pipe is closed
                    fdhup[fd] = 1
        # wait() reaps the child and returns its exit status
        return p.wait(), coutput.decode().strip(), cerror.decode().strip()
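
    A quick usage sketch (the command string here is only a placeholder):

    status, out, err = run_script("ls -la", rtoutput=1)
    print("exit status:", status)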
    
  • 2020-11-22 09:20

    It looks like line-buffered output will work for you, in which case something like the following might suit. (Caveat: it's untested.) This will only give the subprocess's stdout in real time. If you want to have both stderr and stdout in real time, you'll have to do something more complex with select.

    import subprocess
    
    # run_command and log_file come from the surrounding script (the question's context)
    proc = subprocess.Popen(run_command, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, shell=True, text=True)
    while proc.poll() is None:
        line = proc.stdout.readline()  # readline() keeps the trailing newline
        print(line, end='')
        log_file.write(line)
    # Might still be data on stdout at this point.  Grab any
    # remainder.
    for line in proc.stdout.read().split('\n'):
        print(line)
        log_file.write(line + '\n')
    # Do whatever you want with proc.stderr here...
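
    Continuing the snippet above, one minimal way to act on that last comment is to drain stderr once the process has exited (a sketch; this reads stderr only after the fact, not in real time):

    err = proc.stderr.read()   # everything the child wrote to stderr
    if err:
        log_file.write(err)    # or print/log it however you prefer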
    