Saving stdout from subprocess.Popen to file, plus writing more stuff to the file

Asked by 谎友^ on 2020-12-28 14:58

I'm writing a Python script that uses subprocess.Popen to execute two programs (from compiled C code), each of which produces stdout. The script captures that output and saves it to a file, and I also want to write my own messages to the same file before, between, and after the programs' output.

4 Answers
  • 2020-12-28 15:24

    I say just keep it real simple. Basic logic, in pseudocode (a Python sketch follows the list):

    write your start messages to logA
    execute A with output to logA
    write your in-between messages to logB
    execute B with output to logB
    write your final messages to logB
    when A & B finish, write content of logB to the end of logA
    delete logB
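
    A minimal Python sketch of that logic, assuming two placeholder executables ./prog_a and ./prog_b (substitute your own compiled programs):

    import os
    import shutil
    import subprocess
    
    PROG_A = ["./prog_a"]  # placeholder for your first compiled program
    PROG_B = ["./prog_b"]  # placeholder for your second compiled program
    
    with open("logA.txt", "w") as log_a, open("logB.txt", "w+") as log_b:
        log_a.write("start messages\n")
        log_a.flush()  # our text must reach the file before A's output
        pa = subprocess.Popen(PROG_A, stdout=log_a)
    
        log_b.write("in-between messages\n")
        log_b.flush()
        pb = subprocess.Popen(PROG_B, stdout=log_b)
    
        pa.wait()
        pb.wait()
    
        log_b.write("final messages\n")
        log_b.flush()
    
        # append logB's content to the end of logA, then logB can be deleted
        log_b.seek(0)
        shutil.copyfileobj(log_b, log_a)
    
    os.remove("logB.txt")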
    
  • 2020-12-28 15:29

    As I understand it, program A waits for B to do its thing, and A exits only after B exits.

    If B can start without A running then you could start the processes in the reverse order:

    from os.path import join as pjoin
    from subprocess import Popen
    
    def run_async(cmd, logfile):
        print("calling", cmd, file=logfile)
        p = Popen(cmd, stdout=logfile)
        print("started", cmd, file=logfile)
        return p
    
    def runTest(path, flags, name):
        log = open(name, "w", 1)  # line-buffered
        print('calling both processes', file=log)
        pb = run_async([pjoin(path, "executable_b_name")] + flags.split(), log)
        pa = run_async([pjoin(path, "executable_a_name")] + flags.split(), log)
        print('started both processes', file=log)
        pb.wait()
        print('process B ended', file=log)
        pa.wait()
        print('process A ended', file=log)
        log.close()
    

    Note: calling log.flush() in the main process has no effect on the stdio buffers inside the child processes.

    If the child processes use block buffering for stdout, you could try to force them to flush sooner using pexpect, pty, or stdbuf (this assumes the processes would line-buffer their output if run interactively, or that they use the C stdio library for I/O).
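
    For example, a minimal sketch using stdbuf from GNU coreutils to ask a C-stdio child to line-buffer its stdout even when it is redirected to a file (the executable name and flag are placeholders):

    from subprocess import Popen
    
    with open("test.log", "w", 1) as log:  # line-buffered log file
        # stdbuf -oL only helps programs that use C stdio and don't set their own buffering
        p = Popen(["stdbuf", "-oL", "./executable_a_name", "--flag"], stdout=log)
        p.wait()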

  • 2020-12-28 15:31

    You could call .wait() on each Popen object to be sure it has finished, and then call log.flush(). Maybe something like this:

    import subprocess
    
    def run(cmd, logfile):
        p = subprocess.Popen(cmd, shell=True, universal_newlines=True, stdout=logfile)
        ret_code = p.wait()  # block until the command has finished
        logfile.flush()      # push the parent's buffered log messages out to disk
        return ret_code
    

    If you need to interact with the Popen object in your outer function you could move the .wait() call to there instead.
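
    A sketch of that variant, with placeholder command strings, where the caller keeps the Popen objects and decides when to wait:

    import subprocess
    
    def start(cmd, logfile):
        logfile.flush()  # make sure our own messages precede the child's output
        return subprocess.Popen(cmd, shell=True, universal_newlines=True, stdout=logfile)
    
    with open("test.log", "w") as log:
        log.write("calling executable A\n")
        pa = start("./executable_a_name --flag", log)  # placeholder command
        pa.wait()  # the caller decides when to wait
        log.write("calling executable B\n")
        pb = start("./executable_b_name --flag", log)  # placeholder command
        pb.wait()
        log.write("done\n")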

  • 2020-12-28 15:40

    You need to wait until the process is finished before you continue. I've also converted the code to use a context manager, which is cleaner.

    import os
    import subprocess
    
    def run(cmd, logfile):
        logfile.flush()  # write our pending messages before the command's output
        p = subprocess.Popen(cmd, shell=True, universal_newlines=True, stdout=logfile)
        p.wait()
        return p
    
    def runTest(path, flags, name):
        with open(name, "w") as log:
            print("Calling executable A", file=log)
            run(os.path.join(path, "executable_a_name") + " " + flags, log)
            print("Calling executable B", file=log)
            run(os.path.join(path, "executable_b_name") + " " + flags, log)
            print("More stuff", file=log)
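
    For completeness, a hypothetical invocation (the directory, flag, and log file name are placeholders):

    runTest("/path/to/bin", "--verbose", "test.log")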
    