blocks - send input to python subprocess pipeline

轻奢々 2021-01-30 09:22

I'm testing subprocess pipelines with Python. I'm aware that I can do what the programs below do in Python directly, but that's not the point. I just want to test the pipeline.

11 answers
  • 2021-01-30 09:31

    In one of the comments above, I challenged nosklo to either post some code to back up his assertions about select.select or to upvote my responses he had previously down-voted. He responded with the following code:

    from subprocess import Popen, PIPE
    import select
    
    p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
    p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE, close_fds=True)
    
    data_to_write = 100000 * 'hello world\n'
    to_read = [p2.stdout]
    to_write = [p1.stdin]
    b = [] # create buffer
    written = 0
    
    
    while to_read or to_write:
        read_now, write_now, xlist = select.select(to_read, to_write, [])
        if read_now:
            data = p2.stdout.read(1024)
            if not data:
                p2.stdout.close()
                to_read = []
            else:
                b.append(data)
    
        if write_now:
            if written < len(data_to_write):
                part = data_to_write[written:written+1024]
                written += len(part)
                p1.stdin.write(part); p1.stdin.flush()
            else:
                p1.stdin.close()
                to_write = []
    
    print b
    

    One problem with this script is that it second-guesses the size/nature of the system pipe buffers. The script would experience fewer failures if it could remove magic numbers like 1024.
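    A minimal sketch of the difference, using a single cat process: select only promises that the descriptor is readable, but the buffered p2.stdout.read(1024) above may keep blocking until it has collected the full 1024 bytes, whereas os.read on the raw descriptor returns whatever happens to be in the pipe right now.

    import os
    import select
    from subprocess import Popen, PIPE
    
    p = Popen(["cat"], stdin=PIPE, stdout=PIPE)
    p.stdin.write('hi\n')                        # only 3 bytes ever reach the pipe
    p.stdin.flush()
    
    select.select([p.stdout], [], [])            # "readable" just means *some* data is there
    print os.read(p.stdout.fileno(), 1024)       # returns 'hi\n' immediately
    # p.stdout.read(1024) here would keep blocking, trying to collect the full 1024 bytes
    p.stdin.close()
    p.wait()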

    The big problem is that this script only works consistently with the right combination of data input and external programs. grep and cut both work with lines, and so their internal buffers behave a bit differently. If we use a more generic command like "cat", and write smaller bits of data into the pipe, the fatal race condition will pop up more often:

    from subprocess import Popen, PIPE
    import select
    import time
    
    p1 = Popen(["cat"], stdin=PIPE, stdout=PIPE)
    p2 = Popen(["cat"], stdin=p1.stdout, stdout=PIPE, close_fds=True)
    
    data_to_write = 'hello world\n'
    to_read = [p2.stdout]
    to_write = [p1.stdin]
    b = [] # create buffer
    written = 0
    
    
    while to_read or to_write:
        time.sleep(1)
        read_now, write_now, xlist = select.select(to_read, to_write, [])
        if read_now:
            print 'I am reading now!'
            data = p2.stdout.read(1024)
            if not data:
                p2.stdout.close()
                to_read = []
            else:
                b.append(data)
    
        if write_now:
            print 'I am writing now!'
            if written < len(data_to_write):
                part = data_to_write[written:written+1024]
                written += len(part)
                p1.stdin.write(part); p1.stdin.flush()
            else:
                print 'closing file'
                p1.stdin.close()
                to_write = []
    
    print b
    

    In this case, two different results will manifest:

    write, write, close file, read -> success
    write, read -> hang
    

    So again, I challenge nosklo to either post code showing the use of select.select to handle arbitrary input and pipe buffering from a single thread, or to upvote my responses.

    Bottom line: don't try to manipulate both ends of a pipe from a single thread. It's just not worth it. See pipeline for a nice low-level example of how to do this correctly.
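    As a minimal sketch of that rule of thumb (reusing the grep/cut pipeline from above; the feed helper is just an illustrative name): push the writing into a helper thread and keep the reading in the main thread, so neither end can starve the other.

    from subprocess import Popen, PIPE
    import threading
    
    p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
    p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE, close_fds=True)
    
    def feed(pipe, data):
        # illustrative helper: the writer may block on a full pipe buffer,
        # but it no longer stops the reader in the main thread below
        pipe.write(data)
        pipe.close()
    
    t = threading.Thread(target=feed, args=(p1.stdin, 100000 * 'hello world\n'))
    t.start()
    result = p2.stdout.read()    # drain the end of the pipeline from the main thread
    t.join()
    p2.wait()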

  • 2021-01-30 09:33

    Nosklo's offered solution will quickly break if too much data is written into the pipe:

    
    from subprocess import Popen, PIPE
    
    p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
    p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE, close_fds=True)
    p1.stdin.write('Hello World\n' * 20000)
    p1.stdin.close()
    result = p2.stdout.read() 
    assert result == "Hello Worl\n"
    

    If this script doesn't hang on your machine, just increase "20000" to something that exceeds the size of your operating system's pipe buffers.

    This is because the operating system is buffering the input to "grep", but once that buffer is full, the p1.stdin.write call will block until something reads from p2.stdout. In toy scenarios, you can get away with writing to and reading from a pipe in the same process, but in normal usage, it is necessary to write from one thread/process and read from a separate thread/process. This is true for subprocess.Popen, os.pipe, os.popen*, etc.

    Another twist is that sometimes you want to keep feeding the pipe with items generated from earlier output of the same pipe. The solution is to make both the pipe feeder and the pipe reader asynchronous to the main program, and implement two queues: one between the main program and the pipe feeder and one between the main program and the pipe reader. PythonInteract is an example of that.
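    A rough sketch of that two-queue arrangement (the feeder/reader names and queues below are illustrative, not taken from PythonInteract):

    from subprocess import Popen, PIPE
    from Queue import Queue            # named 'queue' on Python 3
    import threading
    
    proc = Popen(["cat"], stdin=PIPE, stdout=PIPE)
    to_feeder = Queue()                # main program -> pipe feeder
    from_reader = Queue()              # pipe reader  -> main program
    
    def feeder():
        # write whatever the main program queues up; None means "close the pipe"
        while True:
            item = to_feeder.get()
            if item is None:
                proc.stdin.close()
                return
            proc.stdin.write(item)
    
    def reader():
        # ship every line of output back to the main program; None signals EOF
        while True:
            line = proc.stdout.readline()
            if not line:
                from_reader.put(None)
                return
            from_reader.put(line)
    
    threading.Thread(target=feeder).start()
    threading.Thread(target=reader).start()
    
    # the main program can now feed the pipe with items derived from its own output
    to_feeder.put('seed\n')
    echoed = from_reader.get()
    to_feeder.put('derived from ' + echoed)
    to_feeder.put(None)
    line = from_reader.get()
    while line is not None:
        print line.rstrip()
        line = from_reader.get()

    The main program never touches the pipe directly; it only talks to the two queues, so it can block on whichever side it needs next without deadlocking the pipeline.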

    Subprocess is a nice convenience module, but because it hides the details of the os.popen and os.fork calls it does under the hood, it can sometimes be more difficult to deal with than the lower-level calls it utilizes. For this reason, subprocess is not a good way to learn about how inter-process pipes really work.

  • 2021-01-30 09:39

    Responding to nosklo's assertion (see other comments to this question) that it can't be done without close_fds=True:

    close_fds=True is only necessary if you've left other file descriptors open. When opening multiple child processes, it's always good to keep track of open files that might get inherited, and to explicitly close any that aren't needed:

    from subprocess import Popen, PIPE
    
    p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
    p1.stdin.write('Hello World\n')
    p1.stdin.close()
    p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
    result = p2.stdout.read() 
    assert result == "Hello Worl\n"
    

    close_fds defaults to False because subprocess prefers to trust the calling program to know what it's doing with open file descriptors, and just provide the caller with an easy option to close them all if that's what it wants to do.

    But the real issue is that pipe buffers will bite you for all but toy examples. As I have said in my other answers to this question, the rule of thumb is to not have your reader and your writer open in the same process/thread. Anyone who wants to use the subprocess module for two-way communication would be well-served to study os.pipe and os.fork, first. They're actually not that hard to use if you have a good example to look at.
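    For reference, a bare-bones sketch of the same grep | cut pipeline built directly on os.pipe, os.fork and os.dup2 (error handling omitted):

    import os
    
    r1, w1 = os.pipe()    # parent -> grep
    r2, w2 = os.pipe()    # grep   -> cut
    r3, w3 = os.pipe()    # cut    -> parent
    
    if os.fork() == 0:                           # child 1: grep -v not
        os.dup2(r1, 0)
        os.dup2(w2, 1)
        for fd in (r1, w1, r2, w2, r3, w3):      # close every inherited pipe fd
            os.close(fd)
        os.execvp("grep", ["grep", "-v", "not"])
    
    if os.fork() == 0:                           # child 2: cut -c 1-10
        os.dup2(r2, 0)
        os.dup2(w3, 1)
        for fd in (r1, w1, r2, w2, r3, w3):
            os.close(fd)
        os.execvp("cut", ["cut", "-c", "1-10"])
    
    # the parent must also close what it does not use,
    # or the children never see EOF on their stdin
    for fd in (r1, w2, r2, w3):
        os.close(fd)
    
    os.write(w1, 'Hello World\n')
    os.close(w1)
    print os.read(r3, 1024)                      # -> 'Hello Worl\n'
    os.waitpid(-1, 0)
    os.waitpid(-1, 0)

    Every file descriptor has to be accounted for by hand, which is exactly the bookkeeping that close_fds=True papers over.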

  • 2021-01-30 09:39

    I think you may be examining the wrong problem. Certainly, as Aaron says, if you try to be both a producer at the beginning of a pipeline and a consumer at the end of the pipeline, it is easy to get into a deadlock situation. This is the problem that communicate() solves.

    communicate() isn't exactly correct for you since stdin and stdout are on different subprocess objects; but if you take a look at the implementation in subprocess.py you'll see that it does exactly what Aaron suggested.

    Once you see that communicate both reads and writes, you'll see that in your second try communicate() competes with p2 for the output of p1:

    p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
    p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
    # ...
    p1.communicate('data\n')       # reads from p1.stdout, as does p2
    

    I am running on win32, which definitely has different i/o and buffering characteristics, but this works for me:

    from subprocess import Popen, PIPE
    import threading
    
    def get_output(proc):
        proc.stdout.read()    # a minimal reader (not shown in the original) that drains the pipeline
    
    p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
    p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
    t = threading.Thread(target=get_output, args=(p2,))
    t.start()
    p1.stdin.write('hello world\n' * 100000)
    p1.stdin.close()
    t.join()
    

    I tuned the input size to produce a deadlock when using a naive unthreaded p2.read()

    You might also try buffering into a file, e.g.:

    import os, tempfile
    from subprocess import Popen, PIPE
    
    fd, _ = tempfile.mkstemp()
    os.write(fd, 'hello world\r\n' * 100000)
    os.lseek(fd, 0, os.SEEK_SET)
    p1 = Popen(["grep", "-v", "not"], stdin=fd, stdout=PIPE)
    p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
    print p2.stdout.read()
    

    That also works for me without deadlocks.

  • 2021-01-30 09:42

    What about using a SpooledTemporaryFile? This bypasses (but perhaps doesn't solve) the issue:

    http://docs.python.org/library/tempfile.html#tempfile.SpooledTemporaryFile

    You can write to it like a file, but it's actually a memory block.
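    A small sketch of that idea; one caveat is that subprocess needs a real file descriptor, so handing the object to Popen calls its fileno(), which forces the spooled data to roll over from memory to an actual temporary file (the size and data below are just for illustration):

    import tempfile
    from subprocess import Popen, PIPE
    
    buf = tempfile.SpooledTemporaryFile(max_size=2 ** 20)   # stays in memory up to ~1 MiB
    buf.write('Hello World\n' * 20000)
    buf.seek(0)
    
    # subprocess needs a real file descriptor, so it calls buf.fileno(),
    # which makes the spooled data roll over to an actual temporary file
    p1 = Popen(["grep", "-v", "not"], stdin=buf, stdout=PIPE)
    p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
    p1.stdout.close()
    print p2.stdout.read()[:11]                              # -> 'Hello Worl\n'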

    Or am I totally misunderstanding...

  • 2021-01-30 09:43

    Working with large files

    Two principles need to be applied uniformly when working with large files in Python.

    1. Since any IO routine can block, we must keep each stage of the pipeline in a different thread or process. We use threads in this example, but subprocesses would let you avoid the GIL.
    2. We must use incremental reads and writes so that we don't wait for EOF before starting to make progress.

    An alternative is to use nonblocking IO, though this is cumbersome in standard Python. See gevent for a lightweight threading library that implements the synchronous IO API using nonblocking primitives.
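    (Roughly what "cumbersome" means here, as a sketch: the descriptor has to be switched to nonblocking mode with fcntl, and every read must then be prepared to handle EAGAIN.)

    import errno
    import fcntl
    import os
    from subprocess import Popen, PIPE
    
    p = Popen(["cat"], stdin=PIPE, stdout=PIPE)
    
    # put the read end of the pipe into nonblocking mode
    fd = p.stdout.fileno()
    flags = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)
    
    p.stdin.write('hello\n')
    try:
        data = os.read(fd, 4096)         # returns immediately, with or without data
    except OSError as e:
        if e.errno != errno.EAGAIN:
            raise
        data = ''                        # nothing has arrived yet; retry later
    p.stdin.close()
    p.wait()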

    Example code

    We'll construct a silly pipeline that is roughly

    {cat /usr/share/dict/words} | grep -v not              \
        | {upcase, filtered tee to stderr} | cut -c 1-10   \
        | {translate 'E' to '3'} | grep K | grep Z | {downcase}
    

    where each stage in braces {} is implemented in Python while the others use standard external programs. TL;DR: See this gist.

    We start with the expected imports.

    #!/usr/bin/env python
    from subprocess import Popen, PIPE
    import sys, threading
    

    Python stages of the pipeline

    All but the last Python-implemented stage of the pipeline needs to go in a thread so that its I/O does not block the others. These could instead run in Python subprocesses if you wanted them to actually run in parallel (avoid the GIL).

    def writer(output):
        for line in open('/usr/share/dict/words'):
            output.write(line)
        output.close()
    def filter(input, output):
        for line in input:
            if 'k' in line and 'z' in line: # Selective 'tee'
                sys.stderr.write('### ' + line)
            output.write(line.upper())
        output.close()
    def leeter(input, output):
        for line in input:
            output.write(line.replace('E', '3'))
        output.close()
    

    Each of these needs to be put in its own thread, which we'll do using this convenience function.

    def spawn(func, **kwargs):
        t = threading.Thread(target=func, kwargs=kwargs)
        t.start()
        return t
    

    Create the pipeline

    Create the external stages using Popen and the Python stages using spawn. The argument bufsize=-1 says to use the system default buffering (usually 4 kiB). This is generally faster than the default (unbuffered) or line buffering, but you'll want line buffering if you want to visually monitor the output without lags.

    grepv   = Popen(['grep','-v','not'], stdin=PIPE, stdout=PIPE, bufsize=-1)
    cut     = Popen(['cut','-c','1-10'], stdin=PIPE, stdout=PIPE, bufsize=-1)
    grepk = Popen(['grep', 'K'], stdin=PIPE, stdout=PIPE, bufsize=-1)
    grepz = Popen(['grep', 'Z'], stdin=grepk.stdout, stdout=PIPE, bufsize=-1)
    
    twriter = spawn(writer, output=grepv.stdin)
    tfilter = spawn(filter, input=grepv.stdout, output=cut.stdin)
    tleeter = spawn(leeter, input=cut.stdout, output=grepk.stdin)
    

    Drive the pipeline

    Assembled as above, all the buffers in the pipeline will fill up, but since nobody is reading from the end (grepz.stdout), they will all block. We could read the entire thing in one call to grepz.stdout.read(), but that would use a lot of memory for large files. Instead, we read incrementally.

    for line in grepz.stdout:
        sys.stdout.write(line.lower())
    

    The threads and processes clean up once they reach EOF. We can explicitly clean up using

    for t in [twriter, tfilter, tleeter]: t.join()
    for p in [grepv, cut, grepk, grepz]: p.wait()
    

    Python-2.6 and earlier

    Internally, subprocess.Popen calls fork, configures the pipe file descriptors, and calls exec. The child process from fork has copies of all file descriptors in the parent process, and both copies will need to be closed before the corresponding reader will get EOF. This can be fixed by manually closing the pipes (either by close_fds=True or a suitable preexec_fn argument to subprocess.Popen) or by setting the FD_CLOEXEC flag to have exec automatically close the file descriptor. This flag is set automatically in Python-2.7 and later, see issue12786. We can get the Python-2.7 behavior in earlier versions of Python by calling

    p._set_cloexec_flag(p.stdin)
    

    before passing p.stdin as an argument to a subsequent subprocess.Popen.
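    Equivalently, a small sketch using only the public fcntl module to set the same flag by hand:

    import fcntl
    
    flags = fcntl.fcntl(p.stdin, fcntl.F_GETFD)
    fcntl.fcntl(p.stdin, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)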
