blocks - send input to python subprocess pipeline

轻奢々 2021-01-30 09:22

I'm testing subprocess pipelines with Python. I'm aware that I can do what the programs below do in Python directly, but that's not the point. I just want to test the pipeline.

11 Answers
  • 2021-01-30 09:44

    I found out how to do it.

    It is not about threads, and not about select().

    When I run the first process (grep), it creates two low-level file descriptors, one for each pipe. Let's call those a and b.

    When I run the second process, b gets passed to cut's stdin. But there is a brain-dead default on Popen: close_fds=False.

    The effect of that is that cut also inherits a. So grep can't exit even after I close a, because the write end of its stdin pipe is still open in cut's process (cut never uses it, but it keeps the pipe alive).

    The following code now runs perfectly.

    from subprocess import Popen, PIPE

    p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
    # close_fds=True keeps p2 from inheriting the write end of p1's stdin pipe
    p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE, close_fds=True)
    p1.stdin.write(b'Hello World\n')   # bytes, so this also runs on Python 3
    p1.stdin.close()
    result = p2.stdout.read()
    assert result == b"Hello Worl\n"
    

    close_fds=True SHOULD BE THE DEFAULT on unix systems (and since Python 3.2 it is). On Windows it closes all fds, so it prevents piping.

    EDIT:

    PS: For people with a similar problem reading this answer: as pooryorick said in a comment, this can also block if the data written to p1.stdin is bigger than the pipe buffers. In that case you should feed the data in smaller chunks and use select.select() to know when to read/write. The code in the question should give a hint on how to implement that.
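
    For illustration only, here is a minimal sketch of that chunked select() loop (my own addition, not from the original answer; the grep | cut pipeline and the 4096-byte chunk size are assumptions):

    import os
    import select
    from subprocess import Popen, PIPE

    p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
    p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE, close_fds=True)
    p1.stdout.close()              # only p2 should hold the read end of the first pipe

    payload = b"Hello World\n" * 100000   # big enough to overflow the pipe buffers
    chunks = []
    while True:
        writers = [p1.stdin] if payload else []
        readable, writable, _ = select.select([p2.stdout], writers, [])
        if p2.stdout in readable:
            data = os.read(p2.stdout.fileno(), 4096)
            if not data:           # EOF: the whole pipeline has drained
                break
            chunks.append(data)
        if payload and p1.stdin in writable:
            n = os.write(p1.stdin.fileno(), payload[:4096])  # write only what fits
            payload = payload[n:]
            if not payload:
                p1.stdin.close()   # signal EOF to grep once everything is sent
    result = b"".join(chunks)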

    EDIT2: Found another solution, with more help from pooryorick - instead of using close_fds=True and closing ALL fds, one can close the fds that belong to the first process when executing the second, and it will work. The closing must be done in the child, so the preexec_fn argument of Popen comes in very handy for doing just that. On executing p2 you can do:

    # devnull here is assumed to be an open file object for os.devnull
    p2 = Popen(cmd2, stdin=p1.stdout, stdout=PIPE, stderr=devnull, preexec_fn=p1.stdin.close)
    
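    Spelled out as a runnable sketch (my own completion of the snippet above, reusing the question's grep | cut pipeline and opening devnull myself - those details are assumptions, not part of the original answer):

    import os
    from subprocess import Popen, PIPE

    p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
    devnull = open(os.devnull, "wb")
    # close_fds=False mimics the old Python 2 default, so the child would normally
    # inherit p1.stdin; preexec_fn closes it in the child between fork() and exec(),
    # which lets grep see EOF once the parent closes its own copy.
    p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE,
               stderr=devnull, close_fds=False, preexec_fn=p1.stdin.close)
    p1.stdin.write(b"Hello World\n")
    p1.stdin.close()
    devnull.close()
    assert p2.stdout.read() == b"Hello Worl\n"
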
  • 2021-01-30 09:48

    You must do this in several threads. Otherwise, you'll end up in a situation where you can't send data: p1 won't read your input, since p2 isn't reading p1's output, because you aren't reading p2's output.

    So you need a background thread that reads whatever p2 writes. That allows p2 to continue after writing some data to the pipe, so it can read the next line of input from p1, which in turn allows p1 to process the data you send to it.

    Alternatively, you can send the data to p1 from a background thread and read the output from p2 in the main thread. But one side or the other must run in a thread.
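
    A minimal sketch of that arrangement (my own illustration, borrowing the grep | cut pipeline from the question; the background thread writes while the main thread drains p2):

    import threading
    from subprocess import Popen, PIPE

    p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
    p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
    p1.stdout.close()            # only p2 should keep the read end of the first pipe

    def feed():
        # background writer: push data into p1, then close to signal EOF
        for _ in range(100000):
            p1.stdin.write(b"Hello World\n")
        p1.stdin.close()

    writer = threading.Thread(target=feed)
    writer.start()

    output = p2.stdout.read()    # the main thread keeps draining p2
    writer.join()
    print(len(output))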

  • 2021-01-30 09:52

    There are three main tricks to making pipes work as expected:

    1. Make sure each end of the pipe is used in a different thread/process (some of the examples near the top suffer from this problem).

    2. Explicitly close the unused end of the pipe in each process (see the sketch after this list).

    3. Deal with buffering by either disabling it (Python's -u option), using ptys, or simply filling up the buffer with something that won't affect the data (maybe '\n', or whatever fits).
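
    A minimal sketch of trick 2 applied to the question's grep | cut pipeline (my own illustration, not taken from the pipeline module; tricks 1 and 3 are covered by the thread/fork answers elsewhere on this page):

    from subprocess import Popen, PIPE

    p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
    p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
    p1.stdout.close()                  # the parent no longer needs the read end of the first pipe
    p1.stdin.write(b"Hello World\n")   # a small write, so no deadlock here
    p1.stdin.close()                   # lets grep see EOF and exit
    print(p2.stdout.read())            # b'Hello Worl\n'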

    The examples in the Python "pipeline" module (I'm the author) fit your scenario exactly, and make the low-level steps fairly clear.

    http://pypi.python.org/pypi/pipeline/

    More recently, I used the subprocess module as part of a producer-processor-consumer-controller pattern:

    http://www.darkarchive.org/w/Pub/PythonInteract

    This example deals with buffered stdin without resorting to a pty, and it also illustrates which pipe ends should be closed where. I prefer processes to threading, but the principle is the same. Additionally, it illustrates the synchronizing Queues that feed the producer and collect output from the consumer, and how to shut them down cleanly (look out for the sentinels inserted into the queues). This pattern allows new input to be generated based on recent output, allowing for recursive discovery and processing.
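
    The code at that link is longer, but a compressed sketch of the same queue-and-sentinel pattern looks roughly like this (my own reconstruction, assuming a trivial cut filter rather than the real processor):

    from multiprocessing import Process, Queue
    from subprocess import Popen, PIPE
    from threading import Thread

    SENTINEL = None   # placed on a queue to mean "no more items"

    def processor(inq, outq):
        # Feed queued lines through `cut -c 1-10` and queue up its output.
        p = Popen(["cut", "-c", "1-10"], stdin=PIPE, stdout=PIPE)

        def drain():
            # a helper thread drains cut's stdout so writes to its stdin can't deadlock
            for line in p.stdout:
                outq.put(line)
            outq.put(SENTINEL)

        t = Thread(target=drain)
        t.start()
        while (item := inq.get()) is not SENTINEL:
            p.stdin.write(item)
        p.stdin.close()    # EOF lets cut flush, finish, and end the drain thread
        t.join()

    if __name__ == "__main__":
        inq, outq = Queue(), Queue()
        worker = Process(target=processor, args=(inq, outq))
        worker.start()
        for i in range(5):                            # the "producer"
            inq.put(b"Hello World %d\n" % i)
        inq.put(SENTINEL)                             # clean shutdown signal
        while (line := outq.get()) is not SENTINEL:   # the "consumer"
            print(line)
        worker.join()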

  • 2021-01-30 09:52

    Here's an example of using Popen together with os.fork to accomplish the same thing. Instead of using close_fds it just closes the pipe ends in the right places. It is much simpler than trying to use select.select(), and it takes full advantage of the system's pipe buffers.

    from subprocess import Popen, PIPE
    import os
    import sys

    p1 = Popen(["cat"], stdin=PIPE, stdout=PIPE)

    pid = os.fork()

    if pid:  # parent
        p1.stdin.close()   # the forked child owns the write end now
        p2 = Popen(["cat"], stdin=p1.stdout, stdout=PIPE)
        data = p2.stdout.read()
        sys.stdout.buffer.write(data)   # bytes in, bytes out on Python 3
        p2.stdout.close()

    else:  # child
        data_to_write = b'hello world\n' * 100000
        p1.stdin.write(data_to_write)
        p1.stdin.close()
        os._exit(0)        # don't fall through into the parent's code path
    
  • 2021-01-30 09:56

    It's much simpler than you think!

    import sys
    from subprocess import Popen, PIPE
    
    # Pipe the command here. It will read from stdin.
    #   So cat a file to stdin, like (cat myfile | ./this.py),
    #     or type on the terminal and hit control+d when done, etc.
    #   No need to handle this yourself, that's why we have shells!
    p = Popen("grep -v not | cut -c 1-10", shell=True, stdout=PIPE)

    nextData = None
    while True:
        nextData = p.stdout.read()
        if nextData in (b'', ''):
            break
        sys.stdout.write(nextData.decode('utf-8'))
    
    
    p.wait()
    

    This code is written for Python 3.6, and it also works with Python 2.7.

    Use it like:

    cat README.md  | python ./example.py
    

    or

    python example.py < README.md
    

    To pipe the contents of "README.md" to this program.

    But at this point, why not just use "cat" directly and pipe the output the way you want? Like:

    cat filename | grep -v not | cut -c 1-10
    

    typed into the console will do the job as well. I personally would only use the code option if I were further processing the output; otherwise a shell script would be easier to maintain.

    You just use the shell to do the piping for you: in one end, out the other. That's what shells are GREAT at doing - managing processes and managing simple linear chains of input and output. Some would call it a shell's best non-interactive feature.
