For example:
script1.py gets an infix expression from the user, converts it to a postfix expression, and returns it or prints it to stdout.
script2.py gets a postfix expression (for example, from script1.py's output) and processes it.
If I understand your issue correctly, your two scripts each write out a prompt for input. For instance, they could both be something like this:
in_string = input("Enter something")
print(some_function(in_string))
where some_function is a function that has different output depending on the input string (and which may be different in each script).
The issue is that the "Enter something" prompt doesn't get displayed to the user correctly when the output of one script is being piped to another script. That's because the prompt is written to standard output, so the first script's prompt is piped to the second script, while the second script's prompt is displayed. That's misleading, since it's the first script that will (directly) receive input from the user. The prompt text may also mess up the data being passed between the two scripts.
There's no perfect solution to this issue. One partial solution is to write the prompt to standard error, rather than standard output. This would let you see both prompts (though you'd only actually be able to respond to one of them). I don't think you can do that directly with input, but print can write to other file streams if you want: print("prompt", file=sys.stderr) (after import sys).
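A minimal sketch of that approach (the prompt_stderr helper is my own name, not a standard function):

import sys

def prompt_stderr(prompt):
    # Write the prompt to stderr so it never ends up in the stdout pipe.
    print(prompt, end="", file=sys.stderr, flush=True)
    # input() with no argument reads stdin without printing anything.
    return input()

in_string = prompt_stderr("Enter something: ")
print(in_string)

Since only the result is written to stdout, the data flowing through the pipe stays clean.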
Another partial solution is to check your input and output streams and skip printing the prompts if either one is not a "tty" (terminal). In Python, you can check with sys.stdin.isatty() (and likewise sys.stdout.isatty()). Many command-line programs have a different "interactive mode" if they're connected directly to the user, rather than to a pipe or a file.
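A sketch of that check, reusing the schematic some_function from above:

import sys

if sys.stdin.isatty():
    # Interactive: stdin is a terminal, so prompting makes sense.
    in_string = input("Enter something")
else:
    # Part of a pipeline: read the piped data silently, with no prompt.
    in_string = sys.stdin.readline().rstrip("\n")
print(some_function(in_string))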
If piping the output around is a main feature of your program, you may not want to use prompts at all! Many standard Unix command-line programs (like cat and grep) don't have any interactive behavior. They require the user to pass command-line arguments or set environment variables to control how they run. That lets them work as expected even when standard input and standard output are connected to pipes or files rather than to a terminal.
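For instance, a sketch of that argument-driven style; the --upper flag here is purely hypothetical:

import argparse
import sys

parser = argparse.ArgumentParser(description="Filter stdin to stdout with no prompts.")
parser.add_argument("--upper", action="store_true", help="hypothetical flag: uppercase each line")
args = parser.parse_args()

for line in sys.stdin:
    sys.stdout.write(line.upper() if args.upper else line)

All configuration comes from the arguments, so the script never needs to prompt, and it behaves the same whether stdin is a terminal, a pipe, or a redirected file.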
For example, if you have nginx running and script1.py is:
import os
os.system("ps aux")
and script2.py is:
import os
os.system("grep nginx")
Then running:
python script1.py | python script2.py
will be the same as:
ps aux | grep nginx
That's because the commands launched by os.system inherit the Python scripts' standard streams, which the shell has already connected through the pipe: ps aux writes into the pipe, and grep nginx reads from it.
For completeness' sake, and to offer an alternative to using the os module: the fileinput module takes care of piping for you, and from running a simple test I believe it makes for an easy implementation.
To enable your files to support piped input, simply do this:
import fileinput

with fileinput.input() as f_input:  # This gets the piped data for you
    for line in f_input:
        print(line, end="")  # do stuff with each line of piped data; here it's just echoed
all you'd have to do then is:
$ cat some_textfile.txt | ./myscript.py
Note that fileinput also enables data input for your scripts like so:
$ ./myscript.py some_textfile.txt
$ ./myscript.py < some_textfile.txt
This works with Python's print output just as easily:
# test.py - prints the contents of some_textfile.txt
with open('some_textfile.txt', 'r') as f:
    for line in f:
        print(line, end='')  # end='' avoids doubling the newlines already in the file
$ ./test.py | ./myscript.py
Of course, don't forget the hashbang #!/usr/bin/env python at the top of your scripts for this to work.
The recipe is featured in Beazley & Jones's Python Cookbook - I wholeheartedly recommend it.
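Putting the pieces together, a sketch of what myscript.py might look like (the line-numbering behavior is just an illustration):

#!/usr/bin/env python
import fileinput

# Echo whatever arrives - piped data, a filename argument, or redirected
# input - prefixing each line with its cumulative line number.
with fileinput.input() as f_input:
    for line in f_input:
        print(f"{fileinput.lineno()}: {line}", end="")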