Question
I want to redirect print output to a .txt file using Python. I have a 'for' loop, which 'print's the output for each of my .bam files, while I want to redirect ALL of this output to one file. So I tried to put
f = open('output.txt', 'w'); sys.stdout = f
at the beginning of my script. However, I get nothing in the .txt file. My script is:
#!/usr/bin/python
import os, sys
import subprocess
import glob
from os import path

f = open('output.txt', 'w')
sys.stdout = f

path = '/home/xug/nearline/bamfiles'
bamfiles = glob.glob(path + '/*.bam')
for bamfile in bamfiles:
    filename = bamfile.split('/')[-1]
    print 'Filename:', filename
    samtoolsin = subprocess.Popen(["/share/bin/samtools/samtools", "view", bamfile],
                                  stdout=subprocess.PIPE, bufsize=1)
    linelist = samtoolsin.stdout.readlines()
    print 'Readlines finished!'
    print ....
    print ....
So what's the problem? Is there any other way besides this sys.stdout?
I need my result to look like:
Filename: ERR001268.bam
Readlines finished!
Mean: 233
SD: 10
Interval is: (213, 252)
Answer 1:
The most obvious way to do this would be to print to a file object:
with open('out.txt', 'w') as f:
    print >> f, 'Filename:', filename        # Python 2.x
    print('Filename:', filename, file=f)     # Python 3.x
However, redirecting stdout also works for me. It is probably fine for a one-off script such as this:
import sys
orig_stdout = sys.stdout
f = open('out.txt', 'w')
sys.stdout = f
for i in range(2):
    print 'i = ', i
sys.stdout = orig_stdout
f.close()
Redirecting externally from the shell itself is another good option:
./script.py > out.txt
Other questions:
What is the first filename in your script? I don't see it initialized.
My first guess is that glob doesn't find any bamfiles, and therefore the for loop doesn't run. Check that the folder exists, and print out bamfiles in your script.
Also, use os.path.join and os.path.basename to manipulate paths and filenames.
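For example, a small sketch of that path handling, using the directory from the question (Python 3 print syntax here):
import glob
import os

path = '/home/xug/nearline/bamfiles'
bamfiles = glob.glob(os.path.join(path, '*.bam'))
print(bamfiles)  # verify that glob actually found something

for bamfile in bamfiles:
    filename = os.path.basename(bamfile)  # instead of bamfile.split('/')[-1]
    print('Filename:', filename)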
Answer 2:
You can redirect print with the >> operator.
f = open(filename,'w')
print >>f, 'whatever' # Python 2.x
print('whatever', file=f) # Python 3.x
In most cases, you're better off just writing to the file normally.
f.write('whatever')
or, if you have several items you want to write with spaces between, like print does:
f.write(' '.join(('whatever', str(var2), 'etc')))
Answer 3:
Python 2 or Python 3 API reference:
print(*objects, sep=' ', end='\n', file=sys.stdout, flush=False)
The file argument must be an object with a write(string) method; if it is not present or None, sys.stdout will be used. Since printed arguments are converted to text strings, print() cannot be used with binary mode file objects. For these, use file.write(...) instead.
Since a file object normally has a write() method, all you need to do is pass a file object as the file argument.
Write/Overwrite to File
with open('file.txt', 'w') as f:
    print('hello world', file=f)
Write/Append to File
with open('file.txt', 'a') as f:
    print('hello world', file=f)
Answer 4:
This works perfectly:
import sys
sys.stdout = open("test.txt", "w")
print("hello")
sys.stdout.close()
Now "hello" will be written to the test.txt file. Make sure to close the file (sys.stdout.close()); without it, the content will not be saved to the file.
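If you still need console output afterwards, one variation on the snippet above (again using Python 3 syntax) is to restore the original stream before continuing:
import sys

f = open("test.txt", "w")
sys.stdout = f
print("hello")               # goes to test.txt
sys.stdout = sys.__stdout__  # restore the interpreter's original stdout
f.close()
print("back on the console")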
Answer 5:
Don't use print, use logging
You can change sys.stdout to point to a file, but this is a pretty clunky and inflexible way to handle this problem. Instead of using print, use the logging module.
With logging, you can print just like you would to stdout, or you can also write the output to a file. You can even use the different message levels (critical, error, warning, info, debug) to, for example, only print major issues to the console, but still log minor code actions to a file.
A simple example
Import logging, get the logger, and set the processing level:
import logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG) # process everything, even if everything isn't printed
If you want to print to stdout:
ch = logging.StreamHandler()
ch.setLevel(logging.INFO) # or any other level
logger.addHandler(ch)
If you want to also write to a file (if you only want to write to a file, skip the previous section):
fh = logging.FileHandler('myLog.log')
fh.setLevel(logging.DEBUG) # or any level you want
logger.addHandler(fh)
Then, wherever you would use print, use one of the logger methods:
# print(foo)
logger.debug(foo)
# print('finishing processing')
logger.info('finishing processing')
# print('Something may be wrong')
logger.warning('Something may be wrong')
# print('Something is going really bad')
logger.error('Something is going really bad')
To learn more about using more advanced logging features, read the excellent logging tutorial in the Python docs.
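Putting the pieces together, a minimal end-to-end sketch (the handler levels and the log file name are just illustrative choices):
import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)           # let the logger process everything

ch = logging.StreamHandler()             # console handler: INFO and above
ch.setLevel(logging.INFO)
logger.addHandler(ch)

fh = logging.FileHandler('myLog.log')    # file handler: everything down to DEBUG
fh.setLevel(logging.DEBUG)
logger.addHandler(fh)

logger.info('Filename: ERR001268.bam')   # appears on the console and in the file
logger.debug('Readlines finished!')      # appears only in the file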
Answer 6:
The easiest solution isn't through Python; it's through the shell. From the first line of your file (#!/usr/bin/python) I'm guessing you're on a UNIX system. Just use print statements like you normally would, and don't open the file at all in your script. When you go to run the file, instead of
./script.py
to run the file, use
./script.py > <filename>
where you replace <filename> with the name of the file you want the output to go into. The > token tells (most) shells to set stdout to the file described by the following token.
One important thing that needs to be mentioned here is that script.py needs to be made executable for ./script.py to run.
So before running ./script.py, execute this command:
chmod a+x script.py
(this makes the script executable for all users)
Answer 7:
You may not like this answer, but I think it's the RIGHT one. Don't change your stdout destination unless it's absolutely necessary (maybe you're using a library that only outputs to stdout??? clearly not the case here).
I think as a good habit you should prepare your data ahead of time as a string, then open your file and write the whole thing at once. With input/output operations, the longer you have a file handle open, the more likely an error is to occur with that file (file lock error, I/O error, etc.). Doing it all in one operation leaves no question about when it might have gone wrong.
Here's an example:
out_lines = []
for bamfile in bamfiles:
    filename = bamfile.split('/')[-1]
    out_lines.append('Filename: %s' % filename)
    samtoolsin = subprocess.Popen(["/share/bin/samtools/samtools", "view", bamfile],
                                  stdout=subprocess.PIPE, bufsize=1)
    linelist = samtoolsin.stdout.readlines()
    print 'Readlines finished!'
    out_lines.extend(linelist)
    out_lines.append('\n')
And then, when you're all done collecting your "data lines", one line per list item, you can join them with some '\n' characters to make the whole thing outputtable; maybe even wrap your output statement in a with block for additional safety (it will automatically close your output handle even if something goes wrong):
out_string = '\n'.join(out_lines)
out_filename = 'myfile.txt'
with open(out_filename, 'w') as outf:
    outf.write(out_string)
print "YAY MY STDOUT IS UNTAINTED!!!"
However if you have lots of data to write, you could write it one piece at a time. I don't think it's relevant to your application but here's the alternative:
out_filename = 'myfile.txt'
outf = open(out_filename, 'w')
for bamfile in bamfiles:
    filename = bamfile.split('/')[-1]
    outf.write('Filename: %s' % filename)
    samtoolsin = subprocess.Popen(["/share/bin/samtools/samtools", "view", bamfile],
                                  stdout=subprocess.PIPE, bufsize=1)
    mydata = samtoolsin.stdout.read()
    outf.write(mydata)
outf.close()
Answer 8:
If you're using Linux, I suggest you use the tee command. The invocation goes like this:
python python_file.py | tee any_file_name.txt
If you don't want to change anything in the code, I think this might be the best possible solution. You can also implement a logger, but that requires some changes to the code.
Answer 9:
If redirecting stdout works for your problem, Gringo Suave's answer is a good demonstration of how to do it.
To make it even easier, I made a version using context managers for a succinct, generalized calling syntax with the with statement:
from contextlib import contextmanager
import sys

@contextmanager
def redirected_stdout(outstream):
    orig_stdout = sys.stdout
    try:
        sys.stdout = outstream
        yield
    finally:
        sys.stdout = orig_stdout
To use it, you just do the following (derived from Suave's example):
with open('out.txt', 'w') as outfile:
    with redirected_stdout(outfile):
        for i in range(2):
            print('i =', i)
It's useful for selectively redirecting print when a module uses it in a way you don't like. The only disadvantage (and this is the dealbreaker for many situations) is that it doesn't work if one wants multiple threads with different values of stdout, but that requires a better, more generalized method: indirect module access. You can see implementations of that in other answers to this question.
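For reference, Python 3.4+ ships a ready-made equivalent of the helper above, contextlib.redirect_stdout, which is used the same way:
from contextlib import redirect_stdout

with open('out.txt', 'w') as outfile:
    with redirect_stdout(outfile):
        for i in range(2):
            print('i =', i)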
Answer 10:
Changing the value of sys.stdout does change the destination of all calls to print. If you use an alternative way to change the destination of print, you will get the same result.
Your bug is somewhere else:
- it could be in the code you removed for your question (where does filename come from for the call to open?)
- it could also be that you are not waiting for data to be flushed: if you print on a terminal, data is flushed after every new line, but if you print to a file, it's only flushed when the stdout buffer is full (4096 bytes on most systems); see the flush sketch below.
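For instance, if buffering is the issue, you can force a flush explicitly (a minimal sketch, assuming Python 3):
import sys

f = open('output.txt', 'w')
sys.stdout = f

print('Filename: ERR001268.bam')
sys.stdout.flush()   # push the buffered output to the file immediately
# In Python 3.3+, print('...', flush=True) does the same thing per call.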
Answer 11:
Something to extend the print function for loops:
x = 0
while x <= 5:
    x = x + 1
    with open('outputEis.txt', 'a') as f:
        print(x, file=f)
# No f.close() is needed: the with statement closes the file automatically.
Source: https://stackoverflow.com/questions/7152762/how-to-redirect-print-output-to-a-file-using-python