I have a very simple Python 3 script:
f1 = open('a.txt', 'r')
print(f1.readlines())
f2 = open('b.txt', 'r')
print(f2.readlines())
f3 = open('c.txt', 'r')
print(f3.readlines())
f4 = open('d.txt', 'r')
print(f4.readlines())
f1.close()
f2.close()
f3.close()
f4.close()
But it always says:
IOError: [Errno 32] Broken pipe
I saw on the internet all the complicated ways to fix this, but I copied this code directly, so I think that there is something wrong with the code and not Python's SIGPIPE.
I am redirecting the output, so if the above script was named "open.py", then my command to run would be:
open.py | othercommand
I haven't reproduced the issue, but perhaps this method would solve it (writing line by line to stdout rather than using print):
import sys

with open('a.txt', 'r') as f1:
    for line in f1:
        sys.stdout.write(line)
You could also catch the broken pipe error. This writes the file to stdout line by line until the pipe is closed:
import sys, errno

try:
    with open('a.txt', 'r') as f1:
        for line in f1:
            sys.stdout.write(line)
except IOError as e:
    if e.errno == errno.EPIPE:
        pass  # Handle the broken pipe here
You also need to make sure that othercommand is reading from the pipe before the pipe buffer fills up - see https://unix.stackexchange.com/questions/11946/how-big-is-the-pipe-buffer
To bring Alex L.'s, akhan's, and Blckknght's helpful answers together with some additional information:
- The standard Unix signal SIGPIPE is sent to a process writing to a pipe when there's no process reading from the pipe (anymore).
  - This is not necessarily an error condition; some Unix utilities such as head by design stop reading prematurely from a pipe, once they've received enough data.
- By default - i.e., if the writing process does not explicitly trap SIGPIPE - the writing process is simply terminated, and its exit code is set to 141, which is calculated as 128 (to signal termination by a signal in general) + 13 (SIGPIPE's specific signal number).
- By design, however, Python itself traps SIGPIPE and translates it into a Python IOError instance with errno value errno.EPIPE, so that a Python script can catch it, if it so chooses - see Alex L.'s answer for how to do that.
- If a Python script does not catch it, Python outputs the error message IOError: [Errno 32] Broken pipe and terminates the script with exit code 1 - this is the symptom the OP saw.
- In many cases this is more disruptive than helpful, so reverting to the default behavior is desirable:
  - The signal module allows just that, as stated in akhan's answer; signal.signal() takes the signal to handle as the 1st argument and a handler as the 2nd; the special handler value SIG_DFL represents the system's default behavior:

    from signal import signal, SIGPIPE, SIG_DFL
    signal(SIGPIPE, SIG_DFL)
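Putting that together, a minimal sketch of a script that restores the default SIGPIPE behavior before writing to stdout (POSIX only; SIGPIPE does not exist on Windows, and the loop here is just a stand-in for whatever your script actually prints):

from signal import signal, SIGPIPE, SIG_DFL
import sys

# Restore default SIGPIPE handling: if the reader goes away, this process
# is terminated silently (exit status 141) instead of raising
# IOError / BrokenPipeError.
signal(SIGPIPE, SIG_DFL)

for i in range(100000):                  # stand-in for the script's real output
    sys.stdout.write("line %d\n" % i)

Run as python script.py | head -n 5, it should exit with status 141 and no traceback.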
A "Broken Pipe" error occurs when you try to write to a pipe that has been closed on the other end. Since the code you've shown doesn't involve any pipes directly, I suspect you're doing something outside of Python to redirect the standard output of the Python interpreter to somewhere else. This could happen if you're running a script like this:
python foo.py | someothercommand
The issue you have is that someothercommand is exiting without reading everything available on its standard input. This causes your write (via print) to fail at some point.
I was able to reproduce the error with the following command on a Linux system:
python -c 'for i in range(1000): print i' | less
If I close the less pager without scrolling through all of its input (1000 lines), Python exits with the same IOError you have reported.
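For reference, on Python 3 the same experiment fails with BrokenPipeError (a subclass of OSError) instead of IOError; for example, using head so the reader exits early and enough lines that the pipe buffer fills up:

python3 -c 'for i in range(100000): print(i)' | head -n 5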
I feel obliged to point out that the method using signal(SIGPIPE, SIG_DFL) is indeed dangerous (as already suggested by David Bennet in the comments) and in my case led to platform-dependent funny business when combined with multiprocessing.Manager (because the standard library relies on BrokenPipeError being raised in several places). To make a long and painful story short, this is how I fixed it:
First, you need to catch the IOError (Python 2) or BrokenPipeError (Python 3). Depending on your program you can try to exit early at that point or just ignore the exception:
from errno import EPIPE

try:
    broken_pipe_exception = BrokenPipeError
except NameError:  # Python 2
    broken_pipe_exception = IOError

try:
    YOUR CODE GOES HERE
except broken_pipe_exception as exc:
    if broken_pipe_exception == IOError:
        if exc.errno != EPIPE:
            raise
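As an illustration, here is the same template with a hypothetical body that just streams a file to stdout (the file name and loop are placeholders):

from errno import EPIPE
import sys

try:
    broken_pipe_exception = BrokenPipeError
except NameError:  # Python 2
    broken_pipe_exception = IOError

try:
    # placeholder body: stream a file to stdout
    with open('a.txt', 'r') as f:
        for line in f:
            sys.stdout.write(line)
except broken_pipe_exception as exc:
    # On Python 2, IOError covers more than broken pipes, so re-raise
    # anything that is not EPIPE; on Python 3 the except clause already
    # matches only BrokenPipeError.
    if broken_pipe_exception == IOError and exc.errno != EPIPE:
        raise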
However, this isn't enough. Python 3 may still print a message like this:
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>
BrokenPipeError: [Errno 32] Broken pipe
Unfortunately getting rid of that message is not straightforward, but I finally found http://bugs.python.org/issue11380 where Robert Collins suggests this workaround that I turned into a decorator you can wrap your main function with (yes, that's some crazy indentation):
from functools import wraps
from sys import exit, stderr, stdout
from traceback import print_exc

def suppress_broken_pipe_msg(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        try:
            return f(*args, **kwargs)
        except SystemExit:
            raise
        except:
            print_exc()
            exit(1)
        finally:
            try:
                stdout.flush()
            finally:
                try:
                    stdout.close()
                finally:
                    try:
                        stderr.flush()
                    finally:
                        stderr.close()
    return wrapper

@suppress_broken_pipe_msg
def main():
    YOUR CODE GOES HERE
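As a sketch of how the two pieces of this answer fit together (the loop body is only a stand-in for real work, and suppress_broken_pipe_msg is the decorator defined above):

import sys
from errno import EPIPE

try:
    broken_pipe_exception = BrokenPipeError
except NameError:  # Python 2
    broken_pipe_exception = IOError

@suppress_broken_pipe_msg
def main():
    try:
        for i in range(100000):            # stand-in for the script's real output
            sys.stdout.write("line %d\n" % i)
    except broken_pipe_exception as exc:
        if broken_pipe_exception == IOError and exc.errno != EPIPE:
            raise

if __name__ == '__main__':
    main()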
This can also occur if the read end of your script's output dies prematurely. That is, with open.py | otherCommand, if otherCommand exits while open.py is still trying to write to stdout. I had a bad gawk script that did this lovely thing to me.
I know this is not the "proper" way to do it, but if you are simply interested in getting rid of the error message, you could try this workaround:
python your_python_code.py 2> /dev/null | other_command
Closes should be done in reverse order of the opens.
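One way to get that ordering for free is a with statement, which closes the files in the reverse order they were opened when the block exits; a minimal sketch using two of the question's files:

# both files are closed automatically on leaving the block, f2 before f1
with open('a.txt', 'r') as f1, open('b.txt', 'r') as f2:
    print(f1.readlines())
    print(f2.readlines())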
Source: https://stackoverflow.com/questions/14207708/ioerror-errno-32-broken-pipe-python