Question
I wrote a test suite that is heavily file intensive. After some time (about 2 hours) I get IOError: [Errno 24] Too many open files: '/tmp/tmpxsqYPm'. I double-checked that I close every file handle again, but the error still occurs.
I tried to figure out the number of allowed file descriptors using resource.RLIMIT_NOFILE, and the number of currently open file descriptors:
def get_open_fds():
    fds = []
    for fd in range(3, resource.RLIMIT_NOFILE):
        try:
            flags = fcntl.fcntl(fd, fcntl.F_GETFD)
        except IOError:
            continue
        fds.append(fd)
    return fds
So if I run the following test:
print get_open_fds()
for i in range(0, 100):
    f = open("/tmp/test_%i" % i, "w")
    f.write("test")
    print get_open_fds()
I get this output:
[]
/tmp/test_0
[3]
/tmp/test_1
[4]
/tmp/test_2
[3]
/tmp/test_3
[4]
/tmp/test_4
[3]
/tmp/test_5
[4] ...
That's strange; I expected an increasing number of open file descriptors. Is my script correct?
I'm also using Python's logging module and subprocess. Could they be the reason for my fd leak?
Thanks, Daniel
Answer 1:
Your test script overwrites f on each iteration, so the previous file gets closed each time (CPython closes it when the old file object is garbage-collected after f is rebound). Both logging to files and subprocess with pipes use up descriptors, which can lead to exhaustion.
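To illustrate both points, a minimal sketch (the path and command below are only placeholders, not from the original suite): closing files deterministically and draining subprocess pipes keeps the descriptor count flat instead of relying on garbage collection the way the test loop does.
import subprocess

# Close the file deterministically instead of relying on the old object
# being garbage-collected when f is rebound:
with open("/tmp/test_0", "w") as f:
    f.write("test")              # descriptor is released when the block exits

# Each Popen with a pipe holds descriptors until the pipe is drained and
# closed; communicate() reads the output and closes the pipe for us:
p = subprocess.Popen(["echo", "hello"], stdout=subprocess.PIPE)
out, _ = p.communicate()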
Answer 2:
The corrected code is:
import resource
import fcntl
import os

def get_open_fds():
    fds = []
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    for fd in range(0, soft):
        try:
            flags = fcntl.fcntl(fd, fcntl.F_GETFD)
        except IOError:
            continue
        fds.append(fd)
    return fds

def get_file_names_from_file_number(fds):
    names = []
    for fd in fds:
        names.append(os.readlink('/proc/self/fd/%d' % fd))
    return names

fds = get_open_fds()
print get_file_names_from_file_number(fds)
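As a Linux-specific alternative (a sketch assuming /proc is mounted; get_open_fds_proc is just an illustrative name), the same listing can be obtained by reading /proc/self/fd directly instead of probing every candidate fd with fcntl:
import os

def get_open_fds_proc():
    fd_dir = '/proc/self/fd'
    fds = []
    for entry in os.listdir(fd_dir):
        try:
            fds.append((int(entry), os.readlink(os.path.join(fd_dir, entry))))
        except OSError:
            # the descriptor listdir() used for the directory itself is
            # closed again by the time we get here, so readlink() may fail
            continue
    return fds

print get_open_fds_proc()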
Answer 3:
resource.RLIMIT_NOFILE is indeed 7, but that's just a constant you pass to resource.getrlimit(), not the limit itself; resource.getrlimit(resource.RLIMIT_NOFILE) is what you want the top of your range() to be.
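In other words (the limit values shown are only typical examples):
import resource

print resource.RLIMIT_NOFILE                      # the constant, e.g. 7 on Linux
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print soft, hard                                  # the actual limits, e.g. 1024 4096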
Source: https://stackoverflow.com/questions/4386482/too-many-open-files-in-python