I wrote some simple code to test how many files can be open at once in a Python script:
s = 'some test data'  # any string to write; the exact content does not matter
fps = []
for i in xrange(2000):
    fp = open('files/file_%d' % i, 'w')
    fp.write(s)
    fps.append(fp)
The append is needed so that the garbage collector does not clean up the file objects and close the files.
Since this is not a Python problem, do this:
for x in xrange(2000):
    with open('files/file_%d' % x, 'r') as h:
        print h.read()
Appending the open handles, as in the following, is a very bad idea, because it keeps every file open at once:
fps.append(h)
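If you actually need what is in the files later, a minimal sketch along the same lines is to keep the data rather than the open handles (contents is just a name I picked):
contents = []
for x in xrange(2000):
    with open('files/file_%d' % x, 'r') as h:
        # keep the text, not the handle; the file is closed when the with block exits
        contents.append(h.read())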
The number of open files is limited by the operating system. On Linux you can type
ulimit -n
to see what the limit is. If you are root, you can type
ulimit -n 2048
Now your program will run OK (as root), since you have raised the limit to 2048 open files.
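If you want to watch the limit being hit from inside Python, a minimal sketch (assuming the files/ directory from the question exists and is writable) is to open files until the OS refuses and report how many succeeded:
import errno

fps = []
try:
    for i in xrange(10000):  # deliberately more than any usual limit
        fps.append(open('files/file_%d' % i, 'w'))
except IOError as e:
    if e.errno == errno.EMFILE:
        print 'Hit the per-process limit after opening', len(fps), 'files'
    else:
        raise
finally:
    for fp in fps:
        fp.close()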
Most likely this is because the operating system has a limit on the number of files that an application can have open.
I see the same behavior on Windows when running your code. The limit comes from the C runtime. You can use win32file to change the limit value:
import win32file
print win32file._getmaxstdio()
The above should give you 512, which explains the failure at file #509 (plus stdin, stdout and stderr, as others have already stated).
Execute the following and your code should run fine:
win32file._setmaxstdio(2048)
Note, though, that 2048 is the hard limit of the underlying C stdio layer. As a result, calling _setmaxstdio with a value greater than 2048 fails for me.
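Putting the two calls together, a small sketch to confirm the change took effect (using only the calls shown above) would be:
import win32file

print win32file._getmaxstdio()   # typically 512 by default

win32file._setmaxstdio(2048)     # 2048 is the most the C runtime accepts
print win32file._getmaxstdio()   # should now report 2048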
To check or change the limit on open file handles on Linux, you can use the Python resource module:
import resource
# the soft limit imposed by the current configuration
# the hard limit imposed by the operating system.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print 'Soft limit is ', soft
# For the following line to run, you need to execute the Python script as root.
resource.setrlimit(resource.RLIMIT_NOFILE, (3000, hard))
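After the setrlimit call, re-reading the limit should report the new soft value; a quick check might be:
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print 'New soft limit is', soft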
On Windows, I do as Punit S suggested:
import platform

if platform.system() == 'Windows':
    import win32file
    win32file._setmaxstdio(2048)
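Putting the Linux and Windows pieces together, a cross-platform helper might look like the sketch below; raise_open_file_limit and its target argument are names I made up, and it assumes pywin32 is installed on Windows:
import platform

def raise_open_file_limit(target=2048):
    # Raise the per-process open-file limit as far as the platform allows.
    if platform.system() == 'Windows':
        import win32file
        # The C runtime rejects values above 2048.
        win32file._setmaxstdio(target)
    else:
        import resource
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        # Without root you can only raise the soft limit up to the hard limit.
        if hard != resource.RLIM_INFINITY:
            target = min(target, hard)
        resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))

raise_open_file_limit()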