I am running a Python script that is failing with IOError: [Errno 28] No space left on device. The odd thing is that the same script runs on a different machine with no problems.
The difference is that on the machine causing the problems I am writing to an external hard drive. To make things even weirder, this script has already run on the problem machine and written over 30,000 files.
Some relevant information (The code that is causing the error):
nPage = 0
while nPage != -1:
    for d in data:                        # 'data', 'COUNT', 'get_records', 'nextPage'
        if len(d.contents) > 1:           # and 'mOut' are defined elsewhere in the script
            if '<script' in str(d.contents):
                l = str(d.contents[1])
                start = l.find('http://')
                end = l.find('>', start)
                out = get_records.openURL(l[start:end])
                print COUNT
                with open('../results/' + str(COUNT) + '.html', 'w') as f:
                    f.write(out)
                COUNT += 1
    nPage = nextPage(mOut, False)
The directory I'm writing to:
10:32@lorax:~/econ/estc/bin$ ll ../
total 56
drwxr-xr-x 3 boincuser boincuser 4096 2011-07-31 14:29 ./
drwxr-xr-x 3 boincuser boincuser 4096 2011-07-31 14:20 ../
drwxr-xr-x 2 boincuser boincuser 4096 2011-08-09 10:38 bin/
lrwxrwxrwx 1 boincuser boincuser 47 2011-07-31 14:21 results -> /media/cavalry/server_backup/econ/estc/results//
-rw-r--r-- 1 boincuser boincuser 44759 2011-08-09 10:32 test.html
Proof there is enough space:
10:38@lorax:~/econ/estc/bin$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.0G 5.3G 3.3G 63% /
none 495M 348K 495M 1% /dev
none 500M 164K 500M 1% /dev/shm
none 500M 340K 500M 1% /var/run
none 500M 0 500M 0% /var/lock
none 9.0G 5.3G 3.3G 63% /var/lib/ureadahead/debugfs
/dev/sdc10 466G 223G 244G 48% /media/cavalry
Some things I have tried:
- Changing the path of the write to the direct location instead of going through the link
- Rebooting the machine
- Unmounting and re-mounting the drive
The ENOSPC ("No space left on device") error will be triggered in any situation in which the data or the metadata associated with an I/O operation can't be written down anywhere because of lack of space. This doesn't always mean disk space – it could mean physical disk space, logical space (e.g. maximum file length), space in a certain data structure or address space. For example, you can get it if there isn't space in the directory table (vfat) or there aren't any inodes left. It roughly means "I can't find where to write this down".
Particularly in Python, this can happen on any write I/O operation. It can happen during f.write, but it can also happen on open, on f.flush and even on f.close. Where it happened provides a vital clue for the reason that it did – if it happened on open, there wasn't enough space to write the metadata for the entry; if it happened during f.write, f.flush or f.close, there wasn't enough disk space left or you've exceeded the maximum file size.
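If it helps to see exactly which call is failing, you can catch the exception and check its errno (a sketch; the file name and the placeholder content are made up, and IOError is used because that is what the question reports):

import errno

out = '<html>placeholder page</html>'             # stands in for the page fetched in the loop above

try:
    with open('../results/out.html', 'w') as f:   # hypothetical file name
        f.write(out)                              # the write itself ...
except IOError as e:                              # ... or the implicit flush/close can raise
    if e.errno == errno.ENOSPC:
        print('no space left on device: %s' % e)
    else:
        raise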
If the filesystem in the given directory is vfat, you'd hit the maximum file limit at about the same time that you did. The limit is supposed to be 2^16 directory entries, but if I recall correctly some other factors can affect it (e.g. some files require more than one entry).
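A quick sanity check is to count how many entries the results directory already holds and compare it with that suspected limit (a small sketch using os.listdir on the path from the question):

import os

entries = os.listdir('../results/')        # path from the question
print('%d entries (suspected vfat limit: %d)' % (len(entries), 2 ** 16))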
It would be best to avoid creating so many files in a single directory; few filesystems handle that many directory entries gracefully. Unless you're certain that your filesystem copes well with many files in one directory, consider another strategy, e.g. spreading the files across more directories, as sketched below.
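One possible version of that strategy is to bucket the output files by their counter so no single directory grows past a few thousand entries; the helper name and the 1000-files-per-bucket figure below are assumptions, not part of the original script:

import os

def sharded_path(count, root='../results', per_dir=1000):   # per_dir is an assumed bucket size
    """Return e.g. '../results/032/32766.html', creating the bucket directory if needed."""
    bucket = os.path.join(root, '%03d' % (count // per_dir))
    if not os.path.isdir(bucket):
        os.makedirs(bucket)
    return os.path.join(bucket, '%d.html' % count)

# inside the loop, instead of open('../results/'+str(COUNT)+'.html','w'):
#     with open(sharded_path(COUNT), 'w') as f:
#         f.write(out)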
P.S. Also, do not trust the reported free disk space – some filesystems reserve space for root, and others miscalculate the free space and give you a number that simply isn't true.
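The reserved-for-root effect is visible with os.statvfs: f_bfree counts every free block, while f_bavail counts only what an unprivileged process may use (a sketch pointed at the mount point from the df output above):

import os

st = os.statvfs('/media/cavalry')          # mount point from the df -h output above
total_free = st.f_bfree  * st.f_frsize     # all free blocks, including the reserved ones
usable     = st.f_bavail * st.f_frsize     # blocks a non-root process may actually use
print('free: %d MiB, usable by non-root: %d MiB'
      % (total_free // 2 ** 20, usable // 2 ** 20))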
Try to delete the temp files:
cd /tmp/
rm -r *
It turns out the best solution for me was simply to reformat the drive. Once it was reformatted, all of these problems went away.
In my case, running df -i showed that the inodes were full, so I had to delete some small files or folders; once the inodes are exhausted, the filesystem won't let you create any new files or folders even if space remains.
The trick is to delete files or folders that don't take up much space but are responsible for using up the inodes.
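To track down what is eating the inodes, a small walk that reports the directories holding the most entries can help (a sketch; the starting path is an assumption):

import os

counts = []
for dirpath, dirnames, filenames in os.walk('/media/cavalry'):   # assumed starting point
    counts.append((len(dirnames) + len(filenames), dirpath))

# the ten directories holding the most entries (and therefore consuming the most inodes)
for n, path in sorted(counts, reverse=True)[:10]:
    print('%8d  %s' % (n, path))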
Source: https://stackoverflow.com/questions/6998083/python-causing-ioerror-errno-28-no-space-left-on-device-results-32766-h