Question
I have a file open for writing, and a process that has been running for days; something is written to the file at relatively random moments. My understanding is that until I call file.close(), there is a chance nothing is really saved to disk. Is that true?
What if the system crashes while the main process is still running? Is there a way to do a kind of commit, say, once every 10 minutes (and I call this commit myself, so there is no need to run a timer)? Are file.close() followed by open(file, 'a') the only way, or are there better alternatives?
Answer 1:
You should be able to use file.flush() to do this.
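For instance, a minimal sketch (the file and function names are my own, for illustration) of calling flush() at self-chosen commit points:

    log = open("results.log", "a")

    def record(line):
        log.write(line + "\n")

    def commit():
        # Push Python's internal buffer out to the operating system.
        # After this, the data survives a crash of the Python process
        # (though not necessarily an OS crash or power failure).
        log.flush()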
Answer 2:
If you don't want to kill the current process just to add f.flush() (it sounds like it has already been running for days?), you should be OK. If you can see the file you are writing to getting bigger, you will not lose that data...
From Python docs:
write(str): Write a string to the file. There is no return value. Due to buffering, the string may not actually show up in the file until the flush() or close() method is called.
It sounds like Python's buffering system will flush file objects automatically (for example, when the buffer fills or the file is closed), but exactly when that happens is not guaranteed.
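A small demonstration of that buffering (the file name here is hypothetical): with the default block buffering, a short write may not appear on disk until flush() is called.

    import os

    f = open("demo.txt", "w")
    f.write("hello")
    # The data is likely still in Python's internal buffer, so the
    # file on disk is often still empty at this point.
    print(os.path.getsize("demo.txt"))  # typically 0
    f.flush()
    print(os.path.getsize("demo.txt"))  # 5
    f.close()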
Answer 3:
To make sure that your data is written to disk, use file.flush() followed by os.fsync(file.fileno()).
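A minimal sketch of a commit helper built from those two calls (the file name and function name are mine, not part of the answer):

    import os

    f = open("data.log", "a")

    def commit(fileobj):
        fileobj.flush()              # Python's buffer -> OS buffer
        os.fsync(fileobj.fileno())   # ask the OS to write through to the physical disk

    f.write("checkpoint reached\n")
    commit(f)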
Answer 4:
As has already been stated, use the .flush() method to force the buffer to be written out. But avoid calling flush a lot, as this can actually slow your writing down (if the application relies on fast writes): you will be forcing your filesystem to write changes that are smaller than its buffer size, which can bring it to its knees. :)
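To reconcile this with the question's "commit every 10 minutes, and I call it myself" idea, one possibility (the interval, file name, and function name are assumptions, not from the answer) is to check the elapsed time on each write and only pay the flush/fsync cost when the interval has passed:

    import os
    import time

    COMMIT_INTERVAL = 600  # seconds; an assumed value, tune to taste

    f = open("output.log", "a")
    last_commit = time.monotonic()

    def write_line(line):
        global last_commit
        f.write(line + "\n")
        # Only flush/fsync when enough time has passed since the last commit.
        if time.monotonic() - last_commit >= COMMIT_INTERVAL:
            f.flush()                # Python's buffer -> OS
            os.fsync(f.fileno())     # OS -> disk
            last_commit = time.monotonic()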
Source: https://stackoverflow.com/questions/608316/is-there-commit-analog-in-python-for-writing-into-a-file