Question
I'm using an sqlite3 database to record data every second. The interface to it is provided by Flask-SQLAlchemy.
This works fine for a couple of months, but eventually (as the .db file approaches 8 GB) an error prevents any more data from being written to the database: Failed to commit: (sqlite3.OperationalError) disk I/O error
The journal file does not seem to be the issue here - if I restart the application and use the pragma journal_mode=TRUNCATE, the journal file is created but the disk I/O error persists.
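For reference, a pragma like this only applies to the connection that issues it, so when going through Flask-SQLAlchemy it has to be set on every connection the engine opens. A minimal sketch using a SQLAlchemy connect-event listener (this is an assumption about the setup, not the asker's actual code):

```python
from sqlalchemy import create_engine, event, text
from sqlalchemy.engine import Engine

# Run the pragma on every new SQLite DB-API connection any engine opens.
@event.listens_for(Engine, "connect")
def set_journal_mode(dbapi_connection, connection_record):
    cursor = dbapi_connection.cursor()
    cursor.execute("PRAGMA journal_mode=TRUNCATE")
    cursor.close()
```

Flask-SQLAlchemy's db.engine is an ordinary SQLAlchemy Engine, so a class-level listener like this covers it as well.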
Here's the .dbinfo output (obtained from the sqlite3.exe shell):
database page size: 1024
write format: 1
read format: 1
reserved bytes: 0
file change counter: 5200490
database page count: 7927331
freelist page count: 0
schema cookie: 12
schema format: 4
default cache size: 0
autovacuum top root: 0
incremental vacuum: 0
text encoding: 1 (utf8)
user version: 0
application id: 0
software version: 3008011
number of tables: 6
number of indexes: 7
number of triggers: 0
number of views: 0
schema size: 5630
data version: 2
However, this worked:
- I made a copy of the .db file (call them app.db and copy.db).
- I renamed app.db to orig.db.
- I renamed copy.db to app.db (so effectively, the copy became the app's database).
When I started my application again, it was able to write to the app.db file once more! So I could write to a copy I made of the database.
The drive is an SSD (Samsung 850 EVO mSATA). I wonder if that's got something to do with it? Does anyone have any ideas on how I can prevent this from happening again?
EDIT: I've used the sqlite3.exe CLI to execute an INSERT INTO command manually, and this actually completed successfully (and wrote to the disk). However, when I re-ran my Flask-SQLAlchemy interface to write to it, it still came up with the disk I/O error.
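A way to narrow this down further is to repeat the same write through Python's stdlib sqlite3 driver (which Flask-SQLAlchemy ultimately uses) but without the ORM layer. A minimal sketch; the database path and the table/column names (readings, value) are hypothetical and should be replaced with the real schema:

```python
import os
import sqlite3
import tempfile

# Hypothetical path: point this at a copy of the real app.db instead.
db_path = os.path.join(tempfile.gettempdir(), "app_copy.db")

conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE IF NOT EXISTS readings (value REAL)")
# If this commit raises sqlite3.OperationalError: disk I/O error,
# the problem is below SQLAlchemy; if it succeeds, the issue is in
# how the ORM's connection is configured.
conn.execute("INSERT INTO readings VALUES (?)", (1.23,))
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
conn.close()
```

If this succeeds where the ORM fails, the difference is in the connection (journal mode, locking, open flags) rather than the file itself.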
Answer 1:
UPDATE:
A colleague pointed out that this might be related to another question: https://stackoverflow.com/a/49506243/3274353
I strongly suspect now that this is a filesystem issue - in my system, the database file is being updated constantly alongside some other files which are also being written to.
So to reduce the amount of fragmentation, I'm now getting the database to pre-allocate some disk space, using the answer provided in the aforementioned question: https://stackoverflow.com/a/49506243/3274353
Something like this:
CREATE TABLE t(x);
INSERT INTO t VALUES(zeroblob(500*1024*1024)); -- 500 MB
DROP TABLE t;
To know whether this needs to be done, I use a call to the freelist_count pragma:
PRAGMA schema.freelist_count;
Return the number of unused pages in the database file.
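The two steps above (check the freelist, pre-allocate if it's low) can be sketched together in Python's stdlib sqlite3. The function name, threshold, and temporary table name are assumptions for illustration; note this relies on auto_vacuum being off (as in the .dbinfo above), since otherwise the dropped pages are returned to the OS instead of staying on the freelist:

```python
import sqlite3

def ensure_free_pages(db_path, min_free_pages=1000,
                      blob_bytes=500 * 1024 * 1024):
    """If the freelist is nearly empty, pre-allocate disk space by
    inserting and then dropping a zeroblob; with auto_vacuum off,
    the freed pages stay on the freelist for future writes."""
    conn = sqlite3.connect(db_path)
    try:
        free = conn.execute("PRAGMA freelist_count").fetchone()[0]
        if free < min_free_pages:
            conn.execute("CREATE TABLE IF NOT EXISTS _prealloc(x)")
            conn.execute("INSERT INTO _prealloc VALUES (zeroblob(?))",
                         (blob_bytes,))
            conn.execute("DROP TABLE _prealloc")
            conn.commit()
        # Report how many unused pages are now available.
        return conn.execute("PRAGMA freelist_count").fetchone()[0]
    finally:
        conn.close()
```

This could be called periodically (e.g. once a day) so the file grows in large contiguous chunks instead of one page at a time alongside other files.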
Source: https://stackoverflow.com/questions/57523341/sqlite3-disk-i-o-error-encountered-after-a-while-but-worked-after-using-copy-of