I love that PostgreSQL is crash resistant, as I don't want to spend time fixing a database. However, I'm sure there must be some things I can disable/modify so that inserts/updates will work faster, even if I lose a couple of records prior to a power outage / crash.
22 minutes for 1 million rows doesn't seem that slow, particularly if you have lots of indexes.
How are you doing the inserts? I take it you're using batch inserts, not one-row-per-transaction.
Does PG support some kind of bulk loading, like reading from a text file or supplying a stream of CSV data to it? If so, you'd probably be best advised to use that.
Please post the code you're using to load the 1M records, and people will advise.
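If the rows are currently going in as one INSERT per transaction, wrapping the whole load in a single transaction and using JDBC batching usually helps a lot. A minimal sketch, assuming a hypothetical table items(id, name) and placeholder connection URL/credentials:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsertExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection settings; adjust URL, user and password.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/testdb", "postgres", "secret")) {
            conn.setAutoCommit(false); // one transaction for the whole load, not one per row

            String sql = "INSERT INTO items (id, name) VALUES (?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (int i = 0; i < 1_000_000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row-" + i);
                    ps.addBatch();
                    if (i % 10_000 == 0) {
                        ps.executeBatch(); // send the accumulated rows to the server
                    }
                }
                ps.executeBatch(); // flush the remainder
            }
            conn.commit();
        }
    }
}
```

The PostgreSQL JDBC driver can also rewrite such batches into multi-row INSERTs if you add reWriteBatchedInserts=true to the connection URL, which typically reduces round trips further.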
EDIT: It seems the OP isn't interested in bulk inserts, but is doing a performance test of many single-row inserts. I will assume that each insert is in its own transaction.
You should also increase checkpoint_segments (e.g. to 32 or even higher), and most probably wal_buffers as well.
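As a rough illustration only (sensible values depend on your hardware and PostgreSQL version), the corresponding postgresql.conf entries might look like this:

```
# postgresql.conf -- illustrative values, change takes effect after a restart
checkpoint_segments = 32   # pre-9.5 setting; 9.5+ uses max_wal_size instead
wal_buffers = 16MB         # larger WAL buffer for write-heavy loads
```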
Edit: If this is a bulk load, you should use COPY to insert the rows. It is much faster than plain INSERTs.
If you need to use INSERT, did you consider using batching (for JDBC) or multi-row inserts?
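Regarding the COPY suggestion: if the data is already in CSV form, a rough sketch using the PostgreSQL JDBC driver's CopyManager could look like the following (the table name items, the file data.csv, and the connection details are all hypothetical):

```java
import java.io.FileReader;
import java.io.Reader;
import java.sql.Connection;
import java.sql.DriverManager;

import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class CopyLoadExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/testdb", "postgres", "secret");
             Reader csv = new FileReader("data.csv")) {

            // COPY streams all rows to the server in a single command, avoiding
            // per-statement overhead, which is why it beats plain INSERTs.
            CopyManager copyManager = conn.unwrap(PGConnection.class).getCopyAPI();
            long rows = copyManager.copyIn(
                    "COPY items (id, name) FROM STDIN WITH (FORMAT csv)", csv);
            System.out.println("Loaded " + rows + " rows");
        }
    }
}
```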