I have a script that generates tens of thousands of inserts into a Postgres DB through a custom ORM. As you can imagine, it's quite slow. This is used for development purposes.
For inserts that number in the hundreds to thousands, batch them in a single transaction:
begin;
insert into test (ts) values ('2010-11-29 22:32:01.383741-07');
insert into test (ts) values ('2010-11-29 22:32:01.737722-07');
-- ... up to ~10k inserts ...
commit;
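If your ORM lets you control the generated SQL, multi-row inserts cut the per-statement overhead further. A minimal sketch against the same test (ts) table as the COPY example below; batching a few hundred to a few thousand rows per statement is an assumption, not a hard rule:

insert into test (ts) values
    ('2010-11-29 22:32:01.383741-07'),
    ('2010-11-29 22:32:01.737722-07'),
    ('2010-11-29 22:32:02.000000-07');
-- ... continue with the next batch of rows in further statements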
For inserts in the millions, use COPY:
COPY test (ts) FROM stdin;
2010-11-29 22:32:01.383741-07
2010-11-29 22:32:01.737722-07
... 1 million rows ...
\.
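If the rows already live in a file, COPY can read it directly instead of streaming over stdin. A minimal sketch; the file path is hypothetical, and server-side COPY FROM a file needs superuser or pg_read_server_files rights, so from psql the client-side \copy variant is often easier:

-- server-side: the file must be readable by the Postgres server process
copy test (ts) from '/tmp/rows.tsv';
-- or, from psql, read the file on the client side:
-- \copy test (ts) from '/tmp/rows.tsv'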
Also, make sure any column used as a foreign key into another table is indexed if that other table is more than trivial in size.
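For example, a minimal sketch with hypothetical customers/orders tables, where orders.customer_id references customers and carries its own index:

create table customers (id bigint primary key, name text);
create table orders (
    id bigserial primary key,
    customer_id bigint not null references customers (id),
    ts timestamptz
);
-- index the referencing column so FK checks and joins don't have to scan orders
create index orders_customer_id_idx on orders (customer_id);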