I have a script that generates tens of thousands of inserts into a Postgres DB through a custom ORM. As you can imagine, it's quite slow. This is used for development purposes.
Try to do as much as possible in one request!
insert into my_table (col1, col2)
select
  unnest(array[row_1_col_value_1, row_2_col_value_1, row_3_col_value_1]),
  unnest(array[row_1_col_value_2, row_2_col_value_2, row_3_col_value_2]);
Note that unnest must go in a SELECT, not in VALUES, because Postgres does not allow set-returning functions there. This resembles the suggestion of @a_horse_with_no_name. The advantage of using unnest is that you can use query parameters that contain arrays!
insert into my_table (col1, col2)
select unnest(:col_values_1), unnest(:col_values_2);
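A minimal sketch of how this looks from Python, assuming psycopg2 as the driver (the table name, column types, and connection string are illustrative): the rows are transposed into per-column lists, which psycopg2 adapts to Postgres arrays, so the whole batch goes over in one round trip.

```python
# Transpose row tuples into one list per column.
rows = [("a", 1), ("b", 2), ("c", 3)]  # hypothetical data
col_values_1, col_values_2 = map(list, zip(*rows))

# One parameterized INSERT; the ::text[] / ::int[] casts are assumptions
# matching the hypothetical column types.
sql = """
insert into my_table (col1, col2)
select unnest(%(col_values_1)s::text[]),
       unnest(%(col_values_2)s::int[])
"""

# With a live connection (not executed here):
# import psycopg2
# conn = psycopg2.connect("dbname=mydb")  # assumed DSN
# with conn, conn.cursor() as cur:
#     cur.execute(sql, {"col_values_1": col_values_1,
#                       "col_values_2": col_values_2})
```

The key point is that the number of query parameters stays constant (one array per column) no matter how many rows you insert.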
By collapsing three insert statements into one, you save more than 50% of the execution time. And by passing query parameters with 2000 values per insert, I got a speedup of roughly 150x in my application.