Postgres insert optimization

一个人的身影 2021-02-04 18:25

I have a script that generates tens of thousands of inserts into a Postgres DB through a custom ORM. As you can imagine, it's quite slow. This is used for development purposes.

7 Answers
  •  伪装坚强ぢ
    2021-02-04 18:47

    The fastest way to insert data would be the COPY command. But that requires a flat file as its input. I guess generating a flat file is not an option.
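
    That said, if the script happens to run on the JVM, the pgJDBC driver can feed COPY from an in-memory stream, so a flat file on disk isn't strictly required. A minimal sketch, assuming pgJDBC is on the classpath and with made-up connection details and table/column names:

    import java.io.StringReader;
    import java.sql.Connection;
    import java.sql.DriverManager;

    import org.postgresql.PGConnection;
    import org.postgresql.copy.CopyManager;

    public class CopyExample {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details -- adjust for your environment.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/mydb", "user", "secret")) {
                CopyManager copy = conn.unwrap(PGConnection.class).getCopyAPI();
                // COPY ... FROM STDIN reads rows from the supplied Reader,
                // so no intermediate file is written to disk.
                String rows = "1,foo\n2,bar\n3,baz\n";
                long inserted = copy.copyIn(
                        "COPY my_table (col1, col2) FROM STDIN WITH (FORMAT csv)",
                        new StringReader(rows));
                System.out.println(inserted + " rows copied");
            }
        }
    }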

    Don't commit too often, especially do not run this with autocommit enabled. "Tens of thousands" sounds like a single commit at the end would be just right.
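
    A minimal sketch of what that looks like at the JDBC level (the rows list and column names are placeholders, not from the question):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    class SingleCommitLoad {
        static void load(Connection conn, List<String[]> rows) throws SQLException {
            conn.setAutoCommit(false);  // one transaction for the whole run
            try (PreparedStatement ps = conn.prepareStatement(
                    "insert into my_table (col1, col2) values (?, ?)")) {
                for (String[] row : rows) {
                    ps.setString(1, row[0]);
                    ps.setString(2, row[1]);
                    ps.executeUpdate();  // still one statement per row here
                }
            }
            conn.commit();               // a single commit at the end
        }
    }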

    If you can convince your ORM to make use of Postgres' multi-row insert, that would speed things up as well.

    This is an example of a multi-row insert:

    insert into my_table (col1, col2)
    values
    (row1_col_value1, row1_col_value2),
    (row2_col_value1, row2_col_value2),
    (row3_col_value1, row3_col_value2);
    

    If you can't generate the above syntax and you are using Java, make sure you are using batched statements instead of single-statement inserts (see the sketch below; other DB layers may offer something similar).
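
    For reference, a hedged sketch of JDBC batching (table and column names are invented); addBatch() queues the bound parameters and executeBatch() sends them in one round trip:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    class BatchedLoad {
        static void load(Connection conn, List<String[]> rows) throws SQLException {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "insert into my_table (col1, col2) values (?, ?)")) {
                for (String[] row : rows) {
                    ps.setString(1, row[0]);
                    ps.setString(2, row[1]);
                    ps.addBatch();      // queue instead of executing immediately
                }
                ps.executeBatch();      // send the whole batch at once
            }
            conn.commit();
        }
    }

    In recent versions of the pgJDBC driver, adding reWriteBatchedInserts=true to the connection URL makes the driver rewrite such batches into multi-row inserts like the one shown above.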

    Edit:

    jmz's post inspired me to add something:

    You might also see an improvement when you increase wal_buffers to some bigger value (e.g. 8MB) and checkpoint_segments (e.g. 16).
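
    For reference, those settings go in postgresql.conf; note that checkpoint_segments was removed in PostgreSQL 9.5 in favor of max_wal_size:

    # postgresql.conf -- example values from this answer
    wal_buffers = 8MB           # larger WAL buffer before write-out
    checkpoint_segments = 16    # pre-9.5 only; on 9.5+ tune max_wal_size instead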
