How can I speed up update/replace operations in PostgreSQL?

迷失自我 2021-02-01 06:13

We have a rather specific application that uses PostgreSQL 8.3 as a storage backend (using Python and psycopg2). The operations we perform on the important tables are in the ma

6 Answers
  • 2021-02-01 06:51

    Sounds like you'd see benefits from tuning WAL (Write-Ahead Logging), and from a UPS-backed write cache so your updates can be cached between disk writes.

    wal_buffers: This setting decides the number of buffers the WAL (write-ahead log) can have. If your database has many write transactions, setting this value a bit higher than the default could result in better use of disk space. Experiment and decide. A good start would be around 32-64 buffers, corresponding to 256-512 kB of memory.

    http://www.varlena.com/GeneralBits/Tidbits/perf.html
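
    A hedged sketch of what that could look like in postgresql.conf (the values are illustrative starting points for 8.3, not tested recommendations):

    # postgresql.conf -- illustrative only; measure before adopting
    wal_buffers = 512kB    # the 64-buffer upper end suggested above (64 x 8kB pages)

    Note that wal_buffers can only be changed with a server restart.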

  • 2021-02-01 06:54

    For updates, you can lower the fillfactor for the tables and the indexes, and that might help; see the sketch after the links below.

    http://www.postgresql.org/docs/current/static/sql-createtable.html

    http://www.postgresql.org/docs/current/static/sql-createindex.html
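
    As a rough sketch (the table and index names here are hypothetical, not from the question):

    -- Leave ~30% of each heap and index page free, so updated row versions
    -- can stay on the same page (8.3's HOT updates benefit from this).
    CREATE TABLE item (key text PRIMARY KEY, value text) WITH (fillfactor = 70);
    CREATE INDEX item_value_idx ON item (value) WITH (fillfactor = 70);

    -- On an existing table the setting only affects future writes;
    -- CLUSTER rewrites the existing pages using the new fillfactor.
    ALTER TABLE item SET (fillfactor = 70);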

  • 2021-02-01 07:00

    I had a similar situation a few months ago and ended up getting the largest speed boost from a tuned chunk/transaction size. You may also want to watch the log for checkpoint warnings during the test and tune the checkpoint settings accordingly.
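
    The warning to look for reads along the lines of "checkpoints are occurring too frequently". A sketch of the relevant postgresql.conf knobs in 8.3 (values illustrative):

    # postgresql.conf -- space checkpoints out for write-heavy batches
    checkpoint_warning = 30s            # log a warning when checkpoints come closer than this
    checkpoint_segments = 16            # default is 3; too few segments forces frequent checkpoints
    checkpoint_completion_target = 0.9  # new in 8.3; spreads checkpoint I/O over more time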

  • 2021-02-01 07:04

    In your insert_or_replace, try this:

    WHERE EXISTS(SELECT 1 FROM item WHERE key=NEW.key LIMIT 1)
    

    instead of

    WHERE EXISTS(SELECT 1 FROM item WHERE key=NEW.key)
    

    As noted in the comments, that will probably do nothing: the planner already stops at the first matching row inside an EXISTS subquery, so the LIMIT 1 is redundant. All I have to add, then, is that you can always speed up INSERT/UPDATE performance by removing indexes. This is likely not something you want to do unless you find your table is overindexed, but that should at least be checked out.
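
    One way to check that, using the statistics views present in 8.3 (the table name is hypothetical):

    -- Indexes that are never scanned for reads only cost you on writes.
    -- (Don't drop indexes that back PRIMARY KEY or UNIQUE constraints.)
    SELECT indexrelname, idx_scan
    FROM pg_stat_user_indexes
    WHERE relname = 'item'
    ORDER BY idx_scan;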

  • 2021-02-01 07:07

    In Oracle, locking the table would definitely help. You might want to try that with PostgreSQL, too.
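
    A minimal sketch of that idea in PostgreSQL (the table name is hypothetical, and whether it helps here is untested):

    BEGIN;
    -- One table-level lock up front, instead of acquiring row-level
    -- locks piecemeal as the batch progresses. EXCLUSIVE mode still
    -- allows concurrent reads, but blocks other writers.
    LOCK TABLE item IN EXCLUSIVE MODE;
    -- ... run the batched UPDATE/INSERT statements here ...
    COMMIT;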

  • 2021-02-01 07:10

    The usual way I do these things in Postgres is: load the raw data matching the target table into a temp table (no constraints) using COPY, merge (the fun part), profit.
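
    A minimal sketch of that pattern (table names and file path are hypothetical; 8.3 has no built-in MERGE or ON CONFLICT, so the merge is a hand-written update-then-insert):

    BEGIN;
    CREATE TEMP TABLE item_stage (LIKE item) ON COMMIT DROP;
    COPY item_stage FROM '/path/to/data.csv' WITH CSV;  -- or COPY ... FROM STDIN via psycopg2

    -- Update the rows that already exist...
    UPDATE item
    SET value = s.value
    FROM item_stage s
    WHERE item.key = s.key;

    -- ...then insert the ones that don't.
    INSERT INTO item (key, value)
    SELECT s.key, s.value
    FROM item_stage s
    WHERE NOT EXISTS (SELECT 1 FROM item WHERE item.key = s.key);
    COMMIT;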

    I wrote a merge_by_key function specifically for these situations:

    http://mbk.projects.postgresql.org/

    The docs aren't terribly friendly, but I'd suggest giving it a good look.
