Postgres UPDATE with ORDER BY, how to do it?


Question


I need to do a Postgres update on a collection of records & I'm trying to prevent a deadlock which appeared in the stress tests.

The typical resolution to this is to update records in a certain order, by ID for example - but it seems that Postgres doesn't allow ORDER BY for UPDATE.

Assuming I need to do an update, for example:

UPDATE BALANCES SET BALANCE = 0 WHERE ID IN (SELECT ID FROM some_function() ORDER BY ID);

this results in deadlocks when 200 such queries run concurrently. What can I do?

I'm looking for a general solution, not case-specific workarounds like those in "UPDATE with ORDER BY".

It feels like there must be a better solution than writing a cursor function. Also, if there is no better way, what would that cursor function optimally look like? Updating record by record?


Answer 1:


As far as I know, there's no way to accomplish this directly through the UPDATE statement; the only way to guarantee lock order is to explicitly acquire locks with a SELECT ... ORDER BY ID FOR UPDATE, e.g.:

UPDATE Balances
SET Balance = 0
WHERE ID IN (
  SELECT ID FROM Balances
  WHERE ID IN (SELECT ID FROM some_function())
  ORDER BY ID
  FOR UPDATE
)

This has the downside of repeating the ID index lookup on the Balances table. In your simple example, you can avoid this overhead by fetching the physical row address (represented by the ctid system column) during the locking query, and using that to drive the UPDATE:

UPDATE Balances
SET Balance = 0
WHERE ctid = ANY(ARRAY(
  SELECT ctid FROM Balances
  WHERE ID IN (SELECT ID FROM some_function())
  ORDER BY ID
  FOR UPDATE
))

(Be careful when using ctids, as the values are transient. We're safe here, as the locks will block any changes.)

Unfortunately, the planner will only utilise the ctid in a narrow set of cases (you can tell if it's working by looking for a "Tid Scan" node in the EXPLAIN output). To handle more complicated queries within a single UPDATE statement, e.g. if your new balance was being returned by some_function() alongside the ID, you'll need to fall back to the ID-based lookup:

UPDATE Balances
SET Balance = Locks.NewBalance
FROM (
  SELECT Balances.ID, some_function.NewBalance
  FROM Balances
  JOIN some_function() ON some_function.ID = Balances.ID
  ORDER BY Balances.ID
  FOR UPDATE OF Balances
) Locks
WHERE Balances.ID = Locks.ID

If the performance overhead is an issue, you'd need to resort to using a cursor, which would look something like this:

DO $$
DECLARE
  c CURSOR FOR
    SELECT Balances.ID, some_function.NewBalance
    FROM Balances
    JOIN some_function() ON some_function.ID = Balances.ID
    ORDER BY Balances.ID
    FOR UPDATE OF Balances;  -- lock only the Balances rows, in ID order
BEGIN
  FOR r IN c LOOP
    UPDATE Balances
    SET Balance = r.NewBalance
    WHERE CURRENT OF c;  -- update the row the cursor is currently positioned on
  END LOOP;
END
$$;



Answer 2:


In general, concurrency is difficult, especially with 200 concurrent statements (I'm assuming you don't only run SELECT queries) or transactions (every single statement is implicitly wrapped in a transaction if it isn't already running inside one).

The general solution concepts are (a combination of) these:

  1. Be aware that deadlocks can happen: catch them in the application by checking for SQLSTATE error class 40, in particular 40P01 (deadlock_detected), and retry the transaction (a retry sketch follows the lock-ordering example below).

  2. Reserve locks with SELECT ... FOR UPDATE, and avoid explicit table-level LOCK as long as possible. Locks force other transactions to wait for the lock to be released, which hurts concurrency, but they can prevent transactions from running into deadlocks. Check the deadlock example in chapter 13 of the PostgreSQL documentation, especially the one in which transaction A waits for B and B waits for A (the bank-account example).

  3. Choose a different isolation level if possible, for example a weaker one like READ COMMITTED. Be aware of lost updates in READ COMMITTED mode; prevent them with REPEATABLE READ.

  4. Write the statements that take locks in the same order in EVERY transaction, for example by table name alphabetically:

LOCK / USE A  -- Transaction 1 
LOCK / USE B  -- Transaction 1
LOCK / USE C  -- Transaction 1
-- D not used -- Transaction 1

-- A not used -- Transaction 2
LOCK / USE B  -- Transaction 2
-- C not used -- Transaction 2
LOCK / USE D  -- Transaction 2

with the general locking order A, B, C, D. This way, the transactions can interleave in any relative order and still have a good chance of not deadlocking (depending on your statements, you may still run into other serialization issues). Each transaction's statements run in the order it issues them, but it may happen that transaction 1 runs its first two, then transaction 2 runs its first one, then transaction 1 finishes, and finally transaction 2 finishes.
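
As for point 1, the retry is usually done in the application, which rolls back and re-runs the whole transaction when it receives SQLSTATE 40P01. The same idea can be sketched server-side in PL/pgSQL, because an EXCEPTION block rolls back to an implicit savepoint when the error is caught. A minimal sketch, assuming a fixed new balance and a small, bounded number of retries:

DO $$
DECLARE
  attempts int := 0;
BEGIN
  LOOP
    BEGIN
      -- the statement that may deadlock
      UPDATE Balances
      SET Balance = 0
      WHERE ID IN (SELECT ID FROM some_function());
      EXIT;  -- success, leave the retry loop
    EXCEPTION WHEN deadlock_detected THEN  -- SQLSTATE 40P01
      attempts := attempts + 1;
      IF attempts >= 3 THEN
        RAISE;  -- give up and propagate the error
      END IF;
      PERFORM pg_sleep(0.1 * attempts);  -- small backoff before retrying
    END;
  END LOOP;
END
$$;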

Also, you should realise that a statement involving multiple rows is not executed atomically in a concurrent situation. In other words, if you have two statements A and B involving multiple rows, then they can be executed in this order:

a1 b1 a2 a3 a4 b2 b3     

but NOT necessarily as a block of a's followed by a block of b's. The same applies to a statement with a sub-query. Have you looked at the query plans using EXPLAIN?
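
For example, a hypothetical plan inspection for the statement from the question (note that EXPLAIN ANALYZE actually executes the UPDATE, so wrap it in a transaction and roll it back):

BEGIN;

EXPLAIN (ANALYZE, BUFFERS)
UPDATE Balances
SET Balance = 0  -- placeholder value, as in the examples above
WHERE ID IN (SELECT ID FROM some_function() ORDER BY ID);

ROLLBACK;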

In your case, you can try

UPDATE BALANCES
SET BALANCE = 0  -- your new value here
WHERE ID IN (
  SELECT B.ID FROM BALANCES B
  WHERE B.ID IN (SELECT ID FROM some_function())
  FOR UPDATE  -- LOCK using FOR UPDATE;
  -- other transactions will WAIT / BLOCK temporarily on concurrent write access
);

If possible given what you want to do, you can also use SELECT ... FOR UPDATE SKIP LOCKED, which skips already-locked rows so you get back the concurrency that is otherwise lost by WAITing for another transaction to release its lock (plain FOR UPDATE). But this will not apply the UPDATE to the locked rows, which your application logic might require, so run it again later on (see point 1).
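
A minimal sketch of that approach, assuming the same Balances table and a fixed new value; each concurrent worker only touches the rows nobody else has locked, so it never waits and never deadlocks, and the skipped rows must be picked up by a later run:

UPDATE BALANCES
SET BALANCE = 0
WHERE ID IN (
  SELECT B.ID FROM BALANCES B
  WHERE B.ID IN (SELECT ID FROM some_function())
  FOR UPDATE SKIP LOCKED  -- skip rows already locked by other transactions instead of waiting
);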

Also read up on the LOST UPDATE problem and on SKIP LOCKED. A queue might be an idea in your case, which is explained well in the SKIP LOCKED reference, although relational DBMSs are not meant to be queues.

HTH



Source: https://stackoverflow.com/questions/44660368/postgres-update-with-order-by-how-to-do-it
