PostgreSQL row read lock

执笔经年 2021-02-10 22:32

Let’s say I have a table called Withdrawals (id, amount, user_id, status).

Whenever a withdrawal is initiated, this is the flow:

  • Verify if the user has sufficient funds
1 Answer
  • 2021-02-10 22:59

    This is a property of transaction isolation. There is a lot written about it, and I would highly recommend the overview in Designing Data-Intensive Applications; I found it the most helpful description for improving my own understanding.

    The default Postgres isolation level is READ COMMITTED, which allows each of these concurrent transactions to see the same committed state (funds available) even though they should be dependent on one another.
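
    Here is a minimal sketch of that race, using Python/psycopg2 with two connections standing in for two concurrent requests. The table and column names (balances, withdrawals, balance, user_id) are assumptions for illustration, not from the original question:

        import psycopg2

        # Two connections simulate two concurrent withdrawal requests
        # (hypothetical schema, for illustration only).
        conn_a = psycopg2.connect("dbname=app")
        conn_b = psycopg2.connect("dbname=app")
        cur_a, cur_b = conn_a.cursor(), conn_b.cursor()

        # Under READ COMMITTED both transactions read the same committed
        # balance, so both "sufficient funds" checks pass for, say, 100.
        cur_a.execute("SELECT balance FROM balances WHERE user_id = %s", (1,))
        cur_b.execute("SELECT balance FROM balances WHERE user_id = %s", (1,))

        # Each then debits and records a withdrawal. Under READ COMMITTED the
        # second UPDATE just sees (or waits for) the first one's committed
        # result and applies its own debit, so both commit and the account
        # is overdrawn.
        for cur, conn in ((cur_a, conn_a), (cur_b, conn_b)):
            cur.execute(
                "UPDATE balances SET balance = balance - 100 WHERE user_id = %s",
                (1,))
            cur.execute(
                "INSERT INTO withdrawals (user_id, amount, status) "
                "VALUES (%s, 100, 'done')", (1,))
            conn.commit()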

    One way to address this would be to run each of these transactions at the SERIALIZABLE isolation level.

    SERIALIZABLE: All statements of the current transaction can only see rows committed before the first query or data-modification statement was executed in this transaction. If a pattern of reads and writes among concurrent serializable transactions would create a situation which could not have occurred for any serial (one-at-a-time) execution of those transactions, one of them will be rolled back with a serialization_failure error.

    This should enforce the correctness of your application at a cost to availability, i.e. in this case the second transaction will not be allowed to modify the records and would be rejected, which would require a retry. For a POC or a low-traffic application this is usually a perfectly acceptable first step, as you can ensure correctness right now.
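
    As a sketch of what that retry might look like (again assuming the hypothetical balances/withdrawals tables above), psycopg2 raises SerializationFailure when Postgres aborts one of the conflicting transactions:

        import psycopg2
        from psycopg2 import errors
        from psycopg2.extensions import ISOLATION_LEVEL_SERIALIZABLE

        def withdraw(conn, user_id, amount, max_retries=5):
            conn.set_isolation_level(ISOLATION_LEVEL_SERIALIZABLE)
            for _ in range(max_retries):
                try:
                    with conn.cursor() as cur:
                        cur.execute(
                            "SELECT balance FROM balances WHERE user_id = %s",
                            (user_id,))
                        (balance,) = cur.fetchone()
                        if balance < amount:
                            conn.rollback()
                            return False  # insufficient funds
                        cur.execute(
                            "UPDATE balances SET balance = balance - %s "
                            "WHERE user_id = %s", (amount, user_id))
                        cur.execute(
                            "INSERT INTO withdrawals (user_id, amount, status) "
                            "VALUES (%s, %s, 'done')", (user_id, amount))
                    conn.commit()
                    return True
                except errors.SerializationFailure:
                    conn.rollback()  # aborted by Postgres; try again
            raise RuntimeError("could not serialize withdrawal after retries")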


    Also, in the book referenced above I think there was an example of how ATMs handle availability. They allow for this race condition and let the user overdraw if they are unable to connect to the centralized bank, but bound the maximum withdrawal to minimize the blast radius!


    Another architectural way to address this is to take the transactions offline and make them asynchronous, so that each user-invoked transaction is published to a queue; by having a single consumer of the queue you naturally avoid any race conditions. The tradeoff here is similar: there is a fixed throughput available from a single worker, but it does help to address the correctness issue for right now :P
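
    One way to sketch that single consumer without extra infrastructure is to use the withdrawals table itself as the queue: requests are inserted with status 'pending', and one worker drains them in order (names are again illustrative assumptions):

        import time
        import psycopg2

        def run_worker(dsn="dbname=app"):
            conn = psycopg2.connect(dsn)
            while True:
                with conn.cursor() as cur:
                    cur.execute(
                        "SELECT id, user_id, amount FROM withdrawals "
                        "WHERE status = 'pending' ORDER BY id LIMIT 1")
                    row = cur.fetchone()
                    if row is None:
                        conn.commit()
                        time.sleep(1)  # queue empty; poll again
                        continue
                    wid, user_id, amount = row
                    # With exactly one worker there is no concurrent
                    # check-then-debit, so READ COMMITTED is enough here.
                    cur.execute(
                        "SELECT balance FROM balances WHERE user_id = %s",
                        (user_id,))
                    (balance,) = cur.fetchone()
                    status = 'done' if balance >= amount else 'rejected'
                    if status == 'done':
                        cur.execute(
                            "UPDATE balances SET balance = balance - %s "
                            "WHERE user_id = %s", (amount, user_id))
                    cur.execute(
                        "UPDATE withdrawals SET status = %s WHERE id = %s",
                        (status, wid))
                conn.commit()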


    Locking across machines (for example, using Redis alongside Postgres/gRPC) is called distributed locking, and there is a good amount written about it: https://martin.kleppmann.com/2016/02/08/how-to-do-distributed-locking.html
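
    For reference, the basic single-node Redis pattern discussed in that post looks roughly like this with redis-py; note that the article's whole point is that this naive version has failure modes (lock expiry during a process pause, no fencing tokens), so treat it as a sketch, and the key names here are illustrative assumptions:

        import uuid
        import redis

        r = redis.Redis()

        def acquire_lock(name, ttl_seconds=10):
            token = str(uuid.uuid4())
            # NX: only set if the key is absent; EX: auto-expire so a
            # crashed holder cannot keep the lock forever.
            if r.set(f"lock:{name}", token, nx=True, ex=ttl_seconds):
                return token
            return None

        def release_lock(name, token):
            # Compare-and-delete atomically via Lua; deleting blindly could
            # release a lock that expired and was acquired by someone else.
            script = """
            if redis.call('get', KEYS[1]) == ARGV[1] then
                return redis.call('del', KEYS[1])
            end
            return 0
            """
            return r.eval(script, 1, f"lock:{name}", token)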
