How is Hibernate deciding order of update/insert/delete

半阙折子戏 2020-12-05 02:30

Let's first forget about Hibernate. Assume that I have two tables, A & B. Two transactions update the same records in these two tables, but txn 1 updates B and then A, while txn 2 updates A and then B, so the two can deadlock.

2 Answers
  • 2020-12-05 02:57

    Regarding the first example: this kind of thing is handled by the database (read more about the transaction isolation levels and locking strategies of your database). There are many different ways it can be handled.

    As for Hibernate, the javadoc for org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(EventSource) says:

    Execute all SQL and second-level cache updates, in a special order so that foreign-key constraints cannot be violated:

    1. Inserts, in the order they were performed
    2. Updates
    3. Deletion of collection elements
    4. Insertion of collection elements
    5. Deletes, in the order they were performed

    I assume that this is the only reordering of the executed SQL queries that Hibernate makes. The rest of the problem is handled by the database.
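    As an illustration, here is a minimal JPA sketch (the Account entity, its fields and the ids are made up for this example) that makes the ordering visible: even though remove() is called before persist(), the flush issues the INSERT before the DELETE, because inserts come before deletes in the list above.

        // Assumes a configured persistence unit and a made-up Account entity;
        // names and values are illustrative only.
        EntityManager em = entityManagerFactory.createEntityManager();
        em.getTransaction().begin();

        Account stale = em.find(Account.class, 1L);
        em.remove(stale);              // queued as a DELETE

        Account fresh = new Account();
        fresh.setEmail("user@example.com");
        em.persist(fresh);             // queued as an INSERT

        // With hibernate.show_sql=true the flush logs the INSERT before the
        // DELETE, because inserts come first in the order quoted above.
        em.getTransaction().commit();  // the flush happens here
        em.close();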

  • 2020-12-05 03:00

    The problem you describe is not handled by the database, and in my experience it is not entirely handled by Hibernate either.

    You have to take explicit steps to avoid it being a problem.

    Hibernate does some of the work for you. As per the previous answer, Hibernate ensures that within a single flush the inserts, updates and deletes are ordered so that they can be applied without violating foreign-key constraints. See performExecutions(EventSource session) in the AbstractFlushingEventListener class:

    Execute all SQL (and second-level cache updates) in a special order so that foreign-key constraints cannot be violated:

    1. Inserts, in the order they were performed
    2. Updates
    3. Deletion of collection elements
    4. Insertion of collection elements
    5. Deletes, in the order they were performed

    If you have unique constraints, it is very important to know this order, especially when you want to replace a one-to-many child (delete the old one, insert a new one) and both the old and the new child carry the same unique value (e.g. the same email address). In that case you can either update the old entry in place instead of deleting and inserting, or flush right after the delete and only then insert (a sketch of the flush-after-delete variant follows). For a more detailed example you can check this article.
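    A rough sketch of that flush-after-delete workaround, assuming an open EntityManager em inside an active transaction and a made-up Subscription entity with a unique email column:

        // Subscription, oldId and the email column are illustrative assumptions.
        Subscription old = em.find(Subscription.class, oldId);
        String email = old.getEmail();

        em.remove(old);
        em.flush();                    // push the DELETE out now; otherwise the
                                       // flush would run the INSERT first and
                                       // violate the unique constraint on email

        Subscription replacement = new Subscription();
        replacement.setEmail(email);   // same unique value as the row just deleted
        em.persist(replacement);
        // the commit (or the next flush) writes the INSERT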

    Note that it does not specify the order of the updates. Examining the Hibernate code leads me to think that the update order depends on the order in which the entities were added to the persistence context, NOT the order in which they were updated. That might be predictable in your code, but reading the Hibernate code did not leave me confident enough to rely on that ordering.

    There are three solutions I can think of:

    1. Try setting hibernate.order_updates to true. This should help avoid deadlocks when multiple rows in the same table are being updated, but won't help with deadlocks across multiple tables.
    2. Make your transactions take a PESSIMISTIC_WRITE lock on one of the entities before doing any updates (see the first sketch after this list). Which entity you lock depends on your specific situation, but as long as every transaction that risks the deadlock locks the same entity first, the later transaction blocks until the lock can be obtained.
    3. Write your code to catch deadlocks when they occur and retry in a sensible fashion (see the second sketch after this list). The component managing the deadlock retry must be located outside the current transaction boundary, because the failing session must be closed and the associated transaction rolled back. In this article you can find an example of an automatic retrying AOP Aspect.
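    A rough sketch of option 2, assuming plain JPA with an open EntityManager em; Customer is a made-up entity that both transactions agree to lock first, and any consistently chosen row would do:

        em.getTransaction().begin();

        // The second transaction blocks here until the first one commits or
        // rolls back, so the later updates to A and B can no longer interleave.
        Customer anchor = em.find(Customer.class, customerId,
                javax.persistence.LockModeType.PESSIMISTIC_WRITE);

        // ... update rows in table A and table B in whatever order is convenient ...

        em.getTransaction().commit();  // releases the row lock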
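    And a rough sketch of option 3 without any framework support; the retry loop sits outside the transaction, each attempt gets a fresh EntityManager, and updateTablesAandB is a placeholder for the real unit of work:

        int maxAttempts = 3;
        for (int attempt = 1; ; attempt++) {
            EntityManager em = emf.createEntityManager();   // emf = EntityManagerFactory
            try {
                em.getTransaction().begin();
                updateTablesAandB(em);             // placeholder for the real work
                em.getTransaction().commit();
                break;                             // success, stop retrying
            } catch (javax.persistence.PersistenceException e) {
                if (em.getTransaction().isActive()) {
                    em.getTransaction().rollback();
                }
                // In real code, inspect the cause and only retry genuine deadlocks.
                if (attempt >= maxAttempts) {
                    throw e;
                }
            } finally {
                em.close();
            }
        }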