I am using Hibernate in my project and I am getting random Apparent Deadlocks for very simple database operations.
Here is one of the stack traces: https://gist.github.
Because the deadlocks happen so frequently, it looks like some of the threads of the application are holding locks for an extended period of time.
Each thread in the application will use its own database connection/connections while accessing the database, so from the point of view of the database two threads are two distinct clients that compete for database locks.
If a thread holds locks for an extended period of time and acquires them in a certain order, and a second thread comes along acquiring the same locks but in a different order, a deadlock is bound to occur (see here for details on this frequent deadlock cause).
Also, deadlocks are occurring in read operations, which means that some threads are acquiring read locks as well. This happens if the threads are running transactions in the REPEATABLE_READ or SERIALIZABLE isolation level.
To solve this, try searching for usages of Isolation.REPEATABLE_READ and Isolation.SERIALIZABLE in the project, to see if either is being used.
As an alternative, use the default READ_COMMITTED isolation level and annotate the entities with @Version, to handle concurrency using optimistic locking instead.
Also try to identify long-running transactions. This sometimes happens when @Transactional is placed in the wrong place and wraps, for example, the processing of a whole file in a batch job instead of doing a transaction per line, as in the sketch below.
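A sketch of such a split, with each line committed in its own short transaction (the service and method names are hypothetical; the per-line method lives in a separate bean so the @Transactional proxy actually applies):

import java.util.List;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class FileImportService {

    private final LineImportService lineImportService;

    public FileImportService(LineImportService lineImportService) {
        this.lineImportService = lineImportService;
    }

    // intentionally NOT @Transactional: iterating the file holds no locks
    public void importFile(List<String> lines) {
        for (String line : lines) {
            lineImportService.importLine(line); // one short transaction per line
        }
    }
}

@Service
class LineImportService {

    @Transactional // locks are released at each per-line commit
    public void importLine(String line) {
        // parse the line and persist the resulting entity
    }
}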
This is a log4j configuration to log the creation/deletion of entity managers and transaction begin/commit/rollback:
<!-- spring entity manager and transactions -->
<logger name="org.springframework.orm.jpa" additivity="false">
    <level value="debug" />
    <appender-ref ref="ConsoleAppender" />
</logger>
<logger name="org.springframework.transaction" additivity="false">
    <level value="debug" />
    <appender-ref ref="ConsoleAppender" />
</logger>
Update queries are possible via native queries or JPQL.
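For example, a minimal sketch of a JPQL bulk update (the Order entity, its fields and the service class are hypothetical):

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    public int markOrdersAsShipped(List<Long> orderIds) {
        // bulk update executed directly in the database,
        // bypassing the persistence context
        return entityManager.createQuery(
                "update Order o set o.status = 'SHIPPED' where o.id in :ids")
            .setParameter("ids", orderIds)
            .executeUpdate();
    }
}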
In methods without @Transactional, queries will be executed in their own entity manager and return only detached entities, as the session is closed immediately after the query is run. So lazy initialization exceptions in methods without @Transactional are normal. You can set them to @Transactional(readOnly=true) as well.
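For example, a sketch of a read-only transactional method that can still initialize lazy associations because the session stays open until the method returns (the Customer/Order entities are hypothetical):

import java.util.List;
import java.util.stream.Collectors;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class CustomerQueryService {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional(readOnly = true)
    public List<String> findOrderNumbers(Long customerId) {
        Customer customer = entityManager.find(Customer.class, customerId);
        // the persistence context is still open here, so the lazy orders
        // collection can be loaded without a LazyInitializationException
        return customer.getOrders().stream()
                       .map(Order::getNumber)
                       .collect(Collectors.toList());
    }
}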
This is the error with MySQL.
The easiest way to resolve and avoid deadlocks is to reorder the DB operations happening in the application.
A deadlock mostly occurs when more than one resource/connection tries to acquire more than one lock in opposite orders, as below:
connection 1: locks key(1), locks key(2);
connection 2: locks key(2), locks key(1);
In the scenario where both connections execute at the same time, connection 1 will acquire the lock on key(1), and connection 2 the lock on key(2). After that, each connection will wait for the other to release its lock. This results in a deadlock.
But with a little tweak in the order of the operations, the deadlock can be avoided:
connection 1: locks key(1), locks key(2);
connection 2: locks key(1), locks key(2);
The above reordering is deadlock-proof.
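In application code, the same idea can be enforced by always locking rows in a fixed key order. A minimal sketch (the Account entity, the amounts map and the pessimistic lock mode are assumptions for illustration):

import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AccountService {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    public void debitAccounts(Map<Long, BigDecimal> amountsByAccountId) {
        List<Long> ids = new ArrayList<>(amountsByAccountId.keySet());
        Collections.sort(ids); // every transaction locks the rows in the same order
        for (Long id : ids) {
            Account account = entityManager.find(
                    Account.class, id, LockModeType.PESSIMISTIC_WRITE);
            account.setBalance(account.getBalance()
                                      .subtract(amountsByAccountId.get(id)));
        }
    }
}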
Another way to avoid deadlocks is to have a transaction management mechanism in place. Transaction management by Spring is almost plug-and-play. Moreover, you can have a deadlock retry policy; an interesting deadlock retry via Spring AOP can be found here. This way you just need to add the annotation to the methods you want to retry in case of a deadlock.
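If you don't want to pull in an AOP aspect, the same idea can be sketched as a plain retry loop around the transactional call. The class name and the attempt count are assumptions, and the operation passed in must open a new transaction on every call for the retry to work:

import org.springframework.dao.CannotAcquireLockException;
import org.springframework.dao.DeadlockLoserDataAccessException;

public final class DeadlockRetry {

    private static final int MAX_ATTEMPTS = 3;

    public static void run(Runnable transactionalOperation) {
        for (int attempt = 1; ; attempt++) {
            try {
                transactionalOperation.run(); // must start a fresh transaction each time
                return;
            } catch (DeadlockLoserDataAccessException | CannotAcquireLockException e) {
                // this transaction was chosen as the deadlock victim and rolled
                // back by the database, so it is safe to simply run it again
                if (attempt >= MAX_ATTEMPTS) {
                    throw e;
                }
            }
        }
    }
}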
For more debug output on deadlocks, to find out which statements are suspicious, try running the SHOW ENGINE INNODB STATUS diagnostics in MySQL. Also, you can have a look at How to Cope with Deadlocks.
UPDATE: A scenario for deadlocks in transactional DB operations.
In a transactional database, a deadlock happens when two processes, each within its own transaction, update two rows of information but in the opposite order. For example, process A updates row 1 then row 2 in the exact time-frame in which process B updates row 2 then row 1. Process A can't finish updating row 2 until process B is finished, but process B cannot finish updating row 1 until process A finishes. No matter how much time is allowed to pass, this situation will never resolve itself, and because of this, database management systems will typically kill the transaction of the process that has done the least amount of work.
Shishir