We have an API which uses Hibernate as the ORM tool, and we use c3p0 as the connection pool handler. We have no problems when we are under load. However, we are running into "un
In the High availability and clustering section of the MySQL Connector/J documentation, take a look at the properties, specifically autoReconnect and autoReconnectForPools. Use these properties in your JDBC connection URL.
They have helped me before when using MySQL, Hibernate, and C3P0. Hope that this helps.
So you have a checkoutTimeout of 3 seconds (3000 ms) set. That's the Exception you're seeing: clients are only permitted to wait three seconds to check out a Connection from the pool, and if three seconds isn't enough, they see your Exception.
The question is, why are clients taking so long to get a Connection? Normally checking out a Connection is a pretty fast operation. But if all Connections are checked out, then clients have to wait for (slow) Connection acquisition from the database.
You have your pool configured to pretty aggressively cull Connections. Any number of Connections above minPoolSize=5 will be destroyed if they are idle for more than maxIdleTimeExcessConnections=30 seconds. Yet your pool is configured for large-scale bursts: maxPoolSize=125. Suppose that your app is quiet for a while, and then gets a burst of Connection requests from clients. The pool will quickly run out of Connections and start to acquire, in bursts of acquireIncrement=5. But if there are suddenly 25 clients and the pool has only 5 Connections, it's not improbable that the 25th client might time out before acquiring a Connection.
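In other words, the configuration being described looks roughly like the sketch below. It uses c3p0's programmatic setters purely for illustration; the values are the ones from your setup, however you actually supply them (hibernate.cfg.xml, c3p0.properties, or code):

```java
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class CurrentPoolConfigSketch {
    public static void main(String[] args) {
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        cpds.setCheckoutTimeout(3000);            // clients give up after 3 seconds
        cpds.setMinPoolSize(5);                   // floor the pool shrinks back to
        cpds.setMaxPoolSize(125);                 // sized for large bursts
        cpds.setAcquireIncrement(5);              // Connections acquired per burst
        cpds.setMaxIdleTimeExcessConnections(30); // "excess" Connections culled after 30 s idle
    }
}
```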
There's a lot you can do, and these tweaks are separable; mix and match as you see fit. (A combined configuration sketch follows the three options below.)
Cull idle "excess" Connections less aggressively, so that in general, your pool has some capacity to service bursts of requests. You might drop maxIdleTimeExcessConnections entirely, and let Connections slowly wither after maxIdleTime=180 seconds of disuse. (Downside? A larger resource footprint for longer during periods of inactivity.)
Set minPoolSize to a higher value, so that it's unlikely that the pool will see a burst of activity for which it has way too few Connections. (Downside? Larger permanent resource footprint.)
Drop checkoutTimeout from your config. c3p0's default is to allow clients to wait indefinitely for a Connection. (Downside? Maybe you prefer clients to quickly report a failure rather than wait for possible success.)
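Putting all three tweaks together, a revised configuration might look roughly like this sketch; the minPoolSize of 25 is an illustrative value, not a recommendation tuned to your workload:

```java
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class RevisedPoolConfigSketch {
    public static void main(String[] args) {
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        cpds.setMaxPoolSize(125);
        // Tweak 1: don't set maxIdleTimeExcessConnections at all; let idle
        // Connections expire only via the general maxIdleTime.
        cpds.setMaxIdleTime(180);
        // Tweak 2: a higher floor, so bursts don't start from a nearly empty pool.
        // (25 is an illustrative value, not a prescription.)
        cpds.setMinPoolSize(25);
        // Tweak 3: no setCheckoutTimeout(...) call, so c3p0's default applies and
        // clients wait indefinitely rather than failing after 3 seconds.
    }
}
```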
I don't think the problem you're observing has much to do with Connection testing or MySQL timeouts per se, but that doesn't mean you shouldn't deal with those issues. I'll defer to nobeh's advice on the MySQL reconnect issue (I'm not a big MySQL user). You should consider implementing Connection testing: you already have a preferredTestQuery set, so tests should be reasonably fast. My usual choice is to combine testConnectionOnCheckin with idleConnectionTestPeriod. See http://www.mchange.com/projects/c3p0/#configuring_connection_testing
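A minimal sketch of that testing setup, again using c3p0's programmatic API; the test query and test period shown are placeholders, so keep whatever values fit your setup:

```java
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class ConnectionTestingSketch {
    public static void main(String[] args) {
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        cpds.setPreferredTestQuery("SELECT 1");   // placeholder; you already have one configured
        cpds.setTestConnectionOnCheckin(true);    // test Connections as they are returned to the pool
        cpds.setIdleConnectionTestPeriod(60);     // also test idle Connections every 60 s (illustrative value)
    }
}
```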
Good luck!