I have a class 'Database' that works as a wrapper for ADO.NET. For instance, when I need to execute a procedure, I call Database.ExecuteProcedure(procedureName, parameters).
If you are getting problems with deadlocks, it would be better to look at what the SQL code is doing. For example, lock-escalation deadlocks are very easy to create if you use the serializable isolation level (or whatever the equivalent is in your RDBMS), and they can be mitigated in a few ways, such as re-ordering queries or (in SQL Server at least) using the UPDLOCK hint to take an update lock earlier, so you don't end up with a competing read lock.
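As a rough illustration (not from the original answer), here is what the UPDLOCK approach might look like in plain ADO.NET; the Accounts table, Balance column, and the accountId/amount variables are made up for the sketch:
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        // UPDLOCK makes the read take (and hold) an update lock instead of a shared lock,
        // so the later UPDATE does not have to convert a shared lock into an exclusive one.
        SqlCommand read = new SqlCommand(
            "SELECT Balance FROM Accounts WITH (UPDLOCK) WHERE Id = @id",
            connection, transaction);
        read.Parameters.AddWithValue("@id", accountId);
        decimal balance = (decimal)read.ExecuteScalar();

        SqlCommand write = new SqlCommand(
            "UPDATE Accounts SET Balance = @balance WHERE Id = @id",
            connection, transaction);
        write.Parameters.AddWithValue("@balance", balance - amount);
        write.Parameters.AddWithValue("@id", accountId);
        write.ExecuteNonQuery();

        transaction.Commit();
    }
}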
Re-trying is going to be mixed: for example, if you are in a TransactionScope, it might already have aborted. But just at the purist level, if I get problems talking to the database I want my code to panic, and panic early; re-trying seems a bit hacky in this particular scenario.
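If you do decide to retry anyway, the retry has to wrap the whole TransactionScope rather than the single statement that failed, because once a deadlock aborts the transaction only a brand-new scope can be run again. A minimal sketch, where RetryOnDeadlock and DoWork are hypothetical placeholders:
// Sketch only: retry the entire unit of work, not just the statement that deadlocked.
// RetryOnDeadlock and DoWork are hypothetical placeholders, not real APIs.
RetryOnDeadlock(() =>
{
    using (TransactionScope scope = new TransactionScope())
    {
        DoWork();          // all the commands that belong to this unit of work
        scope.Complete();
    }
});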
First, I would review my SQL 2000 code and get to the bottom of why this deadlock is happening; papering over it with a retry may be hiding a bigger problem (e.g. a missing index or a bad query).
Second, I would review my architecture to confirm the deadlocking statement really needs to be called that frequently (does select count(*) from bob have to be called 100 times a second?).
However, if you really need some deadlock support and there are no errors in your SQL or architecture, try something along the following lines. (Note: I have had to use this technique for a system supporting thousands of queries per second, and we would hit deadlocks quite rarely.)
int retryCount = 3;
bool success = false;

while (retryCount > 0 && !success)
{
    try
    {
        // your sql here
        success = true;
    }
    catch (SqlException exception)
    {
        if (exception.Number != 1205)
        {
            // a SQL exception that is not a deadlock
            throw;
        }

        // Add a delay here if you wish.
        retryCount--;
        if (retryCount == 0) throw;
    }
}
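As the comment in the catch block suggests, a short (ideally slightly randomized) pause before the next attempt gives the competing transaction time to finish, which makes the retry far more likely to succeed.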
If the deadlock can be solved at the data layer, that's definitely the way to go: locking hints, redesigning the way the module works, and so on. NOLOCK isn't a panacea though - sometimes it can't be used for reasons of transactional integrity, and I have had cases of straight (albeit complex) data reads, with all the relevant tables NOLOCK'd, that still caused blocking on other queries.
Anyway, if you can't solve it at the data layer for whatever reason, how about:
bool OK = false;
Random Rnd = new Random();

while (!OK)
{
    try
    {
        // Command and rows are assumed to be set up by the surrounding code.
        rows = Command.ExecuteNonQuery();
        OK = true;
    }
    catch (Exception exDead)
    {
        if (exDead.Message.ToLower().Contains("deadlock"))
        {
            // Back off for a random interval so both victims don't immediately collide again.
            System.Threading.Thread.Sleep(Rnd.Next(1000, 5000));
        }
        else
        {
            throw;   // rethrow without resetting the stack trace
        }
    }
}
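Note that, unlike the retryCount version above, this loop never gives up; if you want an upper bound, combine the random sleep with a bounded retry count as in the previous answer.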
Building on @Sam's response, I present a general-purpose retry wrapper method:
private static T Retry<T>(Func<T> func)
{
    int count = 3;
    TimeSpan delay = TimeSpan.FromSeconds(5);
    while (true)
    {
        try
        {
            return func();
        }
        catch (SqlException e)
        {
            --count;
            if (count <= 0) throw;

            if (e.Number == 1205)
                _log.Debug("Deadlock, retrying", e);
            else if (e.Number == -2)
                _log.Debug("Timeout, retrying", e);
            else
                throw;

            Thread.Sleep(delay);
        }
    }
}

private static void Retry(Action action)
{
    Retry(() => { action(); return true; });
}
// Example usage
protected static void Execute(string connectionString, string commandString)
{
    _log.DebugFormat("SQL Execute \"{0}\" on {1}", commandString, connectionString);
    Retry(() => {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(commandString, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    });
}

protected static T GetValue<T>(string connectionString, string commandString)
{
    _log.DebugFormat("SQL Scalar Query \"{0}\" on {1}", commandString, connectionString);
    return Retry(() => {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(commandString, connection))
        {
            connection.Open();
            object value = command.ExecuteScalar();
            if (value is DBNull) return default(T);
            return (T)value;
        }
    });
}
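A hypothetical call site might look like the following; the connection string and the Accounts table are made up for the example, and _log is assumed to be an existing log4net-style logger:
// Illustrative usage only; substitute your own connection string and SQL.
string connectionString = "Data Source=.;Initial Catalog=MyDb;Integrated Security=True";
Execute(connectionString, "UPDATE Accounts SET Balance = Balance - 10 WHERE Id = 1");
int rowCount = GetValue<int>(connectionString, "SELECT COUNT(*) FROM Accounts");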