Imagine a WebForms application where there is a main method named CreateAll(). I can describe the process of the method's tasks step by step as follows:
1) Stores to da
Here is some code that may help you achieve your goal:
using System;
using System.Collections.Generic;
using System.Threading;

public static class Retry
{
    public static void Do(
        Action action,
        TimeSpan retryInterval,
        int retryCount = 3)
    {
        // Wrap the void action in a Func<object> so both overloads share one implementation.
        Do<object>(() =>
        {
            action();
            return null;
        }, retryInterval, retryCount);
    }

    public static T Do<T>(
        Func<T> action,
        TimeSpan retryInterval,
        int retryCount = 3)
    {
        var exceptions = new List<Exception>();

        for (int retry = 0; retry < retryCount; retry++)
        {
            try
            {
                // Wait before every attempt except the first.
                if (retry > 0)
                    Thread.Sleep(retryInterval);

                return action();
            }
            catch (Exception ex)
            {
                exceptions.Add(ex);
            }
        }

        // All attempts failed: surface every captured exception.
        throw new AggregateException(exceptions);
    }
}
Call and retry as below:
int result = Retry.Do(SomeFunctionWhichReturnsInt, TimeSpan.FromSeconds(1), 4);
Ref: http://gist.github.com/KennyBu/ac56371b1666a949daf8
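If the individual steps of your CreateAll() method return nothing, the void overload works the same way. SaveToDatabase and CallRemoteService below are placeholders for your own methods:
Retry.Do(() => SaveToDatabase(obj1), TimeSpan.FromSeconds(1), 4);
Retry.Do(() => CallRemoteService(obj2), TimeSpan.FromSeconds(1), 4);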
For me this sounds like 'Distributed Transactions', since you have different resources (database, service communication, file I/O) and want a transaction that possibly involves all of them.
In C# you could solve this with the Microsoft Distributed Transaction Coordinator (MSDTC). For every resource you need a resource manager. For databases such as SQL Server, and for file I/O, resource managers are already available, as far as I know. For others you can develop your own.
As an example, to execute these transactions you can use the TransactionScope class like this:
using (TransactionScope ts = new TransactionScope())
{
    // all db code here
    // if an error occurs before Complete() is called, leaving the using block
    // disposes the scope and rolls the transaction back
    ts.Complete();
}
(Example taken from here)
To develop your own resource manager, you have to implement IEnlistmentNotification and that can be a fairly complex task. Here is a short example.
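To give an idea of the shape of such a resource manager, here is a minimal, purely illustrative sketch of a volatile enlistment (the InMemoryResourceManager class and its in-memory buffering are assumptions made for this example):
using System.Transactions;

// Buffers a value in memory and only publishes it when the ambient transaction commits.
public class InMemoryResourceManager : IEnlistmentNotification
{
    private string _pendingValue;
    public string CommittedValue { get; private set; }

    public void SetValue(string value)
    {
        _pendingValue = value;
        // Join the ambient transaction, if there is one.
        if (Transaction.Current != null)
            Transaction.Current.EnlistVolatile(this, EnlistmentOptions.None);
    }

    public void Prepare(PreparingEnlistment preparingEnlistment)
    {
        // Vote "yes"; a real resource manager would durably record its state here.
        preparingEnlistment.Prepared();
    }

    public void Commit(Enlistment enlistment)
    {
        CommittedValue = _pendingValue;
        enlistment.Done();
    }

    public void Rollback(Enlistment enlistment)
    {
        _pendingValue = null;
        enlistment.Done();
    }

    public void InDoubt(Enlistment enlistment)
    {
        enlistment.Done();
    }
}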
Well...sounds like a really, really nasty situation. You can't open a transaction, write something to the database and go walk your dog in the park. Because transactions have this nasty habit of locking resources for everyone. This eliminates your best option: distributed transactions.
I would execute all operations and prepare a reverse script as I go. If the operation is a success, I would purge the script; otherwise I would run it. But this is open to potential pitfalls, and the script must be ready to handle them. For example: what if, in the meantime, someone has already updated the records you added, or calculated an aggregate based on your values?
Still: building a reverse script is the simple solution, no rocket science there. Just
List<Command> reverseScript;
and then, if you need to rollback:
using (TransactionScope tx = new TransactionScope())
{
    foreach (Command cmd in reverseScript)
        cmd.Execute();
    tx.Complete();
}
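Putting the idea together, the forward pass can register a compensating command for every successful step and only run them if something later fails. The step methods and the concrete command classes below are placeholders for your own operations:
var reverseScript = new List<Command>();
try
{
    SaveToDatabase(obj1);                                    // step 1
    reverseScript.Add(new DeleteFromDatabaseCommand(obj1));  // its undo

    CallRemoteService(obj2);                                 // step 2
    reverseScript.Add(new CancelServiceCallCommand(obj2));   // its undo

    reverseScript.Clear(); // everything succeeded: purge the script
}
catch
{
    reverseScript.Reverse(); // undo in the opposite order
    using (TransactionScope tx = new TransactionScope())
    {
        foreach (Command cmd in reverseScript)
            cmd.Execute();
        tx.Complete();
    }
    throw;
}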
Look at using Polly for retry scenarios; it seems to align well with your pseudo-code. At the end of this answer is a sample from the documentation. You can do all sorts of retry scenarios: plain retries, retry and wait, etc. For example, you could retry a complete transaction a number of times, or alternatively retry a set of idempotent actions a number of times and then write compensation logic if/when the retry policy finally fails.
The memento pattern is more for the undo/redo logic that you would find in a word processor (Ctrl-Z and Ctrl-Y).
Other helpful patterns to look at are a simple queue, a persistent queue or even a service bus, which can give you eventual consistency without having the user wait for everything to complete successfully.
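As a rough, purely illustrative sketch of that "queue it and return" idea (in-memory only; a persistent queue or service bus would survive application restarts):
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// The request handler only enqueues the work; a background loop keeps
// retrying each item until it succeeds, so the user never waits for it.
public static class BackgroundProcessor
{
    private static readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();

    public static void Enqueue(Action work)
    {
        _queue.Add(work);
    }

    public static void Start()
    {
        Task.Run(() =>
        {
            foreach (var work in _queue.GetConsumingEnumerable())
            {
                try { work(); }
                catch { _queue.Add(work); } // naive: re-queue and try again later
            }
        });
    }
}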
// Retry three times, calling an action on each retry
// with the current exception and retry count
Policy
    .Handle<DivideByZeroException>()
    .Retry(3, (exception, retryCount) =>
    {
        // do something
    });
A sample based on your pseudo-code may look as follows:
static bool CreateAll(object1 obj1, object2 obj2)
{
    // Policy to retry 3 times, waiting 5 seconds between retries.
    var policy =
        Policy
            .Handle<SqlException>()
            .WaitAndRetry(3, count => TimeSpan.FromSeconds(5));

    policy.Execute(() => UpdateDatabase1(obj1));
    policy.Execute(() => UpdateDatabase2(obj2));

    return true; // both updates succeeded (a failure after the retries would have thrown)
}
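If you also want compensation logic to run when the retries are finally exhausted, a fallback policy can be wrapped around the retry policy. This is only a sketch; RollbackDatabase1 is a placeholder for whatever compensation you need:
var retryPolicy = Policy
    .Handle<SqlException>()
    .WaitAndRetry(3, count => TimeSpan.FromSeconds(5));

var fallbackPolicy = Policy
    .Handle<SqlException>()
    .Fallback(() => RollbackDatabase1(obj1)); // compensation when all retries fail

// The outermost policy runs first, so the fallback wraps the retries.
Policy.Wrap(fallbackPolicy, retryPolicy)
      .Execute(() => UpdateDatabase1(obj1));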
You can opt for the Command pattern, where each command contains all the necessary information such as the connection string, service URL, retry count, etc. On top of this, you can consider Rx or TPL Dataflow blocks to do the plumbing.
High level view:
Update: The intention is to have separation of concerns. Retry logic is confined to one class, which is a decorator around an existing command. You can do more analysis and come up with proper command, invoker and receiver objects, and add rollback functionality.
public abstract class BaseCommand
{
    public abstract RxObservables Execute();
}

public class DBCommand : BaseCommand
{
    public override RxObservables Execute()
    {
        // execute the database work and return a stream of results
        return new RxObservables();
    }
}

public class WebServiceCommand : BaseCommand
{
    public override RxObservables Execute()
    {
        // call the web service and return a stream of results
        return new RxObservables();
    }
}

public class RetryCommand : BaseCommand // decorator around an existing db/web command
{
    private readonly BaseCommand _baseCommand;

    public RetryCommand(BaseCommand baseCommand)
    {
        _baseCommand = baseCommand;
    }

    public override RxObservables Execute()
    {
        try
        {
            // retry using Polly or custom retry logic
            return _baseCommand.Execute();
        }
        catch (Exception)
        {
            // retries exhausted: let the caller handle the failure
            throw;
        }
    }
}
public class TaskDispatcher
{
    private readonly BaseCommand _baseCommand;

    public TaskDispatcher(BaseCommand baseCommand)
    {
        _baseCommand = baseCommand;
    }

    public RxObservables ExecuteTask()
    {
        return _baseCommand.Execute();
    }
}
public class Orchestrator
{
    public void Orchestrate()
    {
        var taskDispatcherForDb = new TaskDispatcher(new RetryCommand(new DBCommand()));
        var taskDispatcherForWeb = new TaskDispatcher(new RetryCommand(new WebServiceCommand()));

        var dbResultStream = taskDispatcherForDb.ExecuteTask();
        var webResultStream = taskDispatcherForWeb.ExecuteTask();
    }
}
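One possible way to add the rollback functionality mentioned in the update is to give commands an Undo() and let the orchestrator unwind whatever already ran, in reverse order. UndoableCommand, Undo() and RollbackOrchestrator are assumptions for this sketch, not part of the code above:
public abstract class UndoableCommand : BaseCommand
{
    public abstract void Undo();
}

public class RollbackOrchestrator
{
    public void Orchestrate(params UndoableCommand[] commands)
    {
        var executed = new Stack<UndoableCommand>(); // System.Collections.Generic
        try
        {
            foreach (var command in commands)
            {
                command.Execute();
                executed.Push(command);
            }
        }
        catch
        {
            // Compensate in reverse order, then let the caller see the failure.
            while (executed.Count > 0)
                executed.Pop().Undo();
            throw;
        }
    }
}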