I have a requirement to make a scalable process. The process has mainly I/O operations with some minor CPU operations (mainly deserializing strings). The process queries the database for URLs, downloads the data from each URL, deserializes it to objects, persists the results into CRM Dynamics and into another database, and then updates the first database.
Here are the selling points of the Semaphore approach: simplicity. And here are the selling points of the TPL Dataflow approach: better utilization of whichever resource is the bottleneck, a separate degree of parallelism for each heterogeneous operation, and bounded memory usage.
As an example, let's review the following Semaphore implementation:
```csharp
string[] urls = FetchUrlsFromDB();
var cts = new CancellationTokenSource();
var semaphore = new SemaphoreSlim(10); // Degree of parallelism (DOP)

Task[] tasks = urls.Select(url => Task.Run(async () =>
{
    await semaphore.WaitAsync(cts.Token);
    try
    {
        string rawData = DownloadData(url);
        var data = Deserialize(rawData);
        PersistToCRM(data);
        MarkAsCompleted(url);
    }
    finally
    {
        semaphore.Release();
    }
})).ToArray();

Task.WaitAll(tasks);
```
The above implementation ensures that at most 10 urls will be processed concurrently at any given moment. There is no coordination between these parallel workflows, though. So, for example, it is entirely possible that at one moment all 10 workflows are downloading data, at another moment all 10 are deserializing raw data, and at yet another moment all 10 are persisting data to the CRM. This is far from ideal. Ideally you would like the bottleneck of the whole operation, whether it is the network adapter, the CPU or the database server, to work non-stop, rather than being underutilized (or completely idle) at random moments.
Another consideration is how much parallelization is optimal for each of the heterogeneous operations. A DOP of 10 may be optimal for the communication with the web, but too low or too high for the communication with the database. The Semaphore approach does not allow for that level of fine-tuning; your only option is to compromise on a single DOP value somewhere between these optimal values.
If the number of urls is very large, let's say 1,000,000, then the Semaphore approach above also raises serious memory-usage concerns. A url may have a size of 50 bytes on average, while a Task that is connected to a CancellationToken may be 10 times heavier or more. Of course you could change the implementation and use the SemaphoreSlim in a more clever way that doesn't generate so many tasks, but this would go against the primary (and only) selling point of this approach: its simplicity.
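For illustration, one such "more clever" shape (my sketch, not part of the original answer) starts the tasks lazily and prunes completed ones, so only roughly DOP tasks are alive at any moment:

```csharp
// Sketch only: reuses the same placeholder methods as above (DownloadData, Deserialize,
// PersistToCRM, MarkAsCompleted) and must run inside an async method.
var semaphore = new SemaphoreSlim(10); // DOP
var inFlight = new List<Task>();

foreach (string url in urls)
{
    await semaphore.WaitAsync(cts.Token); // throttle before creating the next task
    inFlight.Add(Task.Run(() =>
    {
        try
        {
            string rawData = DownloadData(url);
            var data = Deserialize(rawData);
            PersistToCRM(data);
            MarkAsCompleted(url);
        }
        finally
        {
            semaphore.Release();
        }
    }));
    inFlight.RemoveAll(t => t.IsCompleted); // drop references to finished tasks
}
await Task.WhenAll(inFlight);
```

This is more frugal with memory, but it is clearly no longer the simple one-liner throttle that makes the Semaphore approach attractive in the first place.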
The TPL Dataflow library solves all of these problems, at the cost of the (smallish) learning curve required to tame this powerful tool.
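To make that concrete, here is a rough sketch (mine, not code from the answer) of what a three-stage Dataflow pipeline for this workload could look like. The stage split, the MyData type, and every DOP/capacity number are illustrative assumptions:

```csharp
using System.Threading.Tasks.Dataflow; // System.Threading.Tasks.Dataflow NuGet package

// Download stage: I/O-bound, so it gets the highest degree of parallelism.
var downloadBlock = new TransformBlock<string, (string Url, string RawData)>(
    url => (url, DownloadData(url)),
    new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 10, BoundedCapacity = 100 });

// Deserialization stage: minor CPU work, so a small DOP is enough.
var deserializeBlock = new TransformBlock<(string Url, string RawData), (string Url, MyData Data)>(
    item => (item.Url, Deserialize(item.RawData)),
    new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 2, BoundedCapacity = 100 });

// Persistence stage: tuned independently for what the CRM/database can handle.
var persistBlock = new ActionBlock<(string Url, MyData Data)>(item =>
{
    PersistToCRM(item.Data);
    MarkAsCompleted(item.Url);
}, new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4, BoundedCapacity = 100 });

var linkOptions = new DataflowLinkOptions { PropagateCompletion = true };
downloadBlock.LinkTo(deserializeBlock, linkOptions);
deserializeBlock.LinkTo(persistBlock, linkOptions);

foreach (string url in FetchUrlsFromDB())
    await downloadBlock.SendAsync(url); // backpressure: waits while the bounded buffers are full

downloadBlock.Complete();
await persistBlock.Completion;
```

Each stage gets its own MaxDegreeOfParallelism, the BoundedCapacity values keep the in-between queues (and therefore memory) bounded, and SendAsync applies backpressure so 1,000,000 urls are fed in gradually instead of being turned into a million tasks upfront.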
The process has mainly IO operations with some minor CPU operations (mainly deserializing strings).
That's pretty much just I/O. Unless those strings are huge, the deserialization won't be worth parallelizing. The kind of CPU work you're doing will be lost in the noise.
So, you'll want to focus on concurrent asynchrony.
SemaphoreSlim is the standard pattern for this, as you've found. ForEachAsync can take several forms; note that in the blog post you referenced, there are 5 different implementations of this method, each of which is valid. "[T]here are many different semantics possible for iteration, and each will result in different design choices and implementations." For your purposes (not wanting CPU parallelization), you shouldn't consider the ones using Task.Run or partitioning. In an asynchronous concurrency world, any ForEachAsync implementation is just going to be syntactic sugar that hides which semantics it implements, which is why I tend to avoid it.
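For concreteness, here is roughly what such sugar can boil down to once the Task.Run and partitioning variants are ruled out; this is my own sketch, not one of the implementations from the referenced blog post:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static class ForEachAsyncExtensions
{
    // Purely asynchronous throttling: no Task.Run, no partitioning.
    // The name and shape are illustrative.
    public static Task ForEachAsync<T>(this IEnumerable<T> source, int dop, Func<T, Task> body)
    {
        var throttle = new SemaphoreSlim(dop);
        var tasks = source.Select(async item =>
        {
            await throttle.WaitAsync();
            try { await body(item); }
            finally { throttle.Release(); }
        });
        return Task.WhenAll(tasks);
    }
}
```

Whether a caller reading urls.ForEachAsync(10, ProcessUrlAsync) can tell that these are the semantics being applied is exactly the ambiguity being described here.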
This leaves you with SemaphoreSlim vs. ActionBlock. I generally recommend people start with SemaphoreSlim first, and consider moving to TPL Dataflow if their needs become more complex (in a way that seems like they would benefit from a dataflow pipeline).
E.g., "Part of the requirement is to make the parallelism degree configurable."
You may start off with allowing a degree of concurrency, where the thing being throttled is a single whole operation (fetch data from a url, deserialize the downloaded data to objects, persist into CRM Dynamics and to another database, and update the first database). This is where SemaphoreSlim would be a perfect solution.
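A minimal sketch of that single-knob version, assuming hypothetical async counterparts of the question's methods (DownloadDataAsync, PersistToCrmAsync, MarkAsCompletedAsync) and an example limit of 10:

```csharp
// One throttle around the whole operation; no Task.Run, since the work is almost all I/O.
var throttle = new SemaphoreSlim(10);

Task[] tasks = urls.Select(async url =>
{
    await throttle.WaitAsync();
    try
    {
        string rawData = await DownloadDataAsync(url); // hypothetical async download
        var data = Deserialize(rawData);               // minor CPU work, done inline
        await PersistToCrmAsync(data);                 // hypothetical async persist
        await MarkAsCompletedAsync(url);               // hypothetical async update
    }
    finally
    {
        throttle.Release();
    }
}).ToArray();

await Task.WhenAll(tasks);
```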
But you may decide you want multiple knobs: say, one degree of concurrency for how many urls you're downloading, a separate degree of concurrency for persisting, and a separate degree of concurrency for updating the original database. And then you'd also need to limit the "queues" in between these points (only so many deserialized objects in memory, etc.) to ensure that fast urls combined with slow databases don't cause your app to use too much memory. If these are useful semantics, then you have started approaching the problem from a dataflow perspective, and that's the point at which you may be better served by a library like TPL Dataflow.
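As a sketch of what those knobs look like in code (illustrative stage names and values, my assumption rather than anything from the question), each Dataflow block gets its own options, including a cap on its input buffer:

```csharp
using System.Threading.Tasks.Dataflow; // System.Threading.Tasks.Dataflow NuGet package

// Each stage gets its own concurrency limit and a bounded input buffer,
// so a fast producer cannot flood memory while a slow consumer catches up.
var downloadOptions = new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 20, BoundedCapacity = 50 };
var persistOptions  = new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4,  BoundedCapacity = 50 };
var updateOptions   = new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 2,  BoundedCapacity = 50 };
```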