Throttling asynchronous tasks


I would like to run a bunch of async tasks, with a limit on how many tasks may be pending completion at any given time.

Say you have 1000 URLs, and you only want to have 50 requests open at a time; but as soon as one request completes, you open up a connection to the next URL in the list. That way, there are always exactly 50 connections open at a time, until the URL list is exhausted.

3 Answers
  • 2020-11-22 13:28

    As requested, here's the code I ended up going with.

    The work is set up in a master-detail configuration, and each master is processed as a batch. Each unit of work is queued up in this fashion:

    var success = true;
    
    // Start processing all the master records.
    Master master;
    while (null != (master = await StoredProcedures.ClaimRecordsAsync(...)))
    {
        await masterBuffer.SendAsync(master);
    }
    
    // Finished sending master records
    masterBuffer.Complete();
    
    // Now, wait for all the batches to complete.
    await batchAction.Completion;
    
    return success;
    

    Masters are buffered one at a time so that work remains available to other outside processes. The details for each master are dispatched for processing via the masterTransform TransformManyBlock, and a BatchedJoinBlock is created to collect the details into a single batch.

    The actual work is done asynchronously in the detailTransform TransformBlock, 150 records at a time. BoundedCapacity is set to 300 so that too many Masters don't get buffered at the beginning of the chain, while still leaving room for enough detail records to be queued to keep 150 records in flight at once. The block outputs an object to its targets because the output is filtered across the links depending on whether it is a Detail or an Exception.

    The batchAction ActionBlock collects the output from all the batches, and performs bulk database updates, error logging, etc. for each batch.

    There will be several BatchedJoinBlocks, one for each master. Since each ISourceBlock is output sequentially and each batch only accepts the number of detail records associated with one master, the batches will be processed in order. Each block only outputs one group, and is unlinked on completion. Only the last batch block propagates its completion to the final ActionBlock.

    The dataflow network:

    // The dataflow network
    BufferBlock<Master> masterBuffer = null;
    TransformManyBlock<Master, Detail> masterTransform = null;
    TransformBlock<Detail, object> detailTransform = null;
    ActionBlock<Tuple<IList<object>, IList<object>>> batchAction = null;
    
    // Buffer master records to enable efficient throttling.
    masterBuffer = new BufferBlock<Master>(new DataflowBlockOptions { BoundedCapacity = 1 });
    
    // Sequentially transform master records into a stream of detail records.
    masterTransform = new TransformManyBlock<Master, Detail>(async masterRecord =>
    {
        var records = await StoredProcedures.GetObjectsAsync(masterRecord);
    
        // Filter the master records based on some criteria here
        var filteredRecords = records;
    
        // Only propagate completion to the last batch
        var propagateCompletion = masterBuffer.Completion.IsCompleted && masterTransform.InputCount == 0;
    
        // Create a batch join block to encapsulate the results of the master record.
        var batchjoinblock = new BatchedJoinBlock<object, object>(records.Count(), new GroupingDataflowBlockOptions { MaxNumberOfGroups = 1 });
    
        // Add the batch block to the detail transform pipeline's link queue, and link the batch block to the batch action block.
        var detailLink1 = detailTransform.LinkTo(batchjoinblock.Target1, detailResult => detailResult is Detail);
        var detailLink2 = detailTransform.LinkTo(batchjoinblock.Target2, detailResult => detailResult is Exception);
        var batchLink = batchjoinblock.LinkTo(batchAction, new DataflowLinkOptions { PropagateCompletion = propagateCompletion });
    
        // Unlink batchjoinblock upon completion.
        // (the returned task does not need to be awaited, despite the warning.)
        batchjoinblock.Completion.ContinueWith(task =>
        {
            detailLink1.Dispose();
            detailLink2.Dispose();
            batchLink.Dispose();
        });
    
        return filteredRecords;
    }, new ExecutionDataflowBlockOptions { BoundedCapacity = 1 });
    
    // Process each detail record asynchronously, 150 at a time.
    detailTransform = new TransformBlock<Detail, object>(async detail => {
        try
        {
            // Perform the action for each detail here asynchronously
            await DoSomethingAsync();
    
            return detail;
        }
        catch (Exception e)
        {
            success = false;
            return e;
        }
    
    }, new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 150, BoundedCapacity = 300 });
    
    // Perform the proper action for each batch
    batchAction = new ActionBlock<Tuple<IList<object>, IList<object>>>(async batch =>
    {
        var details = batch.Item1.Cast<Detail>();
        var errors = batch.Item2.Cast<Exception>();
    
        // Do something with the batch here
    }, new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4 });
    
    masterBuffer.LinkTo(masterTransform, new DataflowLinkOptions { PropagateCompletion = true });
    masterTransform.LinkTo(detailTransform, new DataflowLinkOptions { PropagateCompletion = true });
    
  • 2020-11-22 13:32

    As suggested, use TPL Dataflow.

    A TransformBlock<TInput, TOutput> may be what you're looking for.

    You define a MaxDegreeOfParallelism to limit how many strings can be transformed (i.e., how many urls can be downloaded) in parallel. You then post urls to the block, and when you're done you tell the block you're done adding items and you fetch the responses.

    var downloader = new TransformBlock<string, HttpResponse>(
            url => Download(url),
            new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 50 }
        );
    
    var buffer = new BufferBlock<HttpResponse>();
    downloader.LinkTo(buffer);
    
    foreach(var url in urls)
        downloader.Post(url);
        //or await downloader.SendAsync(url);
    
    downloader.Complete();
    await downloader.Completion;
    
    IList<HttpResponse> responses;
    if (buffer.TryReceiveAll(out responses))
    {
        //process responses
    }
    

    Note: The TransformBlock buffers both its input and output. Why, then, do we need to link it to a BufferBlock?

    Because the TransformBlock won't complete until all items (HttpResponse) have been consumed, and await downloader.Completion would hang. Instead, we let the downloader forward all its output to a dedicated buffer block - then we wait for the downloader to complete, and inspect the buffer block.
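
    If you don't need to collect all the responses in memory, an alternative is to drain the downloader's output with a linked consumer block instead of a BufferBlock. Here's a minimal sketch of that variation, assuming the same Download method and urls collection as above; awaiting the consumer's Completion only finishes once every response has been handled:

    var downloader = new TransformBlock<string, HttpResponse>(
            url => Download(url),
            new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 50 }
        );
    
    // Consume each response as soon as it is produced, so the pipeline never
    // accumulates unconsumed output and Completion can actually complete.
    var consumer = new ActionBlock<HttpResponse>(response =>
    {
        // process the response here
    });
    
    // Forward both data and completion from the downloader to the consumer.
    downloader.LinkTo(consumer, new DataflowLinkOptions { PropagateCompletion = true });
    
    foreach (var url in urls)
        await downloader.SendAsync(url);
    
    downloader.Complete();
    await consumer.Completion;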

  • 2020-11-22 13:36

    Say you have 1000 URLs, and you only want to have 50 requests open at a time; but as soon as one request completes, you open up a connection to the next URL in the list. That way, there are always exactly 50 connections open at a time, until the URL list is exhausted.

    The following simple solution has surfaced many times here on SO. It doesn't use blocking code and doesn't create threads explicitly, so it scales very well:

    const int MAX_DOWNLOADS = 50;
    
    static async Task DownloadAsync(string[] urls)
    {
        using (var semaphore = new SemaphoreSlim(MAX_DOWNLOADS))
        using (var httpClient = new HttpClient())
        {
            var tasks = urls.Select(async url => 
            {
                await semaphore.WaitAsync();
                try
                {
                    var data = await httpClient.GetStringAsync(url);
                    Console.WriteLine(data);
                }
                finally
                {
                    semaphore.Release();
                }
            });
    
            await Task.WhenAll(tasks);
        }
    }
    

    The thing is, the processing of the downloaded data should be done on a different pipeline, with a different level of parallelism, especially if it's CPU-bound processing.

    E.g., you'd probably want to have 4 threads concurrently doing the data processing (the number of CPU cores), and up to 50 pending requests for more data (which do not use threads at all). AFAICT, this is not what your code is currently doing.

    That's where TPL Dataflow or Rx may come in handy as a preferred solution (a minimal Dataflow sketch of the same shape follows the plain-TPL code below). Yet it is certainly possible to implement something like this with plain TPL. Note, the only blocking code here is the one doing the actual data processing inside Task.Run:

    const int MAX_DOWNLOADS = 50;
    const int MAX_PROCESSORS = 4;
    
    // process data
    class Processing
    {
        SemaphoreSlim _semaphore = new SemaphoreSlim(MAX_PROCESSORS);
        HashSet<Task> _pending = new HashSet<Task>();
        object _lock = new Object();
    
        async Task ProcessAsync(string data)
        {
            await _semaphore.WaitAsync();
            try
            {
                await Task.Run(() =>
                {
                // simulate work
                    Thread.Sleep(1000);
                    Console.WriteLine(data);
                });
            }
            finally
            {
                _semaphore.Release();
            }
        }
    
        public async void QueueItemAsync(string data)
        {
            var task = ProcessAsync(data);
            lock (_lock)
                _pending.Add(task);
            try
            {
                await task;
            }
            catch
            {
                if (!task.IsCanceled && !task.IsFaulted)
                    throw; // not the task's exception, rethrow
                // don't remove faulted/cancelled tasks from the list
                return;
            }
            // remove successfully completed tasks from the list 
            lock (_lock)
                _pending.Remove(task);
        }
    
        public async Task WaitForCompleteAsync()
        {
            Task[] tasks;
            lock (_lock)
                tasks = _pending.ToArray();
            await Task.WhenAll(tasks);
        }
    }
    
    // download data
    static async Task DownloadAsync(string[] urls)
    {
        var processing = new Processing();
    
        using (var semaphore = new SemaphoreSlim(MAX_DOWNLOADS))
        using (var httpClient = new HttpClient())
        {
            var tasks = urls.Select(async (url) =>
            {
                await semaphore.WaitAsync();
                try
                {
                    var data = await httpClient.GetStringAsync(url);
                    // put the result on the processing pipeline
                    processing.QueueItemAsync(data);
                }
                finally
                {
                    semaphore.Release();
                }
            });
    
            await Task.WhenAll(tasks.ToArray());
            await processing.WaitForCompleteAsync();
        }
    }
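    
    For comparison, the same two-stage shape can be expressed with TPL Dataflow in a few lines. This is only a minimal sketch under the same assumptions (urls is the array of URLs; ProcessData is a hypothetical placeholder for your CPU-bound processing): the download block is throttled to 50 concurrent requests and the processing block to 4.

    var httpClient = new HttpClient();
    
    // Download up to 50 URLs concurrently; no threads are blocked while waiting.
    var downloadBlock = new TransformBlock<string, string>(
        url => httpClient.GetStringAsync(url),
        new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 50 });
    
    // Process the downloaded data with up to 4 concurrent workers.
    var processBlock = new ActionBlock<string>(
        data => ProcessData(data), // ProcessData is a hypothetical CPU-bound handler
        new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4 });
    
    downloadBlock.LinkTo(processBlock, new DataflowLinkOptions { PropagateCompletion = true });
    
    foreach (var url in urls)
        await downloadBlock.SendAsync(url);
    
    downloadBlock.Complete();
    await processBlock.Completion;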
    