BlockingCollection

How to implement generic callbacks using the C# Task Parallel Library and IProducerConsumerCollection?

自古美人都是妖i submitted on 2019-12-06 09:05:17
I have a component that submits requests to a web-based API, but these requests must be throttled so as not to contravene the API's data limits. This means that all requests must pass through a queue to control the rate at which they are submitted, but they can (and should) execute concurrently to achieve maximum throughput. Each request must return some data to the calling code at some point in the future, when it completes. I'm struggling to create a nice model to handle the return of data. Using a BlockingCollection, I can't just return a Task<TResult> from the Schedule method, because the
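A common way to return a result from work that goes through a queue is to pair each request with a TaskCompletionSource<TResult>: the caller gets the Task immediately, and a consumer loop completes it when the throttled request finishes. The sketch below is not the original poster's code; the ThrottledScheduler type, the Schedule signature and the fixed delay between submissions are all assumptions made for illustration.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Hypothetical scheduler: callers get a Task<TResult> back immediately,
// while the actual requests are drained from a queue at a throttled rate.
public sealed class ThrottledScheduler<TResult> : IDisposable
{
    private readonly BlockingCollection<(Func<Task<TResult>> Work, TaskCompletionSource<TResult> Tcs)> _queue = new();
    private readonly TimeSpan _delayBetweenSubmissions;

    public ThrottledScheduler(TimeSpan delayBetweenSubmissions)
    {
        _delayBetweenSubmissions = delayBetweenSubmissions;
        Task.Run(ConsumeAsync); // a single consumer loop controls the submission rate
    }

    public Task<TResult> Schedule(Func<Task<TResult>> work)
    {
        var tcs = new TaskCompletionSource<TResult>(TaskCreationOptions.RunContinuationsAsynchronously);
        _queue.Add((work, tcs));
        return tcs.Task; // completes when the queued request eventually finishes
    }

    private async Task ConsumeAsync()
    {
        foreach (var (work, tcs) in _queue.GetConsumingEnumerable())
        {
            // Start the request but do not await it: requests run concurrently,
            // only their start times are throttled.
            _ = work().ContinueWith(t =>
            {
                if (t.IsFaulted) tcs.SetException(t.Exception!.InnerExceptions);
                else if (t.IsCanceled) tcs.SetCanceled();
                else tcs.SetResult(t.Result);
            });
            await Task.Delay(_delayBetweenSubmissions);
        }
    }

    public void Dispose() => _queue.CompleteAdding();
}
```

With something like this, a call site could simply write `var data = await scheduler.Schedule(() => httpClient.GetStringAsync(url));` (assuming a suitable HttpClient), so the queueing stays invisible to callers.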

Blocking collection: process n items at a time, continuing as soon as one is done

烈酒焚心 submitted on 2019-12-06 02:01:50
I have the following scenario. I take 50 jobs from the database into a blocking collection. Each job is a long-running one (or potentially could be), so I want to run them in separate threads. (I know it may be better to run them as Task.WhenAll and let the TPL figure it out, but I want to control how many run simultaneously.) Say I want to run 5 of them simultaneously (configurable). I create 5 tasks (TPL), one for each job, and run them in parallel. What I want to do is pick up the next job in the blocking collection as soon as one of the jobs from step 4 is complete, and keep going until
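One way to get the "pick up the next job as soon as one finishes" behaviour is to start a fixed number of worker tasks that all consume from the same BlockingCollection: each worker takes a new job the moment its current one completes, so at most five jobs are in flight at any time. This is only a minimal sketch under those assumptions; the Job record and the console output are placeholders for the real database entities and the long-running work.

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

class JobRunner
{
    // Hypothetical job type standing in for whatever the database returns.
    record Job(int Id);

    static void RunJobs(BlockingCollection<Job> jobs, int degreeOfParallelism)
    {
        // Each worker keeps taking the next job as soon as its current one
        // completes, so at most 'degreeOfParallelism' jobs run at once.
        var workers = Enumerable.Range(0, degreeOfParallelism)
            .Select(_ => Task.Run(() =>
            {
                foreach (var job in jobs.GetConsumingEnumerable())
                {
                    Console.WriteLine($"Processing job {job.Id}");
                    // ... long-running work here ...
                }
            }))
            .ToArray();

        Task.WaitAll(workers);
    }

    static void Main()
    {
        var jobs = new BlockingCollection<Job>();
        foreach (var id in Enumerable.Range(1, 50)) jobs.Add(new Job(id));
        jobs.CompleteAdding(); // all 50 loaded; workers drain until empty

        RunJobs(jobs, degreeOfParallelism: 5);
    }
}
```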

Does the .NET concurrent BlockingCollection have a memory leak?

随声附和 submitted on 2019-12-05 06:37:49
I'm using a producer/consumer pattern with a System.Collections.Concurrent.BlockingCollection<DataTable> to retrieve data from a database (producer) and create a Lucene index on the data (consumer). The producer grabs 10,000 records at a time and adds the set to the BlockingCollection<DataTable>. The consumer (which is a little bit slower) then grabs those 10,000 and creates an index. The blocking collection is bounded to 5 DataTables of 10,000 rows each. At first the program runs great, but after it gets about 150,000 rows in I noticed that my computer's memory is maxed out and it slows to a
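For reference, here is a minimal sketch of the setup described above: a producer adding DataTable batches into a BlockingCollection bounded to 5, and a consumer indexing them. FetchNextBatch and IndexBatch are placeholders, and the Dispose call on each consumed batch is an assumption about where memory might be retained, not a confirmed diagnosis of the leak.

```csharp
using System.Collections.Concurrent;
using System.Data;
using System.Threading.Tasks;

class IndexingPipeline
{
    // Bounded to 5 batches, matching the scenario in the question.
    private readonly BlockingCollection<DataTable> _batches = new(boundedCapacity: 5);

    public void Run()
    {
        var producer = Task.Run(() =>
        {
            DataTable? batch;
            while ((batch = FetchNextBatch(10000)) != null)
            {
                _batches.Add(batch); // blocks while 5 batches are already queued
            }
            _batches.CompleteAdding();
        });

        var consumer = Task.Run(() =>
        {
            foreach (var batch in _batches.GetConsumingEnumerable())
            {
                IndexBatch(batch);
                batch.Dispose(); // release the rows once indexed (assumes nothing else references them)
            }
        });

        Task.WaitAll(producer, consumer);
    }

    // Placeholders for the database read and the Lucene indexing step.
    private DataTable? FetchNextBatch(int rows) => null;
    private void IndexBatch(DataTable batch) { }
}
```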

update an ObservableCollection with a BlockingCollection

若如初见. submitted on 2019-12-04 08:44:57
I subscribe to a service that raises an event when a new element is received, and I add this element to a BlockingCollection. I have a second thread running that loops over the BlockingCollection to add/update the element in an observable collection. The problem is: how do you do the add on the ObservableCollection? I know I can't just call .Add on this type of collection, as it needs to be done on the UI thread. So I tried using different ObservableCollection subclasses that use the dispatcher to marshal the adding of elements, but every single time I end up with the same error "An unhandled
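Assuming WPF, one sketch of that second thread is a consumer loop over GetConsumingEnumerable that marshals each Add onto the UI thread via the application's Dispatcher. The class and member names below are hypothetical and stand in for the poster's view model and service callback.

```csharp
using System.Collections.Concurrent;
using System.Collections.ObjectModel;
using System.Threading.Tasks;
using System.Windows; // WPF

public class ItemsViewModel
{
    // Bound to the view; must only be mutated on the UI thread.
    public ObservableCollection<string> Items { get; } = new();

    private readonly BlockingCollection<string> _incoming = new();

    // Called by the service's event handler on whatever thread raises it.
    public void OnElementReceived(string element) => _incoming.Add(element);

    public void StartConsumer()
    {
        Task.Run(() =>
        {
            foreach (var element in _incoming.GetConsumingEnumerable())
            {
                // Marshal the Add back onto the UI thread; touching the
                // ObservableCollection directly from this worker thread throws.
                Application.Current.Dispatcher.Invoke(() => Items.Add(element));
            }
        });
    }
}
```

On .NET 4.5 and later, BindingOperations.EnableCollectionSynchronization is another known option: it lets a bound collection be updated from a background thread under a shared lock instead of marshalling every Add through the dispatcher.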

Why does iterating over GetConsumingEnumerable() not fully empty the underlying blocking collection

邮差的信 submitted on 2019-12-02 22:40:41
I have a quantifiable and repeatable problem using the Task Parallel Library, BlockingCollection<T>, ConcurrentQueue<T> and GetConsumingEnumerable while trying to create a simple pipeline. In a nutshell, adding entries to a default BlockingCollection<T> (which under the hood relies on a ConcurrentQueue<T>) from one thread does not guarantee that they will be popped off the BlockingCollection<T> by another thread calling the GetConsumingEnumerable() method. I've created a very simple WinForms application to reproduce/simulate this, which just prints integers to the screen. Timer1 is
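For contrast, the snippet below is a minimal console sketch (not the poster's WinForms repro) of how a default BlockingCollection<T> is normally drained: the consumer's foreach over GetConsumingEnumerable() blocks while waiting for items and only exits once CompleteAdding() has been called and the collection is empty, so every added item should be observed exactly once.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Pipeline
{
    static void Main()
    {
        var queue = new BlockingCollection<int>(); // backed by a ConcurrentQueue<int> by default

        var consumer = Task.Run(() =>
        {
            // Blocks waiting for items; terminates only after CompleteAdding()
            // has been called AND the collection has been fully drained.
            foreach (var value in queue.GetConsumingEnumerable())
                Console.WriteLine(value);
        });

        for (var i = 0; i < 1000; i++)
            queue.Add(i);

        queue.CompleteAdding();   // signal "no more items"
        consumer.Wait();          // by now every added item has been printed
    }
}
```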

What is the difference between ArrayBlockingQueue and LinkedBlockingQueue?

不打扰是莪最后的温柔 submitted on 2019-12-02 17:16:45
In what scenarios is it better to use an ArrayBlockingQueue, and when is it better to use a LinkedBlockingQueue? If LinkedBlockingQueue's default capacity is equal to Integer.MAX_VALUE, is it really helpful to use it as a BlockingQueue with the default capacity? ArrayBlockingQueue is backed by an array whose size will never change after creation. Setting the capacity to Integer.MAX_VALUE would create a big array with high costs in space. ArrayBlockingQueue is always bounded. LinkedBlockingQueue creates nodes dynamically until the capacity is reached, which is by default Integer.MAX_VALUE. Using such a big

How to speed up a chunky BlockingCollection implementation

雨燕双飞 submitted on 2019-12-02 10:05:53
I have used the BlockingCollection many times for implementing the producer/consumer pattern, but I have experienced bad performance with extremely granular data because of the associated overhead. This usually forces me to improvise by chunkifying/partitioning my data, in other words using a BlockingCollection<T[]> instead of a BlockingCollection<T>. Here is a recent example. This works, but it's ugly and error-prone. I end up using nested loops at both the producer and the consumer, and I must remember to Add whatever is left at the end of a producer's workload. So I had the idea of implementing
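To make the pattern concrete, here is a minimal sketch of the chunked producer/consumer the question describes, with the nested loops and the easy-to-forget final partial chunk. The chunk size, the Source and Process placeholders and the bound of 10 are arbitrary choices for illustration, not part of the original question.

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

class ChunkedPipeline
{
    static void Main()
    {
        const int chunkSize = 100;
        var chunks = new BlockingCollection<int[]>(boundedCapacity: 10);

        var producer = Task.Run(() =>
        {
            var buffer = new List<int>(chunkSize);
            foreach (var item in Source())
            {
                buffer.Add(item);
                if (buffer.Count == chunkSize)
                {
                    chunks.Add(buffer.ToArray());
                    buffer.Clear();
                }
            }
            if (buffer.Count > 0)
                chunks.Add(buffer.ToArray()); // the easy-to-forget final partial chunk
            chunks.CompleteAdding();
        });

        var consumer = Task.Run(() =>
        {
            foreach (var chunk in chunks.GetConsumingEnumerable()) // outer loop over chunks
                foreach (var item in chunk)                        // inner loop over items
                    Process(item);
        });

        Task.WaitAll(producer, consumer);
    }

    static IEnumerable<int> Source() { for (var i = 0; i < 100_000; i++) yield return i; }
    static void Process(int item) { /* fine-grained work */ }
}
```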