It seems there are lots of improvements in .NET 4.0 related to concurrency that might rely on concurrent priority queues. Is there a decent priority queue implementation inside the framework?
Maybe you can use my own implementation of a PriorityQueue. It implements a lot more than the usual push/pop/peek: features I added whenever I found a need for them. It also uses locks for concurrency.
Comments on the code are much appreciated :)
using System;
using System.Collections.Generic;
using System.Linq;

public class PriorityQueue<T> where T : class
{
    private readonly object lockObject = new object();

    // SortedList keeps keys in ascending order, so the lowest priority key is served first.
    private readonly SortedList<int, Queue<T>> list = new SortedList<int, Queue<T>>();

    public int Count
    {
        get
        {
            lock (this.lockObject)
            {
                return this.list.Sum(keyValuePair => keyValuePair.Value.Count);
            }
        }
    }

    public void Push(int priority, T item)
    {
        lock (this.lockObject)
        {
            if (!this.list.ContainsKey(priority))
                this.list.Add(priority, new Queue<T>());
            this.list[priority].Enqueue(item);
        }
    }

    // Removes and returns the oldest item with the lowest priority key,
    // or null if the queue is empty.
    public T Pop()
    {
        lock (this.lockObject)
        {
            if (this.list.Count > 0)
            {
                T obj = this.list.First().Value.Dequeue();
                if (this.list.First().Value.Count == 0)
                    this.list.Remove(this.list.First().Key);
                return obj;
            }
        }
        return null;
    }

    public T PopPriority(int priority)
    {
        lock (this.lockObject)
        {
            if (this.list.ContainsKey(priority))
            {
                T obj = this.list[priority].Dequeue();
                if (this.list[priority].Count == 0)
                    this.list.Remove(priority);
                return obj;
            }
        }
        return null;
    }

    public IEnumerable<T> PopAllPriority(int priority)
    {
        List<T> ret = new List<T>();
        lock (this.lockObject)
        {
            // Monitor locks are reentrant, so calling PopPriority while holding the lock is safe.
            while (this.list.ContainsKey(priority) && this.list[priority].Count > 0)
                ret.Add(this.PopPriority(priority));
        }
        return ret;
    }

    public T Peek()
    {
        lock (this.lockObject)
        {
            if (this.list.Count > 0)
                return this.list.First().Value.Peek();
        }
        return null;
    }

    public IEnumerable<T> PeekAll()
    {
        List<T> ret = new List<T>();
        lock (this.lockObject)
        {
            foreach (KeyValuePair<int, Queue<T>> keyValuePair in this.list)
                ret.AddRange(keyValuePair.Value.AsEnumerable());
        }
        return ret;
    }

    public IEnumerable<T> PopAll()
    {
        List<T> ret = new List<T>();
        lock (this.lockObject)
        {
            while (this.list.Count > 0)
                ret.Add(this.Pop());
        }
        return ret;
    }
}
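For illustration, here is a short usage sketch of the class above (assuming it is compiled into the same project). The expected ordering follows from SortedList sorting its keys ascending, with FIFO order within a single priority:

```csharp
var queue = new PriorityQueue<string>();

// Lower priority keys are dequeued first.
queue.Push(2, "low");
queue.Push(1, "high");
queue.Push(1, "high, but queued later");

Console.WriteLine(queue.Pop()); // "high" (priority 1, FIFO within a priority)
Console.WriteLine(queue.Pop()); // "high, but queued later"
Console.WriteLine(queue.Pop()); // "low"
Console.WriteLine(queue.Pop() == null); // True: the queue is now empty
```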
There is an implementation as part of "Samples for Parallel Programming with the .NET Framework" on MSDN. See ParallelExtensionsExtras.
Direct link to the source code for the file ConcurrentPriorityQueue.cs
Since all the current answers are out-of-date or don't offer a viable solution, there's an implementation on MSDN that's usable. Note that lower priorities get processed first in this implementation.
Recently, I was creating a state machine in which I needed time-stamped events. Rather than just a simple clock tick, I needed timed events with their own IDs so that I could distinguish one event from another.
Researching this problem led me to the idea of using a priority queue. I could en-queue the timed events along with their information in any order; the priority queue would take care of ordering the events properly. A timer would periodically check the priority queue to see if it is time for the event at the head of the queue to fire. If so, it de-queues the event and invokes the delegate associated with it. This approach was exactly what I was looking for.
Searching here at CodeProject
https://www.codeproject.com/Articles/13295/A-Priority-Queue-in-C
I found that a priority queue class had already been written. However, it occurred to me that I could easily write my own using my old friend, the skip list. This would have the advantage that the de-queue operation takes only O(1) time, while the en-queue operation is still O(log n) on average. I thought that using skip lists in this way was novel enough to merit its own article.
So here it is. I hope you find it interesting.
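The timer-polling idea described above can be sketched roughly like this. This is my own illustration, not the article's code: a SortedSet ordered by (due time, event id) stands in for the skip-list priority queue, and the (time, id) pairing is what lets two events with the same timestamp remain distinct:

```csharp
using System;
using System.Collections.Generic;

// Timed events ordered by (due time, id); a periodic timer calls Tick.
public class TimedEventQueue
{
    private readonly SortedSet<Tuple<DateTime, int>> events =
        new SortedSet<Tuple<DateTime, int>>();
    private readonly Dictionary<int, Action> handlers = new Dictionary<int, Action>();

    public void Schedule(int id, DateTime due, Action handler)
    {
        events.Add(Tuple.Create(due, id));
        handlers[id] = handler;
    }

    // Called periodically: fire every event whose due time has passed.
    public void Tick(DateTime now)
    {
        while (events.Count > 0 && events.Min.Item1 <= now)
        {
            var head = events.Min;      // event at the head of the queue
            events.Remove(head);
            Action handler = handlers[head.Item2];
            handlers.Remove(head.Item2);
            handler();                  // invoke the delegate associated with the event
        }
    }
}
```

A skip list (or any priority queue) would replace the SortedSet without changing the Tick loop.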
Options:
1) If your queue isn't ever going to become large, use a heap and lock the entire structure for each insertion and deletion.
2) If your queue is going to become large, you could use an algorithm like this:
http://www.research.ibm.com/people/m/michael/ipl-1996.pdf
This algorithm allows multiple threads to be working with the heap structure concurrently without risking corruption or deadlocks by supporting fine-grained locking of just parts of the tree at once. You'd have to benchmark to see whether the overhead of additional locking and unlocking operations cost more than contention over locking the entire heap.
3) If you aim to avoid locks altogether, another algorithm mentioned in the link above suggests using a FIFO queue of requests (easily implementable without locks) and a separate thread that is the only thing touching the heap. You'd have to measure how the overhead of switching between threads via synchronization objects compares to the overhead of plain straight-up locking.
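Option 1 can be sketched in a few lines: a classic array-backed binary min-heap where every public operation takes the same lock. The class and method names here are my own, not from any library:

```csharp
using System;
using System.Collections.Generic;

// A minimal locked min-heap: the whole structure is locked per operation (option 1).
public class LockedHeap<T>
{
    private readonly object gate = new object();
    private readonly List<KeyValuePair<int, T>> heap = new List<KeyValuePair<int, T>>();

    public void Push(int priority, T item)
    {
        lock (gate)
        {
            heap.Add(new KeyValuePair<int, T>(priority, item));
            int i = heap.Count - 1;
            while (i > 0)                       // sift the new item up
            {
                int parent = (i - 1) / 2;
                if (heap[parent].Key <= heap[i].Key) break;
                var tmp = heap[parent]; heap[parent] = heap[i]; heap[i] = tmp;
                i = parent;
            }
        }
    }

    public bool TryPop(out T item)
    {
        lock (gate)
        {
            if (heap.Count == 0) { item = default(T); return false; }
            item = heap[0].Value;
            heap[0] = heap[heap.Count - 1];     // move the last element to the root
            heap.RemoveAt(heap.Count - 1);
            int i = 0;
            while (true)                        // sift the root down
            {
                int left = 2 * i + 1, right = 2 * i + 2, smallest = i;
                if (left < heap.Count && heap[left].Key < heap[smallest].Key) smallest = left;
                if (right < heap.Count && heap[right].Key < heap[smallest].Key) smallest = right;
                if (smallest == i) break;
                var tmp = heap[smallest]; heap[smallest] = heap[i]; heap[i] = tmp;
                i = smallest;
            }
            return true;
        }
    }
}
```

This is the straightforward baseline worth benchmarking before reaching for the fine-grained or lock-free variants.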
Before you even get started, it would be worthwhile seeing just how bad the hit is on a straightforward implementation using locking. It may not be the most efficient implementation, but if it still performs orders of magnitude faster than you'll ever need, then the ease of maintenance (that is, anyone, including yourself a year from now, being able to simply look at the code and understand what it does) may outweigh the tiny fraction of CPU time spent in the queuing mechanism.
Hope this helps :-)
You may need to roll your own. A relatively easy way would be to have an array of regular queues, with priority decreasing.
Basically, you would insert into the queue for the appropriate priority. Then, on the consumer side, you would go down the list, from highest to lowest priority, checking to see if the queue is non-empty, and consuming an entry if so.
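The roll-your-own approach above might look like this: a fixed number of priority levels, each backed by a ConcurrentQueue (available in .NET 4.0), with the consumer scanning from highest to lowest priority. The class and member names are my own invention:

```csharp
using System.Collections.Concurrent;

// One ConcurrentQueue per priority level; level 0 is the highest priority.
public class LeveledQueue<T>
{
    private readonly ConcurrentQueue<T>[] levels;

    public LeveledQueue(int levelCount)
    {
        levels = new ConcurrentQueue<T>[levelCount];
        for (int i = 0; i < levelCount; i++)
            levels[i] = new ConcurrentQueue<T>();
    }

    public void Enqueue(int priority, T item)
    {
        levels[priority].Enqueue(item);
    }

    // Scan from highest to lowest priority and take the first available item.
    public bool TryDequeue(out T item)
    {
        foreach (var level in levels)
            if (level.TryDequeue(out item))
                return true;
        item = default(T);
        return false;
    }
}
```

Note that the scan is not atomic across levels: a high-priority item enqueued after the scan has already passed its level is simply picked up on the next call, which is usually acceptable for this pattern.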