I think that, in most cases, the ArrayBlockingQueue will perform better than the LinkedBlockingQueue. However, that is the case when there is always …
1. LinkedBlockingQueue (a linked-list implementation, though not exactly the JDK implementation of LinkedList; it uses a static inner class Node to maintain the links between elements).
Constructor for LinkedBlockingQueue:

public LinkedBlockingQueue(int capacity) {
    if (capacity <= 0) throw new IllegalArgumentException();
    this.capacity = capacity;
    // maintains an underlying linked list (useful when the size is not known in advance)
    last = head = new Node<E>(null);
}
The Node class used to maintain the links:
static class Node<E> {
E item;
Node<E> next;
Node(E x) { item = x; }
}
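As a quick usage sketch (my own example, not part of the JDK source quoted above): LinkedBlockingQueue can be given an explicit capacity, and its no-arg constructor defaults the capacity to Integer.MAX_VALUE, which makes it effectively unbounded.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class LinkedBlockingQueueExample {
    public static void main(String[] args) throws InterruptedException {
        // Bounded: put() will block once 1000 elements are pending.
        BlockingQueue<String> bounded = new LinkedBlockingQueue<>(1000);

        // No-arg constructor: capacity defaults to Integer.MAX_VALUE (effectively unbounded).
        BlockingQueue<String> unbounded = new LinkedBlockingQueue<>();

        bounded.put("task-1");                 // blocks if the queue is full
        System.out.println(bounded.take());    // blocks if the queue is empty
        unbounded.offer("task-2");             // practically never blocks
    }
}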
2. ArrayBlockingQueue (an array implementation).
Constructor for ArrayBlockingQueue:

public ArrayBlockingQueue(int capacity, boolean fair) {
    if (capacity <= 0)
        throw new IllegalArgumentException();
    this.items = new Object[capacity]; // maintains an underlying array
    lock = new ReentrantLock(fair);
    notEmpty = lock.newCondition();
    notFull = lock.newCondition();
}
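A small usage sketch of that fair flag (my own illustration; the capacity and task names are made up): with fair = true the single ReentrantLock grants access to waiting producers and consumers in FIFO order, trading some throughput for predictable ordering.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ArrayBlockingQueueExample {
    public static void main(String[] args) throws InterruptedException {
        // Non-fair (default): higher throughput, thread access order is unspecified.
        BlockingQueue<String> fast = new ArrayBlockingQueue<>(1000);

        // Fair: waiting threads acquire the single lock in FIFO order.
        BlockingQueue<String> fair = new ArrayBlockingQueue<>(1000, true);

        fast.put("task-1");
        fair.put("task-2");
        System.out.println(fast.take() + ", " + fair.take());
    }
}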
The biggest difference between ArrayBlockingQueue and LinkedBlockingQueue is clear from the constructors: one has an underlying data structure of an array, the other of a linked list.
ArrayBlockingQueue uses a single-lock, two-condition algorithm, while LinkedBlockingQueue is a variant of the "two lock queue" algorithm: it has two locks and two conditions (takeLock paired with notEmpty, putLock paired with notFull).
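To make that difference concrete, here is a heavily simplified sketch of the two-lock idea (my own condensation, not the actual JDK source): producers contend only on putLock, consumers only on takeLock, and an atomic count ties the two ends together.

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Simplified two-lock bounded queue: puts touch only the tail, takes only the head.
class TwoLockQueueSketch<E> {
    static final class Node<E> {
        E item;
        Node<E> next;
        Node(E x) { item = x; }
    }

    private final int capacity;
    private final AtomicInteger count = new AtomicInteger(0);
    private Node<E> head;                                   // head is a dummy node; head.item is always null
    private Node<E> last;
    private final ReentrantLock takeLock = new ReentrantLock();
    private final Condition notEmpty = takeLock.newCondition();
    private final ReentrantLock putLock = new ReentrantLock();
    private final Condition notFull = putLock.newCondition();

    TwoLockQueueSketch(int capacity) {
        this.capacity = capacity;
        last = head = new Node<E>(null);
    }

    void put(E e) throws InterruptedException {
        int c;
        putLock.lockInterruptibly();
        try {
            while (count.get() == capacity)
                notFull.await();                            // block until a consumer frees a slot
            last = last.next = new Node<E>(e);              // link at the tail, guarded by putLock only
            c = count.getAndIncrement();
            if (c + 1 < capacity)
                notFull.signal();                           // still room: wake another waiting producer
        } finally {
            putLock.unlock();
        }
        if (c == 0)                                         // queue was empty: wake a waiting consumer
            signalNotEmpty();
    }

    E take() throws InterruptedException {
        E x;
        int c;
        takeLock.lockInterruptibly();
        try {
            while (count.get() == 0)
                notEmpty.await();                           // block until a producer adds an element
            Node<E> first = head.next;                      // unlink at the head, guarded by takeLock only
            head = first;
            x = first.item;
            first.item = null;                              // the taken node becomes the new dummy head
            c = count.getAndDecrement();
            if (c > 1)
                notEmpty.signal();                          // more elements left: wake another consumer
        } finally {
            takeLock.unlock();
        }
        if (c == capacity)                                  // queue was full: wake a waiting producer
            signalNotFull();
        return x;
    }

    private void signalNotEmpty() {
        takeLock.lock();
        try { notEmpty.signal(); } finally { takeLock.unlock(); }
    }

    private void signalNotFull() {
        putLock.lock();
        try { notFull.signal(); } finally { putLock.unlock(); }
    }
}

Because puts and takes use different locks, a producer and a consumer can make progress at the same time, which is the main reason LinkedBlockingQueue can scale better than the single-lock ArrayBlockingQueue under mixed load.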
So far I have compared these two implementations. Coming back to the original question: a similar question was asked on the concurrency mailing list; in that thread Doug Lea talks about DynamicArrayBlockingQueue, an implementation provided by Dawid Kurzyniec.
My 2 cents:
To start with, the bottom line is that you don't really need to care about the difference: even a plain LinkedBlockingQueue performs well enough for systems with microsecond-level latency requirements, so the performance gap between the two isn't that significant.
If you are writing a mission-critical, high-performance system and you use queues to pass messages between threads, you can always estimate the required queue size as [Queue Size] = [Max acceptable delay] * [Max message rate]. Anything that needs to grow beyond that capacity means you are suffering from a slow-consumer problem. In a mission-critical application such a delay means your system is malfunctioning, and some manual process might be needed to get it back to running properly.
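For example (the numbers are hypothetical, just to show the arithmetic):

public class QueueSizing {
    public static void main(String[] args) {
        double maxAcceptableDelaySeconds = 0.010;   // 10 ms latency budget (assumed)
        double maxMessagesPerSecond = 100_000;      // peak publish rate (assumed)
        long queueSize = (long) Math.ceil(maxAcceptableDelaySeconds * maxMessagesPerSecond);
        System.out.println("Required capacity: " + queueSize + " messages"); // 1000
        // Anything that would need to grow beyond this capacity indicates a slow consumer.
    }
}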
If your system isn't mission-critical, you can simply pause (block) the publisher until some consumers are available.
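A minimal sketch of that back-pressure approach (my own example with a made-up capacity and message names): with a bounded queue, put() itself blocks the publisher whenever the queue is full, so no extra coordination code is needed.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingPublisherDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2); // tiny capacity for the demo

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    Thread.sleep(500);                       // deliberately slow consumer
                    System.out.println("consumed " + queue.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.setDaemon(true);                            // demo only: consumer dies when main exits
        consumer.start();

        for (int i = 0; i < 5; i++) {
            queue.put("msg-" + i);                           // pauses the publisher once 2 messages are pending
            System.out.println("published msg-" + i);
        }
    }
}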