Implement your own blocking queue in Java

日久生厌 2021-02-02 00:01

I know this question has been asked and answered many times before, but I just couldn't figure out the trick from the examples found around the internet, like this or that one.

4 answers
  • 2021-02-02 00:12

    I think logically there is no harm in doing that extra check before notifyAll().

    You can simply call notifyAll() every time you put/get something from the queue. Everything will still work, and your code is shorter. However, there is also no harm in checking whether anyone is potentially waiting (by checking if the queue has just hit its boundary) before you invoke notifyAll(). This extra piece of logic saves unnecessary notifyAll() invocations.

    It just depends on whether you want shorter, cleaner code, or code that runs more efficiently. (I haven't looked into notifyAll()'s implementation. If it is a cheap operation when no one is waiting, the performance gain from that extra check may not be noticeable anyway.)
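
    As a rough sketch of that trade-off (this is not the original poster's code; the fields queue and limit and the method name put are assumed here), the two options differ only in whether the notification is guarded:

    // Option 1: always notify (shorter code)
    public synchronized void put(T t) throws InterruptedException {
        while (queue.size() == limit)
            wait();
        queue.add(t);
        notifyAll();                 // runs even when no thread can possibly be waiting
    }

    // Option 2: notify only on the empty-to-non-empty transition (fewer wake-ups)
    public synchronized void put(T t) throws InterruptedException {
        while (queue.size() == limit)
            wait();
        if (queue.isEmpty())         // consumers can only be waiting if the queue was empty
            notifyAll();
        queue.add(t);
    }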

  • 2021-02-02 00:15

    The reason the authors used notifyAll() is simple: they had no clue whether or not it was necessary, so they opted for the "safer" option.

    In the above example it would be sufficient to just call notify(), since for each single element added, only a single waiting thread can be served under all circumstances.

    This becomes more obvious if your queue also has an option to add multiple elements in one step, like addAll(Collection<T> list), because in that case more than one thread waiting on an empty list could be served: to be exact, as many threads as elements have been added.

    The notifyAll(), however, causes extra overhead in the special single-element case, as many threads are woken up unnecessarily and therefore have to be put back to sleep, blocking queue access in the meantime. So replacing notifyAll() with notify() would improve speed in this special case.

    But then, not using wait/notify and synchronized at all, and instead using the java.util.concurrent package, would increase speed far more than any smart wait/notify implementation could ever achieve.
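
    For reference, a minimal sketch of what "just use the concurrent package" could look like with ArrayBlockingQueue (the demo class name is made up; the capacity and element type are arbitrary):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ConcurrentPackageDemo {
        public static void main(String[] args) throws InterruptedException {
            // Bounded queue with capacity 10; all blocking and signalling is handled by the library.
            BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

            queue.put("hello");          // blocks while the queue is full
            String s = queue.take();     // blocks while the queue is empty
            System.out.println(s);
        }
    }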

  • 2021-02-02 00:18

    I know this is an old question by now, but after reading the question and the answers I couldn't help myself; I hope you find this useful.

    Regarding checking whether the queue is actually full or empty before notifying other waiting threads: keep in mind that put(T t) and T get() are both synchronized methods, so they share the same monitor and only one thread can be executing either of them at a time. They still cooperate, though, because wait() releases that monitor: if thread A is blocked inside put(T t) because the queue is full, thread B can still enter T get() and make room before thread A resumes. The extra check before notifyAll() simply avoids waking threads when the queue did not just leave its empty or full state, and it makes the design feel a bit safer, because you cannot know if or when a CPU context switch will happen.

    A better and more recommended approach is to use ReentrantLock with Conditions:

    //I've edited the source code from this link

    import java.util.LinkedList;
    import java.util.Queue;
    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReentrantLock;

    public class BQueue<T> {

        private final Queue<T> q = new LinkedList<>();
        private final int limit;

        private final Lock lock;
        // Producers wait on isFullCondition, consumers wait on isEmptyCondition.
        private final Condition isFullCondition;
        private final Condition isEmptyCondition;

        public BQueue() {
            this(Integer.MAX_VALUE);
        }

        public BQueue(int limit) {
            this.limit = limit;
            lock = new ReentrantLock();
            isFullCondition = lock.newCondition();
            isEmptyCondition = lock.newCondition();
        }

        public void put(T t) {
            lock.lock();
            try {
                while (isFull()) {
                    try {
                        // Releases the lock and waits until a consumer signals free space.
                        isFullCondition.await();
                    } catch (InterruptedException ex) {
                        // ignored for brevity; real code should propagate or restore the interrupt
                    }
                }
                q.add(t);
                // Wake only the threads waiting for an element.
                isEmptyCondition.signalAll();
            } finally {
                lock.unlock();
            }
        }

        public T get() {
            T t = null;
            lock.lock();
            try {
                while (isEmpty()) {
                    try {
                        // Releases the lock and waits until a producer signals a new element.
                        isEmptyCondition.await();
                    } catch (InterruptedException ex) {
                        // ignored for brevity; real code should propagate or restore the interrupt
                    }
                }
                t = q.poll();
                // Wake only the threads waiting for free space.
                isFullCondition.signalAll();
            } finally {
                lock.unlock();
            }
            return t;
        }

        private boolean isFull() {
            return q.size() >= limit;
        }

        private boolean isEmpty() {
            return q.isEmpty();
        }
    }


    Using this approach there is no need for double checking. The lock object is shared between the two methods, so only one thread can be inside either of them at a time, and because each side waits on its own Condition, only the threads waiting because the queue is full are signalled when space becomes available, and the same goes for the threads waiting because the queue is empty. Compared to notifyAll() on a single monitor, which wakes every waiting thread regardless of what it is waiting for, this leads to better CPU utilization. You can find a more detailed example with source code here.
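
    For illustration, a minimal producer/consumer driver for the BQueue class above might look like this (the demo class name, element count, and capacity are invented for the example):

    public class BQueueDemo {
        public static void main(String[] args) throws InterruptedException {
            BQueue<Integer> queue = new BQueue<>(5);

            // Producer: blocks inside put() whenever the queue already holds 5 elements.
            Thread producer = new Thread(() -> {
                for (int i = 0; i < 20; i++) {
                    queue.put(i);
                }
            });

            // Consumer: blocks inside get() whenever the queue is empty.
            Thread consumer = new Thread(() -> {
                for (int i = 0; i < 20; i++) {
                    System.out.println(queue.get());
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }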

  • 2021-02-02 00:26

    I would like to write a simple blocking queue implementation which will help people understand this easily. This is for someone who is new to the topic.

    import java.util.LinkedList;
    import java.util.List;

    class BlockingQueue<T> {

        private final List<T> queue = new LinkedList<>();

        private final int limit;

        public BlockingQueue(int limit) {
            this.limit = limit;
        }

        public synchronized void enqueue(T ele) throws InterruptedException {
            // Block while the queue is full.
            while (queue.size() == limit)
                wait();
            // The queue was empty, so consumers may be waiting: wake them up.
            if (queue.size() == 0)
                notifyAll();
            // add
            queue.add(ele);
        }

        public synchronized T deque() throws InterruptedException {
            // Block while the queue is empty.
            while (queue.size() == 0)
                wait();
            // The queue was full, so producers may be waiting: wake them up.
            if (queue.size() == limit)
                notifyAll();
            return queue.remove(0);
        }
    }
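
    A small driver (the class name, element counts, and sleep timing are invented for illustration) shows the blocking behavior: the producer fills the queue and then blocks in enqueue() until the slower consumer makes room.

    public class BlockingQueueDemo {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<Integer> queue = new BlockingQueue<>(3);

            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 10; i++) {
                        queue.enqueue(i);
                        System.out.println("produced " + i);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread consumer = new Thread(() -> {
                try {
                    for (int i = 0; i < 10; i++) {
                        Thread.sleep(100);   // consume slowly so the producer hits the limit and blocks
                        System.out.println("consumed " + queue.deque());
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }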

