Is there a queue implementation?

时光取名叫无心 2020-12-23 02:14

Can anyone suggest a Go container for a simple and fast FIFO queue? Go has 3 different containers: heap, list and vector. Which one is more suitable?

14 answers
  •  隐瞒了意图╮
    2020-12-23 03:02

    Most queue implementations are in one of three flavors: slice-based, linked list-based, and circular-buffer (ring-buffer) based.

    • Slice-based queues tend to waste memory because they do not reuse the memory previously occupied by removed items. Also, slice-based queues tend to be single-ended only.
    • Linked list queues can be better about memory reuse, but are generally a little slower and use more memory overall because of the overhead of maintaining links. They can offer the ability to add and remove items from the middle of the queue without moving memory around, but if you are doing much of that a queue is the wrong data structure.
    • Ring-buffer queues offer all the efficiency of slices, with the advantage of not wasting memory. Fewer allocations mean better performance. They are just as efficient at adding and removing items from either end, so you naturally get a double-ended queue. As a general recommendation, then, I suggest a ring-buffer based queue implementation; that is what the rest of this post discusses.

    The ring-buffer based queue reuses memory by wrapping its storage around: as the queue grows beyond one end of the underlying slice, it wraps around and stores additional elements at the other end of the slice (see the deque diagram).

    Also, a bit of code to illustrate:

    // PushBack appends an element to the back of the queue.  Implements FIFO when
    // elements are removed with PopFront(), and LIFO when elements are removed
    // with PopBack().
    func (q *Deque) PushBack(elem interface{}) {
        q.growIfFull()
        q.buf[q.tail] = elem
        // Calculate new tail position.
        q.tail = q.next(q.tail)
        q.count++
    }
    
    // next returns the next buffer position wrapping around buffer.
    func (q *Deque) next(i int) int {
        return (i + 1) & (len(q.buf) - 1) // bitwise modulus
    }
    

    This particular implementation always uses a buffer size that is a power of 2, and can therefore compute the bitwise modulus to be a little more efficient.
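
    To see why the bitwise trick works: when the buffer length is a power of 2, `len-1` is a mask of all low bits set, so `i & (len-1)` equals `i % len` for non-negative `i`. A small standalone check (not part of the answer's implementation, just an illustration):

    ```go
    package main

    import "fmt"

    func main() {
        const size = 8 // power of 2, so size-1 == 0b111 acts as a mask
        for i := 0; i < size; i++ {
            bitwise := (i + 1) & (size - 1) // the deque's "next" computation
            modulo := (i + 1) % size        // equivalent modulus for power-of-2 sizes
            fmt.Println(i, bitwise, modulo, bitwise == modulo)
        }
    }
    ```

    Every line prints `true` for the equality; the equivalence breaks down if the size is not a power of 2, which is why the implementation constrains its buffer sizes.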

    This means the slice needs to grow only when all its capacity is used up. With a resizing strategy that avoids growing and shrinking storage on the same boundary, this makes it very memory efficient.

    Here is code that resizes the underlying slice buffer:

    // resize resizes the deque to fit exactly twice its current contents. This is
    // used to grow the queue when it is full, and also to shrink it when it is     
    // only a quarter full.                                                         
    func (q *Deque) resize() {
        newBuf := make([]interface{}, q.count<<1)
        if q.tail > q.head {
            copy(newBuf, q.buf[q.head:q.tail])
        } else {
            n := copy(newBuf, q.buf[q.head:])
            copy(newBuf[n:], q.buf[:q.tail])
        }
        q.head = 0
        q.tail = q.count
        q.buf = newBuf
    }
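
    Putting the fragments above together, here is a minimal runnable sketch of the whole idea. The `minCapacity` constant, the `growIfFull` body, and the `PopFront` method are my assumptions about how the pieces fit; real implementations also shrink the buffer and support `PushFront`/`PopBack`:

    ```go
    package main

    import "fmt"

    const minCapacity = 8 // must be a power of 2 so the bitwise modulus works

    // Deque is a minimal ring-buffer double-ended queue (sketch, not production code).
    type Deque struct {
        buf   []interface{}
        head  int // index of the front element
        tail  int // index one past the back element
        count int
    }

    // next returns the next buffer position, wrapping around the buffer.
    func (q *Deque) next(i int) int { return (i + 1) & (len(q.buf) - 1) }

    // growIfFull allocates the initial buffer, or doubles it when full.
    func (q *Deque) growIfFull() {
        if q.buf == nil {
            q.buf = make([]interface{}, minCapacity)
            return
        }
        if q.count == len(q.buf) {
            q.resize()
        }
    }

    // resize copies the contents, unwrapped, into a buffer of twice the count.
    func (q *Deque) resize() {
        newBuf := make([]interface{}, q.count<<1)
        if q.tail > q.head {
            copy(newBuf, q.buf[q.head:q.tail])
        } else {
            n := copy(newBuf, q.buf[q.head:])
            copy(newBuf[n:], q.buf[:q.tail])
        }
        q.head = 0
        q.tail = q.count
        q.buf = newBuf
    }

    // PushBack appends an element to the back of the queue.
    func (q *Deque) PushBack(elem interface{}) {
        q.growIfFull()
        q.buf[q.tail] = elem
        q.tail = q.next(q.tail)
        q.count++
    }

    // PopFront removes and returns the front element (FIFO order).
    func (q *Deque) PopFront() interface{} {
        if q.count == 0 {
            panic("deque: PopFront() called on empty queue")
        }
        elem := q.buf[q.head]
        q.buf[q.head] = nil // drop the reference so it can be collected
        q.head = q.next(q.head)
        q.count--
        return elem
    }

    func main() {
        var q Deque
        for i := 1; i <= 3; i++ {
            q.PushBack(i)
        }
        fmt.Println(q.PopFront(), q.PopFront(), q.PopFront()) // FIFO: 1 2 3
    }
    ```

    Note that when the buffer is completely full, `head == tail`, which is why `resize` takes the two-copy branch to unwrap the contents into the new buffer.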
    

    Another thing to consider is whether you want concurrency safety built into the implementation. You may want to avoid this so that you can do whatever works best for your concurrency strategy, and you certainly do not want it if you do not need it, for the same reason you would not want a slice with some built-in serialization.
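
    If you do need concurrency safety, one common pattern is to add it outside the queue rather than inside it. A hedged sketch, using a plain slice as a stand-in for the queue (the locking pattern is the same whatever the backing store):

    ```go
    package main

    import (
        "fmt"
        "sync"
    )

    // SafeQueue wraps an unsynchronized queue with a mutex.
    // A plain slice stands in for the deque here for brevity.
    type SafeQueue struct {
        mu    sync.Mutex
        items []interface{}
    }

    // PushBack appends an element under the lock.
    func (q *SafeQueue) PushBack(v interface{}) {
        q.mu.Lock()
        defer q.mu.Unlock()
        q.items = append(q.items, v)
    }

    // PopFront removes the front element under the lock;
    // the bool reports whether the queue was non-empty.
    func (q *SafeQueue) PopFront() (interface{}, bool) {
        q.mu.Lock()
        defer q.mu.Unlock()
        if len(q.items) == 0 {
            return nil, false
        }
        v := q.items[0]
        q.items = q.items[1:]
        return v, true
    }

    func main() {
        var q SafeQueue
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func(n int) {
                defer wg.Done()
                q.PushBack(n)
            }(i)
        }
        wg.Wait()

        count := 0
        for {
            if _, ok := q.PopFront(); !ok {
                break
            }
            count++
        }
        fmt.Println(count) // 100
    }
    ```

    Keeping the lock in a wrapper type like this lets callers who do not need synchronization use the bare queue with no overhead.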

    There are a number of ring-buffer based queue implementations for Go if you do a search on godoc for deque. Choose one that best suits your tastes.
