Workings of AtomicReferenceArray

Submitted by 主宰稳场 on 2019-12-06 05:11:10

An AtomicReferenceArray can be used as a lock-free single consumer / multiple producer ring buffer. I was experimenting with an implementation a few months ago and have a working prototype. The advantages are reduced garbage creation, better cache locality, and better performance when the buffer is not full, since the design is simpler. The disadvantages are a lack of strict FIFO semantics and poor performance when the buffer is full, as a producer must wait for a drain to occur. This might be mitigated by falling back to a ConcurrentLinkedQueue to avoid stalls.

Producers must observe the happens-before edge so that each acquires a unique slot. However, as only a single consumer is required, publication to the consumer can be delayed until the drain is complete. In my usage the drain is amortized across threads, so the consumer is chosen by the successful acquisition of a try-lock. The release of that lock provides the edge, allowing the array updates to use lazy sets within the critical section.
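The design described above might be sketched roughly as follows. This is a hypothetical reconstruction, not the author's actual code: the class name, the use of an `AtomicInteger` tail counter for slot acquisition, and the `ReentrantLock.tryLock()` drain are all assumptions layered on the description.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceArray;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Consumer;

// Hypothetical MPSC ring buffer sketch based on the description above.
final class MpscBuffer<E> {
    private final AtomicReferenceArray<E> slots;
    private final AtomicInteger tail = new AtomicInteger();
    private final ReentrantLock drainLock = new ReentrantLock();

    MpscBuffer(int capacity) {
        this.slots = new AtomicReferenceArray<>(capacity);
    }

    /** Returns false when full; callers could fall back to a ConcurrentLinkedQueue. */
    boolean offer(E e) {
        for (;;) {
            int t = tail.get();
            if (t >= slots.length()) {
                return false;                   // full until a drain resets the tail
            }
            if (tail.compareAndSet(t, t + 1)) { // CAS acquires a unique slot
                slots.set(t, e);                // publish the element
                return true;
            }
        }
    }

    /** Whichever caller wins the try-lock becomes the single consumer. */
    void drain(Consumer<E> action) {
        if (!drainLock.tryLock()) {
            return;                             // another thread is already draining
        }
        try {
            int head = 0;
            for (;;) {
                int t = tail.get();
                for (int i = head; i < t; i++) {
                    E e;
                    while ((e = slots.get(i)) == null) {
                        Thread.onSpinWait();    // slot claimed but not yet published
                    }
                    action.accept(e);
                    slots.lazySet(i, null);     // lazy set; ordered by the unlock below
                }
                head = t;
                if (tail.compareAndSet(t, 0)) {
                    return;                     // no new claims since t: slots reusable
                }                               // else more elements arrived; keep draining
            }
        } finally {
            drainLock.unlock();                 // release provides the happens-before edge
        }
    }
}
```

Note that elements appear in slot-claim order within a drain, but there is no strict FIFO guarantee across racing producers, matching the trade-off described above.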

I would only use this approach in specialized scenarios where performance is highly tuned. In my usage it makes sense as an internal implementation detail for a cache. I wouldn't use it in general, though.

I am wondering if AtomicReferenceArray can be used as a replacement for ConcurrentLinkedQueue (if one could live with a bounded structure).

Basically, no.

While the individual updates to the array elements happen atomically, that is not sufficient for a queue. You also need to keep track of where the head and tail of the queue are, and you can't do that atomically with the queue cell read / write unless there's some additional locking / synchronization going on.

(This is independent of the inner workings of AtomicReferenceArray. The problem is simply that the API won't provide you with the functionality that you require to update two things in a single atomic operation.)
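To illustrate the gap: each call below is individually atomic, yet nothing ties the index update and the element write together, so another thread can observe the tail already advanced while the slot is still empty. The class and method names here are illustrative only.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceArray;

// Illustrative only: two individually atomic steps with a visible gap between them.
class TwoStepEnqueue {
    static final AtomicReferenceArray<String> slots = new AtomicReferenceArray<>(4);
    static final AtomicInteger tail = new AtomicInteger();

    static int enqueue(String e) {
        int i = tail.getAndIncrement(); // atomic step 1: advance the tail
        // Between these two lines another thread can already observe
        // tail == i + 1 while slots.get(i) is still null.
        slots.set(i, e);                // atomic step 2: write the element
        return i;
    }
}
```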
