Workings of AtomicReferenceArray

Submitted by [亡魂溺海] on 2019-12-10 10:33:46

Question


I am wondering if AtomicReferenceArray can be used as a replacement for ConcurrentLinkedQueue (if one could live with a bounded structure).

I currently have something like:

ConcurrentLinkedQueue<Object[]> queue = new ConcurrentLinkedQueue<Object[]>();

public void store(Price price, Instrument instrument, Object[] formats){
    Object[] elements = {price, instrument, formats};
    queue.offer(elements);
}

The store(..) is called by multiple threads.

I also have a consumer thread, which periodically wakes up and consumes the elements.

private class Consumer implements Runnable {

    @Override
    public void run() {
        List<Object[]> holder = drain(queue);
        for (Object[] elements : holder) {
            for (Object e : elements) {
                // process ...
            }
        }
    }

    private List<Object[]> drain(ConcurrentLinkedQueue<Object[]> queue) {
        // ...
    }
}

Can I swap out ConcurrentLinkedQueue in favor of AtomicReferenceArray and still maintain thread safety?

Specifically, can it atomically store the elements and establish a "happens-before" relationship, so that the consumer thread sees all the elements stored by the different threads?

I tried reading the source code for AtomicReferenceArray, but I am still not absolutely sure.

Cheers


Answer 1:


An AtomicReferenceArray can be used as a lock-free single-consumer / multiple-producer ring buffer. I was experimenting with an implementation a few months ago and have a working prototype. The advantages are a reduction in garbage creation, better cache locality, and better performance when the buffer is not full, because the structure is simpler. The disadvantages are a lack of strict FIFO semantics and poor performance when the buffer is full, since a producer must wait for a drain to occur. The latter might be mitigated by falling back to a ConcurrentLinkedQueue to avoid stalls.

A happens-before edge must be seen by the producers so that each acquires a unique slot. However, as only a single consumer is required, the edge for the consumed elements can be delayed until the drain is complete. In my usage the drain is amortized across threads, so the consumer is chosen by the successful acquisition of a try-lock. The release of that lock provides the edge, allowing the array updates to use lazy sets within the critical section.
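For concreteness, here is a minimal sketch of the shape such a buffer might take. It is not the answerer's actual implementation; the class and method names (RingBuffer, offer, drainTo) are invented, and it assumes a power-of-two capacity with a ReentrantLock standing in for the try-lock.

import java.util.List;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReferenceArray;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative multi-producer / single-consumer ring buffer over an AtomicReferenceArray.
final class RingBuffer<E> {
    private final AtomicReferenceArray<E> buffer;
    private final AtomicLong tail = new AtomicLong(); // next slot a producer may claim
    private final AtomicLong head = new AtomicLong(); // next slot the consumer will read
    private final ReentrantLock drainLock = new ReentrantLock();
    private final int mask;

    RingBuffer(int capacity) {
        if (Integer.bitCount(capacity) != 1) {
            throw new IllegalArgumentException("capacity must be a power of two");
        }
        buffer = new AtomicReferenceArray<>(capacity);
        mask = capacity - 1;
    }

    /** Producers race to claim a unique slot, then publish the element with a lazy set. */
    boolean offer(E e) {
        long t = tail.get();
        if (t - head.get() >= buffer.length()) {
            return false; // full: a caller could fall back to a ConcurrentLinkedQueue here
        }
        if (!tail.compareAndSet(t, t + 1)) {
            return false; // lost the race for this slot; the caller may simply retry
        }
        buffer.lazySet((int) (t & mask), e);
        return true;
    }

    /** Whichever thread wins the try-lock drains; head is published once the drain is done. */
    void drainTo(List<E> sink) {
        if (!drainLock.tryLock()) {
            return; // another thread is already draining
        }
        try {
            long h = head.get();
            long t = tail.get();
            for (; h < t; h++) {
                int i = (int) (h & mask);
                E e = buffer.get(i);
                if (e == null) {
                    break; // slot claimed by a producer but not yet published
                }
                buffer.lazySet(i, null); // lazy clear inside the critical section
                sink.add(e);
            }
            head.set(h); // volatile write frees the drained slots for producers
        } finally {
            drainLock.unlock(); // the unlock provides the edge for the next draining thread
        }
    }
}

If offer returns false, the caller could retry, drain on its own thread, or fall back to a ConcurrentLinkedQueue as suggested above.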

I would only use this approach in specialized scenarios where performance is highly tuned. In my usage it makes sense as an internal implementation detail for a cache. I wouldn't use it in general, though.




Answer 2:


I am wondering if AtomicReferenceArray can be used as a replacement for ConcurrentLinkedQueue (if one could live with a bounded structure).

Basically, no.

While the individual updates to the array elements happen atomically, that is not sufficient for a queue. You also need to keep track of where the head and tail of the queue are, and you can't do that atomically with the queue-cell read / write unless there is some additional locking / synchronization going on.

(This is independent of the inner workings of AtomicReferenceArray. The problem is simply that the API won't provide you with the functionality that you require to update two things in a single atomic operation.)
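To make the gap concrete, consider a hypothetical (deliberately broken) attempt that pairs an AtomicReferenceArray with a separate AtomicInteger for the tail index; the names and sizes below are made up for illustration.

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceArray;

// Deliberately naive: shows why two atomic variables are not one atomic operation.
final class NaiveArrayQueue<E> {
    private final AtomicReferenceArray<E> slots = new AtomicReferenceArray<>(1024);
    private final AtomicInteger tail = new AtomicInteger();

    void offer(E e) {
        int i = tail.getAndIncrement();   // step 1: claim an index
        // A consumer scanning the array here already sees the advanced tail,
        // but slots[i] is still null -- the claim and the publish are two
        // separate atomic operations, not one.
        slots.set(i % 1024, e);           // step 2: publish the element
    }
    // Wrap-around, capacity checks and a head index are omitted entirely;
    // handling them correctly is exactly where extra coordination is needed.
}

Tolerating the transient null (as in the ring-buffer sketch in the previous answer) or adding a lock is what the "additional locking / synchronization" above refers to.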



Source: https://stackoverflow.com/questions/9138723/workings-of-atomicreferencearray
