Question
I'm quite new to multithreaded programming; I only know the common producer-consumer queue. I'm using the Boost C++ libraries, and I don't know whether it is better to use `boost::lockfree::queue` or a wrapper class around `std::queue` that uses a `mutex` and a `condition_variable`.
When is it better to use lock-free data structures, and when is a simple implementation based on `mutex` and `condition_variable` the better choice?
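For reference, a minimal sketch of the mutex/condition_variable wrapper described above (the class and method names here are illustrative, not from any library):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>

// A blocking producer-consumer queue: producers push, consumers
// block in pop() until an element is available.
template <typename T>
class BlockingQueue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(value));
        }
        cv_.notify_one();  // wake one waiting consumer
    }

    // Blocks until an element is available, then removes and returns it.
    T pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        T value = std::move(queue_.front());
        queue_.pop();
        return value;
    }

private:
    std::queue<T> queue_;
    std::mutex mutex_;
    std::condition_variable cv_;
};
```

Note that `notify_one()` is called after the lock is released, so the woken consumer does not immediately block again on the mutex.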
Answer 1:
Try both in your app, see which performs best.
Typically, polling a lock-free queue works best when the queue nearly always has entries, a blocking queue works best when the queue is nearly always empty.
The downside of blocking queues is latency, typically on the order of 2-20 µs, due to kernel signaling. This can be mitigated by designing the system so that the work done by the consumer threads on each queued item takes much longer than this interval.
The downside of non-blocking queues is the waste of CPU and memory bandwidth while polling an empty queue. This can be mitigated by designing the system so that the queue is rarely empty.
As already hinted at by commenters, a non-blocking queue is a very bad idea on single-CPU systems.
Answer 2:
(Supplemental)
As of Boost 1.54, you should be aware of the following type requirements:
`boost::lockfree::queue`
- T must have a copy constructor
- T must have a trivial assignment operator
- T must have a trivial destructor
`boost::lockfree::stack`
- T must have a copy constructor
`boost::lockfree::spsc_queue`
- T must have a default constructor
- T must be copyable
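These requirements can be verified at compile time with standard type traits. A hedged sketch (the `Payload` struct is a made-up example type, not from the original answer):

```cpp
#include <type_traits>

// Example element type intended for boost::lockfree::queue<Payload>.
// Plain aggregates of trivial members satisfy all three requirements.
struct Payload {
    int id;
    double value;
};

// Mirror the documented requirements as static_asserts, so a
// non-conforming type fails the build instead of failing deep
// inside the Boost headers.
static_assert(std::is_copy_constructible<Payload>::value,
              "T must have a copy constructor");
static_assert(std::is_trivially_assignable<Payload&, const Payload&>::value,
              "T must have a trivial assignment operator");
static_assert(std::is_trivially_destructible<Payload>::value,
              "T must have a trivial destructor");
```

A type with, say, a `std::string` member would fail the trivial-assignment and trivial-destructor checks, which is why lock-free queues are usually filled with small trivial structs or pointers.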
Answer 3:
You might also use a lock-free queue to avoid priority inversion in a real-time application.
For example, OpenSL on Android provides audio buffer queue callbacks on a high-priority thread. If this thread has to wait for locks held by a lower-priority thread, its high-priority scheduling counts for nothing, the callbacks become irregular, and you may start to underrun your audio buffer, which results in unpleasant popping sounds.
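The structure typically used in this situation is a single-producer/single-consumer ring buffer, which is what `boost::lockfree::spsc_queue` provides. A minimal sketch of the idea using only `std::atomic` (names and sizes are illustrative): the high-priority callback calls `try_pop()` and can never be blocked by a lock held on the producer side.

```cpp
#include <array>
#include <atomic>
#include <cstddef>

// Illustrative single-producer/single-consumer ring buffer.
// One slot is kept empty to distinguish "full" from "empty",
// so a ring of Capacity holds at most Capacity - 1 elements.
template <typename T, std::size_t Capacity>
class SpscRing {
public:
    bool try_push(const T& value) {
        const std::size_t head = head_.load(std::memory_order_relaxed);
        const std::size_t next = (head + 1) % Capacity;
        if (next == tail_.load(std::memory_order_acquire))
            return false;  // full: caller may drop or retry, but never blocks
        buffer_[head] = value;
        head_.store(next, std::memory_order_release);  // publish the element
        return true;
    }

    bool try_pop(T& out) {
        const std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return false;  // empty: the audio callback can fill silence instead
        out = buffer_[tail];
        tail_.store((tail + 1) % Capacity, std::memory_order_release);
        return true;
    }

private:
    std::array<T, Capacity> buffer_{};
    std::atomic<std::size_t> head_{0};  // written only by the producer
    std::atomic<std::size_t> tail_{0};  // written only by the consumer
};
```

Because neither side ever waits on the other, a low-priority producer being preempted cannot delay the high-priority consumer, which is exactly the priority-inversion scenario the answer describes.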
Answer 4:
The decision boils down to asking the question: "will lock contention be a problem for the task to be solved?"
Locking in a concurrent environment addresses two distinct concerns:
- correctness: ensure that the code actually does what is intended. Prevent interference from other threads.
- throughput/scalability: allow a sustained high "flow" of concurrent operations passing through the system, and let the system's performance scale up as more resources are added.
These two concerns are antagonistic goals. The classical approach is to protect a shared data structure with global locks. This ensures 100% correctness, but it prevents scaling up the performance, and especially the concurrency level, beyond a certain degree, since the shared locks lead to "traffic congestion".
Incidentally, you need to be careful when using the term "lock free". Strictly speaking, collaborative operations can never be 100% lock-free. But it is possible to arrange the collaboration cleverly, so as to reduce the impact of blocking to those participants that really need to access the same element concurrently.
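This last point can be seen in a compare-and-swap retry loop, the building block of most lock-free structures. A hedged sketch (the function name is made up for illustration): no thread ever holds a lock, but threads hitting the same element still interfere with each other and must retry.

```cpp
#include <atomic>

// "Lock-free" does not mean contention-free: when two threads update
// the same atomic, one of them loses the CAS race and retries.
// What it guarantees is that no thread can block the others by
// holding a lock while preempted.
int fetch_multiply(std::atomic<int>& target, int factor) {
    int expected = target.load(std::memory_order_relaxed);
    while (!target.compare_exchange_weak(expected, expected * factor,
                                         std::memory_order_acq_rel,
                                         std::memory_order_relaxed)) {
        // compare_exchange_weak reloaded `expected` with the current
        // value; compute the new result from it and try again.
    }
    return expected * factor;  // the value this thread installed
}
```

Only the threads that "really need to access the same element concurrently" pay this retry cost; threads working on different elements proceed undisturbed, which is the scalability win over a global lock.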
Source: https://stackoverflow.com/questions/16275112/boost-c-lock-free-queue-vs-shared-queue