Question
According to cppreference.com:
The thread that intends to modify the variable has to
- acquire a std::mutex (typically via std::lock_guard)
- perform the modification while the lock is held
- execute notify_one or notify_all on the std::condition_variable (the lock does not need to be held for notification)
Even if the shared variable is atomic, it must be modified under the mutex in order to correctly publish the modification to the waiting thread.
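For reference, a minimal sketch of the pattern cppreference describes might look like this (the names g_mtx, g_cv, g_dataReady, producer and consumer are just illustrative):

#include <condition_variable>
#include <mutex>

static std::mutex g_mtx;
static std::condition_variable g_cv;
static bool g_dataReady = false;   // shared state, protected by g_mtx

// Waiting side: checks the predicate and waits while holding the mutex.
void consumer()
{
    std::unique_lock<std::mutex> lock(g_mtx);
    g_cv.wait(lock, [] { return g_dataReady; });
    // ... consume the data while the lock is held, or copy it out ...
}

// Modifying side: updates the shared state under the mutex, then notifies.
void producer()
{
    {
        std::lock_guard<std::mutex> lock(g_mtx);
        g_dataReady = true;        // modification happens while the lock is held
    }
    g_cv.notify_one();             // notification may happen after the lock is released
}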
I don't quite understand why modifying an atomic variable requires a lock. Please see the following code snippet:
#include <atomic>
#include <condition_variable>
#include <mutex>

static std::atomic_bool s_run {true};
static std::atomic_bool s_hasEvent {false};
static std::mutex s_mtx;
static std::condition_variable s_cv;

// Thread A - the consumer thread
void threadA()
{
    while (s_run)
    {
        {
            std::unique_lock<std::mutex> lock(s_mtx);
            s_cv.wait(lock, [] {
                return s_hasEvent.load(std::memory_order_relaxed);
            });
        }
        // process event (lockfree_queue and Event are defined elsewhere)
        Event event = lockfree_queue.pop();
        // ..... code to process the event ....
    }
}

// Thread B - the publisher thread
void PushEvent(Event event)
{
    lockfree_queue.push(event);
    s_hasEvent.store(true, std::memory_order_release);
    s_cv.notify_one();
}
In the PushEvent function, I do not acquire s_mtx because s_hasEvent is an atomic variable and the queue is lock-free. What is the problem with not acquiring the s_mtx lock?
Answer 1:
As noted in Yakk's answer to the question you linked to, it is to protect against this sequence of events causing a missed wake-up:
1. Thread A locks the mutex.
2. Thread A calls the lambda's closure, which does s_hasEvent.load(std::memory_order_relaxed) and returns the value false.
3. Thread A is interrupted by the scheduler and Thread B starts to run.
4. Thread B pushes an event into the queue and stores true to s_hasEvent.
5. Thread B runs s_cv.notify_one().
6. Thread B is interrupted by the scheduler and Thread A runs again.
7. Thread A evaluates the false result returned by the closure, deciding there are no pending events.
8. Thread A blocks on the condition variable, waiting for an event.
This means the notify_one() call has been missed, and the condition variable will block even though there is an event ready in the queue.
If the update to the shared variable is done while the mutex is locked, then it is not possible for step 4 to happen between steps 2 and 7, so the condition variable's check for events gets a consistent result. With a mutex used by both the publisher and the consumer, either the store to s_hasEvent happens before step 1 (and so the closure loads the value true and the consumer never blocks on the condition variable) or it happens after step 8 (and so the notify_one() call will wake it up).
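A rough sketch of the fix this implies (reusing the names from the question, with Event and lockfree_queue still assumed to be defined elsewhere): take s_mtx around the store to s_hasEvent; the notification itself can stay outside the lock.

void PushEvent(Event event)
{
    lockfree_queue.push(event);
    {
        // Because the waiting thread checks the predicate only while holding
        // s_mtx, taking the lock here means this store cannot happen between
        // its predicate check and its call to wait() (steps 2-7 above).
        std::lock_guard<std::mutex> lock(s_mtx);
        s_hasEvent.store(true, std::memory_order_release);
    }
    s_cv.notify_one();   // the lock does not need to be held for the notification
}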
Answer 2:
Found a very good explanation about this issue in another thread. Take a look at the part near the end headed "Questions were asked below about race conditions. If the data being communicated is atomic, can't we do without the mutex on the 'send' side?"
Source: https://stackoverflow.com/questions/41867228/why-do-i-need-to-acquire-a-lock-to-modify-a-shared-atomic-variable-before-noti