Lock-free cache implementation in C++11

Submitted by 半城伤御伤魂 on 2019-12-22 12:01:53

Question


Is there any way in C++11 to implement a lock-free cache for an object, which would be safe to access from multiple threads? The calculation I'm looking to cache isn't super cheap but also isn't super expensive, so requiring a lock would defeat the purpose of caching in my case. IIUC, std::atomic isn't guaranteed to be lock-free.

Edit: Since calculate isn't too expensive, I actually don't mind if it runs once or twice too many times. But I do need to make sure all consumers get the correct value. In the naive example below, that isn't guaranteed: due to memory reordering, a thread could read an uninitialized m_val after another thread has set m_alreadyCalculated to true but hasn't written m_val yet.

Edit2: The comments below point out that for basic types, std::atomic will probably be lock-free. Assuming it is, what is the correct way, in the example below, to use C++11's memory ordering to make sure m_alreadyCalculated cannot be set to true before m_val's value is set?

Non-thread-safe cache example:

class C {
public:
   C(int param) : m_param(param) {}

   double getValue() {
      if (!m_alreadyCalculated) {
          m_val = calculate(m_param);
          m_alreadyCalculated = true;
      }
      return m_val;
   }

   double calculate(int param) {
       // Some calculation
   }

private:
   int m_param;
   double m_val;
   bool m_alreadyCalculated = false;
};

Answer 1:


Consider something like this:

#include <atomic>

class C {
public:
   explicit C(int param) : m_param(param) {}

   double getValue() {
      if (alreadyCalculated == true)
         return m_val;

      bool expected = false;
      if (calculationInProgress.compare_exchange_strong(expected, true)) {
         m_val = calculate(m_param);
         alreadyCalculated = true;
      // calculationInProgress = false;          // original buggy line, see UPDATE 2
      }
      else {
     //  while (calculationInProgress == true)   // original buggy line, see UPDATE 2
         while (alreadyCalculated == false)
            ; // spin until the winning thread has published m_val
      }
      return m_val;
   }

private:
   double calculate(int param);   // as in the question

   int m_param;
   double m_val;
   std::atomic<bool> alreadyCalculated {false};
   std::atomic<bool> calculationInProgress {false};
};

It's not in fact lock-free; there is a spin lock inside. But I think you cannot avoid such a lock if you don't want calculate() to run in multiple threads.

getValue() gets more complicated here, but the important part is that once m_val has been calculated, the first if statement always returns it immediately. A short usage sketch follows.
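
A hedged usage sketch, just to show the intended concurrent access pattern. It assumes the class above is completed with the constructor and the calculate() declaration as shown; the calculate() body here is a placeholder, not part of the answer:

#include <cstdio>
#include <thread>
#include <vector>

double C::calculate(int param) {
    return param * 0.5;   // placeholder body for the sketch
}

int main() {
    C cache(42);
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
        workers.emplace_back([&cache] {
            // At most one thread runs calculate(); the rest spin briefly or
            // return the cached value immediately.
            std::printf("%f\n", cache.getValue());
        });
    for (auto& t : workers)
        t.join();
}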

UPDATE

For performance reasons, it might also be a good idea to pad the whole class to a cache-line size.
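
A minimal sketch of such padding, hard-coding an assumed 64-byte cache line (std::hardware_destructive_interference_size only exists from C++17 on):

#include <atomic>

// Align the cached data to a (here assumed) 64-byte cache line so it does
// not share a line with unrelated, frequently written data (false sharing).
struct alignas(64) PaddedCacheLine {
    double m_val;
    std::atomic<bool> alreadyCalculated {false};
    std::atomic<bool> calculationInProgress {false};
};

// sizeof is always a multiple of the alignment, so the struct occupies
// whole cache lines on its own.
static_assert(sizeof(PaddedCacheLine) == 64, "fits in one assumed 64-byte line");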

UPDATE 2

There was a bug in the original answer; thanks to JVApen for pointing it out (the affected lines are marked by comments). The variable calculationInProgress would also be better renamed to something like calculationHasStarted.

Also, please note that this solution assumes that calculate() does not throw an exception.




Answer 2:


std::atomic is not guaranteed to be lock-free, though you can check std::atomic<T>::is_lock_free() or std::atomic<T>::is_always_lock_free (the latter is C++17) to see whether your implementation can do this lock-free.
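
For example, such a check could look like this (is_lock_free() is the C++11 runtime query; the is_always_lock_free constant only arrived in C++17):

#include <atomic>
#include <iostream>

int main() {
    std::atomic<double> val {0.0};
    std::atomic<bool> flag {false};

    // Runtime queries, available in C++11.
    std::cout << "atomic<double> lock-free: " << val.is_lock_free() << '\n';
    std::cout << "atomic<bool>   lock-free: " << flag.is_lock_free() << '\n';

#if defined(__cpp_lib_atomic_is_always_lock_free)   // C++17 and later
    static_assert(std::atomic<bool>::is_always_lock_free,
                  "expect a lock-free flag on this target");
#endif
}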

Another approach could be std::call_once; however, from my understanding this is even worse, as it is explicitly meant to block the other threads.
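
For reference, a call_once version would look roughly like this (class and member names are illustrative, and calculate() is a placeholder; the blocking happens inside std::call_once while the first caller runs the calculation):

#include <mutex>

class OnceCache {
public:
    explicit OnceCache(int param) : m_param(param) {}

    double getValue() {
        // All other callers block here until the first call has finished.
        std::call_once(m_once, [this] { m_val = calculate(m_param); });
        return m_val;
    }

private:
    double calculate(int param) { return param * 0.5; }   // placeholder

    int m_param;
    double m_val;
    std::once_flag m_once;
};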

So, in this case, I would go with std::atomic for both m_val and alreadyCalculated, which carries the risk that two (or more) threads calculate the same result.
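
A minimal sketch of that variant, assuming calculate() is deterministic and cheap enough to be repeated (the class name and the placeholder calculate() body are illustrative):

#include <atomic>

class RacyCache {
public:
    explicit RacyCache(int param) : m_param(param) {}

    double getValue() {
        // Acquire pairs with the release below: if the flag reads true,
        // the store to m_val is guaranteed to be visible as well.
        if (alreadyCalculated.load(std::memory_order_acquire))
            return m_val.load(std::memory_order_relaxed);

        // Several threads may get here and repeat the work; they all
        // store the same value, so the duplication is harmless.
        double v = calculate(m_param);
        m_val.store(v, std::memory_order_relaxed);
        alreadyCalculated.store(true, std::memory_order_release);
        return v;
    }

private:
    double calculate(int param) { return param * 0.5; }   // placeholder

    int m_param;
    std::atomic<double> m_val {0.0};
    std::atomic<bool> alreadyCalculated {false};
};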




Answer 3:


Just to answer the one technical question here: to make sure the value is updated before the flag, you store the flag with release semantics. Release semantics means this store must be seen as happening after all preceding writes. On x86 it amounts to a compiler barrier before the store and making sure the store actually goes to memory rather than staying in a register, like this:

asm volatile("":::"memory");                    // compiler barrier
*(volatile bool*)&m_alreadyCalculated = true;   // store forced out to memory

And this is exactly what an atomic store with release semantics does.
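
In portable C++11 the same pairing is written with explicit memory orderings. A sketch, assuming only one thread ever writes m_val (e.g. the compare_exchange winner from answer 1); the function names are illustrative:

#include <atomic>

double m_val;
std::atomic<bool> m_alreadyCalculated {false};

// Writer side: write the value first, then publish the flag with release.
void publish(double v) {
    m_val = v;
    m_alreadyCalculated.store(true, std::memory_order_release);
}

// Reader side: an acquire load that returns true guarantees the preceding
// write to m_val is visible, so reading it afterwards is safe.
bool tryRead(double& out) {
    if (m_alreadyCalculated.load(std::memory_order_acquire)) {
        out = m_val;
        return true;
    }
    return false;
}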



Source: https://stackoverflow.com/questions/36239880/lock-free-cache-implementation-in-c11
