How to make a multiple-read/single-write lock from more basic synchronization primitives?

[愿得一人] 2020-11-29 17:45

We have found several spots in our code where concurrent reads of data protected by a mutex are rather common, while writes are rare. Our measurements seem to show that using a plain mutex seriously limits performance under these read-heavy workloads. How can we build a multiple-reader/single-writer lock out of more basic synchronization primitives?

8 Answers
  • 2020-11-29 17:56

    Concurrent reads of data protected by a mutex are rather common, while writes are rare

    That sounds like an ideal scenario for User-space RCU:

    URCU is similar to its Linux-kernel counterpart, providing a replacement for reader-writer locking, among other uses. This similarity continues with readers not synchronizing directly with RCU updaters, thus making RCU read-side code paths exceedingly fast, while furthermore permitting RCU readers to make useful forward progress even when running concurrently with RCU updaters—and vice versa.
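
    For a sense of what that looks like in practice, here is a minimal sketch of the classic liburcu read/update pattern (assuming the unprefixed <urcu.h> API and a single writer; the Config struct and function names are hypothetical):

    #include <urcu.h>   // userspace-rcu; link with -lurcu
    
    struct Config { int value; };
    static Config* global_cfg = new Config{0};
    
    void reader()
    {
        rcu_register_thread();              // once per reader thread
        rcu_read_lock();                    // extremely cheap read-side entry
        Config* cfg = rcu_dereference(global_cfg);
        int v = cfg->value;                 // consistent snapshot, no blocking
        (void)v;
        rcu_read_unlock();
        rcu_unregister_thread();
    }
    
    void writer(int new_value)              // assumes only one writer thread
    {
        Config* fresh = new Config{new_value};
        Config* old  = global_cfg;
        rcu_assign_pointer(global_cfg, fresh);  // publish the new version
        synchronize_rcu();                      // wait out pre-existing readers
        delete old;                             // now safe to reclaim
    }
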

  • 2020-11-29 17:56

    As always, the best solution will depend on the details. A read-write spin lock may be what you're looking for, but other approaches such as read-copy-update (suggested above) might also work, though on an old embedded platform the extra memory RCU uses might be an issue. With rare writes I often arrange the work using a tasking system such that writes can only occur when there are no reads from that data structure, but this is algorithm-dependent. A sketch of the spin-lock idea follows.
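
    To make the first suggestion concrete, here is a minimal read-write spin-lock sketch using C++11 atomics; the class and member names are my own, not from any particular library. It busy-waits instead of blocking, so it is only sensible for very short critical sections:

    #include <atomic>
    
    class RWSpinLock {
        // state_ >= 0: number of active readers; state_ == -1: one writer.
        std::atomic<int> state_{0};
    public:
        void lock_shared() {
            for (;;) {
                int s = state_.load(std::memory_order_relaxed);
                if (s >= 0 &&
                    state_.compare_exchange_weak(s, s + 1,
                                                 std::memory_order_acquire))
                    return;                     // reader admitted
            }
        }
        void unlock_shared() {
            state_.fetch_sub(1, std::memory_order_release);
        }
        void lock() {
            for (;;) {
                int expected = 0;               // only acquirable when idle
                if (state_.compare_exchange_weak(expected, -1,
                                                 std::memory_order_acquire))
                    return;                     // writer admitted
            }
        }
        void unlock() {
            state_.store(0, std::memory_order_release);
        }
    };
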

  • 2020-11-29 17:58

    This is a simplified answer based on my Boost headers (I would call Boost an approved way). It only requires Condition Variables and Mutexes. I rewrote it using Windows primitives because I find them descriptive and very simple, but view this as Pseudocode.

    This is a very simple solution, which does not support things like mutex upgrading, or try_lock() operations. I can add those if you want. I also took out some frills like disabling interrupts that aren't strictly necessary.

    Also, it's worth checking out boost\thread\pthread\shared_mutex.hpp (this being based on that). It's human-readable.

    class SharedMutex {
      CRITICAL_SECTION m_state_mutex;
      CONDITION_VARIABLE m_shared_cond;
      CONDITION_VARIABLE m_exclusive_cond;
    
      size_t shared_count;
      bool exclusive;
    
      // This causes write blocks to prevent further read blocks
      bool exclusive_waiting_blocked;
    
    public:
      SharedMutex()
        : shared_count(0), exclusive(false), exclusive_waiting_blocked(false)
      {
        InitializeConditionVariable (&m_shared_cond);
        InitializeConditionVariable (&m_exclusive_cond);
        InitializeCriticalSection (&m_state_mutex);
      }
    
      ~SharedMutex()
      {
        DeleteCriticalSection (&m_state_mutex);
        // Win32 condition variables do not need explicit destruction.
      }
    
      // Write lock
      void lock(void)
      {
        EnterCriticalSection (&m_state_mutex);
        while (shared_count > 0 || exclusive)
        {
          exclusive_waiting_blocked = true;
          SleepConditionVariableCS (&m_exclusive_cond, &m_state_mutex, INFINITE);
        }
        // This thread now 'owns' the mutex
        exclusive = true;
    
        LeaveCriticalSection (&m_state_mutex);
      }
    
      void unlock(void)
      {
        EnterCriticalSection (&m_state_mutex);
        exclusive = false;
        exclusive_waiting_blocked = false;
        LeaveCriticalSection (&m_state_mutex);
        // Waiting readers and one waiting writer now race for the lock.
        WakeConditionVariable (&m_exclusive_cond);
        WakeAllConditionVariable (&m_shared_cond);
      }
    
      // Read lock
      void lock_shared(void)
      {
        EnterCriticalSection (&m_state_mutex);
        while (exclusive || exclusive_waiting_blocked)
        {
          SleepConditionVariableCS (&m_shared_cond, &m_state_mutex, INFINITE);
        }
        ++shared_count;
        LeaveCriticalSection (&m_state_mutex);
      }
    
      void unlock_shared(void)
      {
        EnterCriticalSection (&m_state_mutex);
        --shared_count;
    
        if (shared_count == 0)
        {
          exclusive_waiting_blocked = false;
          LeaveCriticalSection (&m_state_mutex);
          WakeConditionVariable (&m_exclusive_cond);
          WakeAllConditionVariable (&m_shared_cond);
        }
        else
        {
          LeaveCriticalSection (&m_state_mutex);
        }
      }
    };
    

    Behavior

    Okay, there is some confusion about the behavior of this algorithm, so here is how it works.

    During a Write Lock - Both readers and writers are blocked.

    At the end of a Write Lock - Reader threads and one writer thread will race to see which one starts.

    During a Read Lock - Writers are blocked. Readers are also blocked if and only if a Writer is blocked.

    At the release of the final Read Lock - Reader threads and one writer thread will race to see which one starts.

    This could cause readers to starve writers if the processor frequently context-switches over to an m_shared_cond thread before an m_exclusive_cond thread during notification, but I suspect that issue is more theoretical than practical, since this is Boost's algorithm.
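
    For completeness, here is a small (hypothetical) smoke test of the class above, assuming a Windows toolchain where std::thread can be mixed with the Win32 primitives:

    #include <thread>
    #include <vector>
    
    SharedMutex sm;    // the class above
    int value = 0;     // data protected by sm
    
    void reader_fn() {
        sm.lock_shared();          // many readers may overlap here
        int v = value;
        (void)v;
        sm.unlock_shared();
    }
    
    void writer_fn() {
        sm.lock();                 // excludes readers and other writers
        ++value;
        sm.unlock();
    }
    
    int main() {
        std::vector<std::thread> threads;
        for (int i = 0; i < 4; ++i) threads.emplace_back(reader_fn);
        threads.emplace_back(writer_fn);
        for (auto& t : threads) t.join();
        return value == 1 ? 0 : 1;  // the single write must be visible
    }
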

  • 2020-11-29 18:10

    Now that Microsoft has opened up the .NET source code, you can look at their ReaderWriterLockSlim implementation.

    I'm not sure the more basic primitives they use are available to you; some of them are part of the .NET library itself, and their code is also available.

    Microsoft has spent quite a lot of time improving the performance of its locking mechanisms, so this can be a good starting point.

  • 2020-11-29 18:15

    At first glance I thought I recognized this answer as the same algorithm that Alexander Terekhov introduced. But after studying it I believe that it is flawed. It is possible for two writers to simultaneously wait on m_exclusive_cond. When one of those writers wakes and obtains the exclusive lock, it will set exclusive_waiting_blocked = false on unlock, thus setting the mutex into an inconsistent state. After that, the mutex is likely hosed.

    N2406, which first proposed std::shared_mutex, contains a partial implementation, repeated below with updated syntax.

    #include <climits>             // CHAR_BIT
    #include <condition_variable>
    #include <mutex>
    
    using std::condition_variable;
    using std::lock_guard;
    using std::mutex;
    using std::try_to_lock;
    using std::unique_lock;
    
    class shared_mutex
    {
        mutex    mut_;
        condition_variable gate1_;
        condition_variable gate2_;
        unsigned state_;
    
        static const unsigned write_entered_ = 1U << (sizeof(unsigned)*CHAR_BIT - 1);
        static const unsigned n_readers_ = ~write_entered_;
    
    public:
    
        shared_mutex() : state_(0) {}
    
    // Exclusive ownership
    
        void lock();
        bool try_lock();
        void unlock();
    
    // Shared ownership
    
        void lock_shared();
        bool try_lock_shared();
        void unlock_shared();
    };
    
    // Exclusive ownership
    
    void
    shared_mutex::lock()
    {
        unique_lock<mutex> lk(mut_);
        while (state_ & write_entered_)
            gate1_.wait(lk);
        state_ |= write_entered_;
        while (state_ & n_readers_)
            gate2_.wait(lk);
    }
    
    bool
    shared_mutex::try_lock()
    {
        unique_lock<mutex> lk(mut_, try_to_lock);
        if (lk.owns_lock() && state_ == 0)
        {
            state_ = write_entered_;
            return true;
        }
        return false;
    }
    
    void
    shared_mutex::unlock()
    {
        {
        lock_guard<mutex> _(mut_);
        state_ = 0;
        }
        gate1_.notify_all();
    }
    
    // Shared ownership
    
    void
    shared_mutex::lock_shared()
    {
        unique_lock<mutex> lk(mut_);
        while ((state_ & write_entered_) || (state_ & n_readers_) == n_readers_)
            gate1_.wait(lk);
        unsigned num_readers = (state_ & n_readers_) + 1;
        state_ &= ~n_readers_;
        state_ |= num_readers;
    }
    
    bool
    shared_mutex::try_lock_shared()
    {
        unique_lock<mutex> lk(mut_, try_to_lock);
        unsigned num_readers = state_ & n_readers_;
        if (lk.owns_lock() && !(state_ & write_entered_) && num_readers != n_readers_)
        {
            ++num_readers;
            state_ &= ~n_readers_;
            state_ |= num_readers;
            return true;
        }
        return false;
    }
    
    void
    shared_mutex::unlock_shared()
    {
        lock_guard<mutex> _(mut_);
        unsigned num_readers = (state_ & n_readers_) - 1;
        state_ &= ~n_readers_;
        state_ |= num_readers;
        if (state_ & write_entered_)
        {
            if (num_readers == 0)
                gate2_.notify_one();
        }
        else
        {
            if (num_readers == n_readers_ - 1)
                gate1_.notify_one();
        }
    }
    

    The algorithm is derived from an old newsgroup posting of Alexander Terekhov. It starves neither readers nor writers.

    There are two "gates", gate1_ and gate2_. Readers and writers have to pass gate1_, and can get blocked in trying to do so. Once a reader gets past gate1_, it has read-locked the mutex. Readers can get past gate1_ as long as there are not a maximum number of readers with ownership, and as long as a writer has not gotten past gate1_.

    Only one writer at a time can get past gate1_. And a writer can get past gate1_ even if readers have ownership. But once past gate1_, a writer still does not have ownership. It must first get past gate2_. A writer can not get past gate2_ until all readers with ownership have relinquished it. Recall that new readers can't get past gate1_ while a writer is waiting at gate2_. And neither can a new writer get past gate1_ while a writer is waiting at gate2_.

    The characteristic that both readers and writers are blocked at gate1_ with (nearly) identical requirements imposed to get past it, is what makes this algorithm fair to both readers and writers, starving neither.
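
    For illustration, here is a minimal usage sketch of the class above; read_value/write_value and the protected integer are hypothetical names of mine, not part of N2406:

    shared_mutex sm;        // the class defined above
    int shared_value = 0;   // data protected by sm
    
    int read_value()
    {
        sm.lock_shared();           // may be held by many readers at once
        int v = shared_value;
        sm.unlock_shared();
        return v;
    }
    
    void write_value(int v)
    {
        sm.lock();                  // passes gate1_, then waits at gate2_
        shared_value = v;
        sm.unlock();
    }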

    The mutex "state" is intentionally kept in a single word so as to suggest that the partial use of atomics (as an optimization) for certain state changes is a possibility (i.e. for an uncontended "fast path"). However, that optimization is not demonstrated here. One example: if a writer thread could atomically change state_ from 0 to write_entered_, then it would obtain the lock without having to block or even lock/unlock mut_. And unlock() could be implemented with an atomic store. Etc. These optimizations are not shown herein because they are much harder to implement correctly than this simple description makes it sound.
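
    As a rough sketch of just that uncontended fast path (the function name is hypothetical, state_ would have to become a std::atomic<unsigned>, and integrating this with the blocking slow path is exactly the hard part the previous paragraph warns about):

    #include <atomic>
    #include <climits>
    
    inline bool try_fast_write_lock(std::atomic<unsigned>& state)
    {
        constexpr unsigned write_entered =
            1U << (sizeof(unsigned) * CHAR_BIT - 1);
        unsigned expected = 0;
        // A single CAS from 0 to write_entered takes the lock with no
        // mutex traffic at all.
        return state.compare_exchange_strong(expected, write_entered,
                                             std::memory_order_acquire);
    }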

  • 2020-11-29 18:16

    It seems you only have mutex and condition_variable as synchronization primitives, so here is a reader-writer lock that starves readers (i.e. it prefers writers). It uses one mutex, two condition_variables, and three integers:

    active_readers  - the readers currently holding the read lock
    waiting_writers - the writers waiting on writerQ, plus the one currently writing
    active_writers  - the writer currently writing; can only be 1 or 0
    

    It starves readers in the following way: if several writers want to write, readers never get the chance to read until all writers have finished writing, because later readers must wait while waiting_writers is non-zero. Meanwhile, active_writers guarantees that only one writer can write at a time.

    #include <condition_variable>
    #include <mutex>
    
    class RWLock {
    public:
        RWLock()
        : shared()
        , readerQ(), writerQ()
        , active_readers(0), waiting_writers(0), active_writers(0)
        {}
    
        void ReadLock() {
            std::unique_lock<std::mutex> lk(shared);
            while( waiting_writers != 0 )   // writers are preferred
                readerQ.wait(lk);
            ++active_readers;
            lk.unlock();
        }
    
        void ReadUnlock() {
            std::unique_lock<std::mutex> lk(shared);
            --active_readers;
            lk.unlock();
            // Harmless if readers remain: the writer re-checks its predicate.
            writerQ.notify_one();
        }
    
        void WriteLock() {
            std::unique_lock<std::mutex> lk(shared);
            ++waiting_writers;
            while( active_readers != 0 || active_writers != 0 )
                writerQ.wait(lk);
            ++active_writers;
            lk.unlock();
        }
    
        void WriteUnlock() {
            std::unique_lock<std::mutex> lk(shared);
            --waiting_writers;
            --active_writers;
            if(waiting_writers > 0)         // hand off to the next writer
                writerQ.notify_one();
            else                            // no writers left: release readers
                readerQ.notify_all();
            lk.unlock();
        }
    
    private:
        std::mutex              shared;
        std::condition_variable readerQ;
        std::condition_variable writerQ;
        int                     active_readers;
        int                     waiting_writers;
        int                     active_writers;
    };
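
    One usage note: because ReadLock/ReadUnlock are raw calls, an early return or exception can leave the lock held. A hypothetical pair of RAII guards (my names, not part of the answer above) avoids that:

    // RAII wrappers for the RWLock above; non-copyable by design.
    struct ReadGuard {
        explicit ReadGuard(RWLock& l) : l_(l) { l_.ReadLock(); }
        ~ReadGuard() { l_.ReadUnlock(); }
        ReadGuard(const ReadGuard&) = delete;
        ReadGuard& operator=(const ReadGuard&) = delete;
        RWLock& l_;
    };
    
    struct WriteGuard {
        explicit WriteGuard(RWLock& l) : l_(l) { l_.WriteLock(); }
        ~WriteGuard() { l_.WriteUnlock(); }
        WriteGuard(const WriteGuard&) = delete;
        WriteGuard& operator=(const WriteGuard&) = delete;
        RWLock& l_;
    };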
    