A Simple Implementation of Read-Write Locks


layout: post
title: Read-Write Locks
categories: cpp_concurrency
description: An introduction to C++ concurrency
keywords: c++, concurrency, read-write locks


  • Read-write lock implementation approaches
  • Fair locks
  • Read-priority / write-priority
  • Implementing a write-priority lock with the C++ standard library
  • Boost's read-write lock

Read-write locks come in several flavors: fair locks, read-priority locks, write-priority locks, priority locks, and so on. Linux provides the pthread_rwlock family of functions as a read-write lock implementation, and Boost provides shared_lock as a helper class for its read-write lock. The C++11 standard library does not offer a read-write lock (std::shared_mutex only arrived in C++17), but one is easy to build from mutex and condition_variable.

  • boost::shared_lock
  • std::unique_lock
  • std::lock_guard
  • pthread_rwlock_init
  • pthread_rwlock_destroy
  • pthread_rwlock_rdlock
  • pthread_rwlock_wrlock
  • pthread_rwlock_unlock

Read-Write Lock Implementation Approaches

  • Fair lock: use a queue to manage the waiters, first come first served
  • Read-priority: suited to read-heavy, write-light workloads; as long as read requests keep arriving, write requests wait indefinitely
  • Write-priority: the opposite of read-priority; as long as a write request is pending, all reads are blocked
  • Priority lock: each request carries a priority, and higher-priority requests acquire the resource first. A set ordered by priority can manage the pending requests (a minimal sketch follows this list)
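
The priority-lock idea can be sketched with the standard library alone. This is only an illustration: the type priority_mutex and its lock(prio)/unlock interface are invented here, and higher numbers are assumed to mean higher priority. A std::set keyed on (-priority, ticket) keeps the waiters ordered, so the head of the set is always the next thread allowed in.

#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <set>
#include <utility>

class priority_mutex
{
public:
    void lock(int prio)
    {
        std::unique_lock<std::mutex> lk(m_mutex);
        // Negate the priority so higher priority sorts first;
        // the ticket breaks ties in FIFO order.
        auto key = std::make_pair(-prio, m_next_ticket++);
        m_waiters.insert(key);
        m_cv.wait(lk, [&]() {
            return !m_held && *m_waiters.begin() == key;
        });
        m_waiters.erase(key);
        m_held = true;
    }

    void unlock()
    {
        std::lock_guard<std::mutex> lk(m_mutex);
        m_held = false;
        m_cv.notify_all(); // every waiter re-checks; only the head proceeds
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    std::set<std::pair<int, std::uint64_t>> m_waiters;
    std::uint64_t m_next_ticket{0};
    bool m_held{false};
};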

Fair Locks

In fact, the mutex provided by the standard library is already a kind of fair lock, in the loose sense that the thread woken on unlock is effectively random, so no thread is systematically favored. If strict fairness is required, a queue can be used to manage the waiters, so that only the waiter at the head of the queue may acquire the resource. Linux actually already implements a fair lock: when initializing a pthread_mutex you can pass a mutexattr parameter, which supports the following types:

  • PTHREAD_MUTEX_TIMED_NP: the default, an ordinary lock. While one thread holds the lock, the other requesting threads form a wait queue and acquire the lock in turn after it is released. This policy keeps resource allocation fair.
  • PTHREAD_MUTEX_RECURSIVE_NP: a recursive lock. The same thread may successfully acquire the same lock multiple times and releases it with a matching number of unlocks. Other threads re-compete for the lock when the holding thread unlocks.
  • PTHREAD_MUTEX_ERRORCHECK_NP: an error-checking lock. If a thread requests a lock it already holds, EDEADLK is returned; otherwise it behaves like PTHREAD_MUTEX_TIMED_NP. This rules out the simplest kind of deadlock when recursive locking is not allowed.
  • PTHREAD_MUTEX_ADAPTIVE_NP: an adaptive lock with the simplest behavior: waiters simply re-compete after the lock is released.

So it suffices to specify the PTHREAD_MUTEX_TIMED_NP attribute when creating the mutex.
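
A minimal sketch of doing so (glibc-specific, since the _NP constants are GNU extensions; compile with _GNU_SOURCE defined and -pthread):

#include <pthread.h>

int main()
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    // PTHREAD_MUTEX_TIMED_NP is the glibc default ("normal") mutex type
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_TIMED_NP);

    pthread_mutex_t mtx;
    pthread_mutex_init(&mtx, &attr);
    pthread_mutexattr_destroy(&attr); // the mutex keeps its own copy

    pthread_mutex_lock(&mtx);
    // ... critical section ...
    pthread_mutex_unlock(&mtx);

    pthread_mutex_destroy(&mtx);
    return 0;
}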

Read-Priority / Write-Priority

Read-priority and write-priority are essentially the same mechanism. Linux also provides a read-write lock for threads, and its behavior can likewise be selected via an attribute:

int pthread_rwlock_init(pthread_rwlock_t *restrict rwlock,
              const pthread_rwlockattr_t *restrict attr);

attr offers three choices (a usage sketch follows the list):

  • PTHREAD_RWLOCK_PREFER_READER_NP (the default): readers have priority, which can starve writers
  • PTHREAD_RWLOCK_PREFER_WRITER_NP: writers have priority, but a long-standing bug makes it behave exactly like PTHREAD_RWLOCK_PREFER_READER_NP
  • PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP: writers have priority, but writers must not acquire the lock recursively
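
A minimal usage sketch (again glibc-specific: pthread_rwlockattr_setkind_np is a GNU extension, so compile with _GNU_SOURCE and -pthread):

#include <pthread.h>

int main()
{
    pthread_rwlockattr_t attr;
    pthread_rwlockattr_init(&attr);
    // Ask for writer preference; the non-recursive variant is the one
    // that actually behaves as documented.
    pthread_rwlockattr_setkind_np(&attr,
        PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);

    pthread_rwlock_t rwlock;
    pthread_rwlock_init(&rwlock, &attr);
    pthread_rwlockattr_destroy(&attr);

    pthread_rwlock_rdlock(&rwlock);   // shared (read) lock
    pthread_rwlock_unlock(&rwlock);

    pthread_rwlock_wrlock(&rwlock);   // exclusive (write) lock
    pthread_rwlock_unlock(&rwlock);

    pthread_rwlock_destroy(&rwlock);
    return 0;
}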

Implementing a Write-Priority Lock with the C++ Standard Library

Here we implement a write-priority lock using C++'s mutex and condition_variable, satisfying the following rules:

  • While a read lock is held, other threads may still acquire the read lock
  • While the write lock is held, no other thread may acquire either the read lock or the write lock
  • When a writer is waiting, it is woken in preference to readers
class write_priotity_lock
{
public:
    void read_lock()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_read_cv.wait(lock,[this](){
            return this->m_write_count == 0;
        });

        m_read_count++;
    }

    void write_lock()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_write_count++;
        m_write_cv.wait(lock,[this](){
            return this->m_write_count <= 1 && this->m_read_count == 0;
        });
    }

    void read_release()
    {
        std::unique_lock<std::mutex> lock(m_mutex); // <-- locking here is required!
        --m_read_count;
        if(m_read_count == 0 && m_write_count > 0)
        {
            m_write_cv.notify_one();
        }
    }

    void write_release()
    {
        std::unique_lock<std::mutex> lock(m_mutex); // <-- locking here is required!
        --m_write_count;
        if(m_write_count >= 1)
        {
            m_write_cv.notify_one(); // wake one waiting writer
        }
        else
        {
            m_read_cv.notify_all(); // wake all waiting readers
        }
    }

private:
    std::condition_variable m_write_cv;
    std::condition_variable m_read_cv;
    int32_t m_read_count{0};
    int32_t m_write_count{0};
    std::mutex m_mutex;
};

Implementing a Write-Priority Lock with the C++ Standard Library -- V2

This version fixes a deadlock in V1 caused by the wait condition when several writers queue up: each writer increments m_write_count before waiting, so with two or more waiting writers the predicate m_write_count <= 1 can never become true and all of them block forever. The changes:

  1. Add the variable m_write_own, which records whether some caller currently owns the write lock
  2. Use the variable m_write_wait_count, the number of callers currently waiting for the write lock
  3. Change the atomic counters to plain variables, since they are already protected by the mutex
  4. Hold the mutex while calling notify, which is more in line with common concurrency practice


#include <thread>
#include <vector>
#include <iostream>
#include <chrono>
#include <mutex>
#include <condition_variable>

#define DEBUG_WR_LOCK 1

class write_priotity_lock
{
public:
    void read_lock()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_read_cv.wait(lock, [this]()
                       { return !this->m_write_own && this->m_write_wait_count == 0; });

        m_read_count++;
        print_debug("read_lock");
    }

    void print_debug(const char *msg) const
    {
#if DEBUG_WR_LOCK
        auto timestamp = std::chrono::steady_clock::now().time_since_epoch().count();
        std::cout << timestamp << "," << msg << ",read count:" << m_read_count << ",write own:" << std::boolalpha << m_write_own << ",write wait count:" << m_write_wait_count << std::endl;
#endif
    }

    void write_lock()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_write_wait_count++;
        m_write_cv.wait(lock, [this]()
                        { return !this->m_write_own && this->m_read_count == 0; });
        m_write_own = true;
        m_write_wait_count--;
        print_debug("write_lock");
    }

    void read_release()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        --m_read_count;
        print_debug("read_release");
        if (m_read_count == 0 && m_write_wait_count > 0)
        {
            m_write_cv.notify_one();
        }
    }

    void write_release()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_write_own = false;
        if (m_write_wait_count >= 1)
        {
            m_write_cv.notify_one();
        }
        else
        {
            m_read_cv.notify_all();
        }
        print_debug("write_release");
    }

private:
    std::condition_variable m_write_cv;
    std::condition_variable m_read_cv;
    int32_t m_read_count{0};
    int32_t m_write_wait_count{0};
    bool m_write_own{false};
    std::mutex m_mutex;
};

int main()
{
    write_priotity_lock wp;

    std::vector<std::thread> threads;
    threads.emplace_back([&wp]()
                         { wp.read_lock(); std::this_thread::sleep_for(std::chrono::seconds(1)); wp.read_release(); });

    threads.emplace_back([&wp]()
                         {
                            std::this_thread::sleep_for(std::chrono::milliseconds(500));
                            wp.read_lock();  wp.read_release(); });

    threads.emplace_back([&wp]()
                         { std::this_thread::sleep_for(std::chrono::seconds(1));
                         wp.write_lock(); wp.write_release(); });

    threads.emplace_back([&wp]()
                         { std::this_thread::sleep_for(std::chrono::seconds(1));
                         wp.read_lock(); wp.read_release(); });

    threads.emplace_back([&wp]()
                         { std::this_thread::sleep_for(std::chrono::seconds(1));
                         wp.write_lock(); wp.write_release(); });

    threads.emplace_back([&wp]()
                         { std::this_thread::sleep_for(std::chrono::seconds(1));
                         wp.write_lock();  wp.write_release(); });

    for (auto &t : threads)
    {
        t.join();
    }
    wp.print_debug("main thread");

    return 0;
}

The test results are as follows:

[Figure: console output of the test run]

Boost's Read-Write Lock

Boost's shared mutex is actually quite simple: it is composed of an ordinary mutex and condition variables. shared_mutex maintains two condition variables to decide whether reading or writing is currently possible, and locks and unlocks according to the following rules (a usage sketch follows the list):

  • Write flag: the highest bit of an unsigned int state word; 1 means the write lock is taken
  • Reader count: all bits below the highest; a nonzero value means read locks are held, and the value of those bits is the number of readers
  • Write locking: the write lock can be acquired only when neither the write lock nor any read lock is held; otherwise the caller waits
  • Read locking: a read lock can be acquired whenever the write lock is not held; otherwise the caller waits. Read locks nest: each acquisition increments the reader count
  • Lock release: releasing the write lock clears the state and wakes every waiter on gate1_; releasing the last read lock while a writer is waiting wakes that writer via gate2_ (see the source below)
  • shared_lock: boost::shared_mutex can be used with boost::unique_lock, boost::lock_guard, and boost::shared_lock. The only difference is that shared_lock acquires the read lock, while the others acquire the write lock.
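
A rough usage sketch of that pairing (the variable and function names here are invented for illustration):

#include <boost/thread/locks.hpp>
#include <boost/thread/shared_mutex.hpp>

boost::shared_mutex rw_mutex;
int shared_data = 0;

int reader()
{
    // shared_lock takes the read (shared) lock; many readers may hold it
    boost::shared_lock<boost::shared_mutex> lk(rw_mutex);
    return shared_data;
}

void writer(int value)
{
    // unique_lock (or lock_guard) takes the write (exclusive) lock
    boost::unique_lock<boost::shared_mutex> lk(rw_mutex);
    shared_data = value;
}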

Known issues:

  • Writer starvation: a writer must wait until every read lock has been released, so if other threads keep acquiring read locks while one thread tries to take the write lock, the writer can starve. By this implementation, Boost's shared lock is a read-priority lock.
  • Wake-up choice: once a would-be writer blocks on its condition variable, a thread releasing the write lock has no way to know whether any writer is waiting, because shared_mutex keeps no count of waiting writers, so it can only wake all waiters. A better approach is to count pending readers and writers separately and use those counts to decide which condition variable to notify; with such counts it is also easy to build read- or write-priority locks. A write-priority read-write lock along these lines can be found in 《读写锁、自旋锁、信号量的CPP11实现》.
namespace boost {
  namespace thread_v2 {

    class shared_mutex
    {
      typedef boost::mutex              mutex_t;
      typedef boost::condition_variable cond_t;
      typedef unsigned                  count_t;

      mutex_t mut_;
      cond_t  gate1_;
      // the gate2_ condition variable is only used by functions that
      // have taken write_entered_ but are waiting for no_readers()
      cond_t  gate2_;
      count_t state_;

      static const count_t write_entered_ = 1U << (sizeof(count_t)*CHAR_BIT - 1);
      static const count_t n_readers_ = ~write_entered_;

      bool no_writer() const
      {
        return (state_ & write_entered_) == 0;
      }

      bool one_writer() const
      {
        return (state_ & write_entered_) != 0;
      }

      bool no_writer_no_readers() const
      {
        //return (state_ & write_entered_) == 0 &&
        //       (state_ & n_readers_) == 0;
        return state_ == 0;
      }

      bool no_writer_no_max_readers() const
      {
        return (state_ & write_entered_) == 0 &&
               (state_ & n_readers_) != n_readers_;
      }

      bool no_readers() const
      {
        return (state_ & n_readers_) == 0;
      }

      bool one_or_more_readers() const
      {
        return (state_ & n_readers_) > 0;
      }

      shared_mutex(shared_mutex const&);
      shared_mutex& operator=(shared_mutex const&);

    public:
      shared_mutex();
      ~shared_mutex();

      // Exclusive ownership

      void lock();
      bool try_lock();
#ifdef BOOST_THREAD_USES_CHRONO
      template <class Rep, class Period>
      bool try_lock_for(const boost::chrono::duration<Rep, Period>& rel_time)
      {
        return try_lock_until(chrono::steady_clock::now() + rel_time);
      }
      template <class Clock, class Duration>
      bool try_lock_until(
          const boost::chrono::time_point<Clock, Duration>& abs_time);
#endif
#if defined BOOST_THREAD_USES_DATETIME
      template<typename T>
      bool timed_lock(T const & abs_or_rel_time);
#endif
      void unlock();

      // Shared ownership

      void lock_shared();
      bool try_lock_shared();
#ifdef BOOST_THREAD_USES_CHRONO
      template <class Rep, class Period>
      bool try_lock_shared_for(const boost::chrono::duration<Rep, Period>& rel_time)
      {
        return try_lock_shared_until(chrono::steady_clock::now() + rel_time);
      }
      template <class Clock, class Duration>
      bool try_lock_shared_until(
          const boost::chrono::time_point<Clock, Duration>& abs_time);
#endif
#if defined BOOST_THREAD_USES_DATETIME
      template<typename T>
      bool timed_lock_shared(T const & abs_or_rel_time);
#endif
      void unlock_shared();
    };

    inline shared_mutex::shared_mutex()
    : state_(0)
    {
    }

    inline shared_mutex::~shared_mutex()
    {
      boost::lock_guard<mutex_t> _(mut_);
    }

    // Exclusive ownership

    inline void shared_mutex::lock()
    {
      boost::unique_lock<mutex_t> lk(mut_);
      gate1_.wait(lk, boost::bind(&shared_mutex::no_writer, boost::ref(*this)));
      state_ |= write_entered_;
      gate2_.wait(lk, boost::bind(&shared_mutex::no_readers, boost::ref(*this)));
    }

    inline bool shared_mutex::try_lock()
    {
      boost::unique_lock<mutex_t> lk(mut_);
      if (!no_writer_no_readers())
      {
        return false;
      }
      state_ = write_entered_;
      return true;
    }

#ifdef BOOST_THREAD_USES_CHRONO
    template <class Clock, class Duration>
    bool shared_mutex::try_lock_until(
        const boost::chrono::time_point<Clock, Duration>& abs_time)
    {
      boost::unique_lock<mutex_t> lk(mut_);
      if (!gate1_.wait_until(lk, abs_time, boost::bind(
            &shared_mutex::no_writer, boost::ref(*this))))
      {
        return false;
      }
      state_ |= write_entered_;
      if (!gate2_.wait_until(lk, abs_time, boost::bind(
            &shared_mutex::no_readers, boost::ref(*this))))
      {
        state_ &= ~write_entered_;
        return false;
      }
      return true;
    }
#endif

#if defined BOOST_THREAD_USES_DATETIME
    template<typename T>
    bool shared_mutex::timed_lock(T const & abs_or_rel_time)
    {
      boost::unique_lock<mutex_t> lk(mut_);
      if (!gate1_.timed_wait(lk, abs_or_rel_time, boost::bind(
            &shared_mutex::no_writer, boost::ref(*this))))
      {
        return false;
      }
      state_ |= write_entered_;
      if (!gate2_.timed_wait(lk, abs_or_rel_time, boost::bind(
            &shared_mutex::no_readers, boost::ref(*this))))
      {
        state_ &= ~write_entered_;
        return false;
      }
      return true;
    }
#endif

    inline void shared_mutex::unlock()
    {
      boost::lock_guard<mutex_t> _(mut_);
      BOOST_ASSERT(one_writer());
      BOOST_ASSERT(no_readers());
      state_ = 0;
      // notify all since multiple *lock_shared*() calls may be able
      // to proceed in response to this notification
      gate1_.notify_all();
    }

    // Shared ownership

    inline void shared_mutex::lock_shared()
    {
      boost::unique_lock<mutex_t> lk(mut_);
      gate1_.wait(lk, boost::bind(&shared_mutex::no_writer_no_max_readers, boost::ref(*this)));
      count_t num_readers = (state_ & n_readers_) + 1;
      state_ &= ~n_readers_;
      state_ |= num_readers;
    }

    inline bool shared_mutex::try_lock_shared()
    {
      boost::unique_lock<mutex_t> lk(mut_);
      if (!no_writer_no_max_readers())
      {
        return false;
      }
      count_t num_readers = (state_ & n_readers_) + 1;
      state_ &= ~n_readers_;
      state_ |= num_readers;
      return true;
    }

#ifdef BOOST_THREAD_USES_CHRONO
    template <class Clock, class Duration>
    bool shared_mutex::try_lock_shared_until(
        const boost::chrono::time_point<Clock, Duration>& abs_time)
    {
      boost::unique_lock<mutex_t> lk(mut_);
      if (!gate1_.wait_until(lk, abs_time, boost::bind(
            &shared_mutex::no_writer_no_max_readers, boost::ref(*this))))
      {
        return false;
      }
      count_t num_readers = (state_ & n_readers_) + 1;
      state_ &= ~n_readers_;
      state_ |= num_readers;
      return true;
    }
#endif

#if defined BOOST_THREAD_USES_DATETIME
    template<typename T>
    bool shared_mutex::timed_lock_shared(T const & abs_or_rel_time)
    {
      boost::unique_lock<mutex_t> lk(mut_);
      if (!gate1_.timed_wait(lk, abs_or_rel_time, boost::bind(
            &shared_mutex::no_writer_no_max_readers, boost::ref(*this))))
      {
        return false;
      }
      count_t num_readers = (state_ & n_readers_) + 1;
      state_ &= ~n_readers_;
      state_ |= num_readers;
      return true;
    }
#endif

    inline void shared_mutex::unlock_shared()
    {
      boost::lock_guard<mutex_t> _(mut_);
      BOOST_ASSERT(one_or_more_readers());
      count_t num_readers = (state_ & n_readers_) - 1;
      state_ &= ~n_readers_;
      state_ |= num_readers;
      if (no_writer())
      {
        if (num_readers == n_readers_ - 1)
          gate1_.notify_one();
      }
      else
      {
        if (num_readers == 0)
          gate2_.notify_one();
      }
    }

  }  // thread_v2
}  // boost