Using disruptor

I. Steps for a consumer to read data

  • Register the consumer; registration returns a readable index, index_for_consumer_use, for that consumer;
  • Wait on the shared-memory ring buffer at index_for_consumer_use until that position becomes readable; the wait returns a new index, cursor, and every slot in [index_for_consumer_use, cursor] is then readable;
  • Call GetData(index) to fetch the user data from shared memory;
  • Call CommitRead() to update the slot for this id in the shared-memory status array, marking that data as consumed by this consumer;
  • Advance index_for_consumer_use and repeat the steps above;

II. Steps for a producer to write data

  • Call ClaimIndex to obtain the index of the next writable buffer slot;
  • Write the data;
  • Update the ring buffer's index member in shared memory (Commit);

In my view, this disruptor implementation has the following main defects:

  1. The producer has no interface for claiming several buffer slots at once. To write multiple entries in one go it must call ClaimIndex repeatedly, which is both inefficient and unfriendly to callers;
  2. ==Consumer ids must be registered starting from 0 and increasing by 1; they cannot be arbitrary. This restricts callers, and registering ids any other way can leave the producer-index lookup waiting forever. If id and index were stored as a key/value mapping, the array would not have to be scanned in a loop; and if an extra variable tracked the minimum consumer index (or the array were replaced with a set), fetching the producer index would not require traversing the whole consumer auxiliary array;==
  3. The multithreaded test program never deletes the shared memory when the process ends. TerminateRingBuffer should be called before exiting;
  4. ==A consumer must update the read index by hand after reading, and a producer must call a separate interface to update the write index after writing. In both cases there should be a single call that reads or writes the data and updates the index in one step. If callers must explicitly invoke a separate update call every time, some of them will inevitably forget it;==
  5. ==The library's SharedMemRingBuffer only supports the OneBufferData type. It should either be made a template, or its data field should be changed from int64_t to void * so it can carry data of any type;==

III. Test example

#include <iostream>
#include <atomic>
#include <thread>
#include <mutex>
#include <fstream>
#include <random>
#include <sstream>
#include <vector>   // std::vector (used in TestFunc)
#include <cstdlib>  // std::rand
#include <unistd.h> // usleep

#include "../../ring_buffer_on_shmem.hpp"
#include "../../shared_mem_manager.hpp"
#include "../../atomic_print.hpp"
#include "../../elapsed_time.hpp"

SharedMemRingBuffer g_shared_mem_ring_buffer(BLOCKING_WAIT);

void ThreadWorkWrite(std::string tid, size_t my_id) {
    int64_t my_index = -1;
    while (true) {
        OneBufferData my_data;
        my_index = g_shared_mem_ring_buffer.ClaimIndex(my_id);
        my_data.producer_id = my_id;
        my_data.data = std::rand() % 1000 + 99;
        g_shared_mem_ring_buffer.SetData(my_index, &my_data);
        g_shared_mem_ring_buffer.Commit(my_id, my_index);

        {
            std::stringstream ss;
            ss << "ThreadWorkWrite: id = " << my_id << ", data = " << my_data.data << ", commit index = " << my_index << std::endl;
            AtomicPrint ap(ss.str());
        }

        usleep(my_data.data % 100 + 1);
    }
}

void ThreadWorkRead(std::string tid, size_t my_id, int64_t index_for_consumer_use) {
    int64_t index = index_for_consumer_use;

    while (true) {
        int64_t ret_index = g_shared_mem_ring_buffer.WaitFor(my_id, index);
        for (int64_t i = index; i <= ret_index; ++i) {
            OneBufferData* pData = g_shared_mem_ring_buffer.GetData(i);
            {
                std::stringstream ss;
                ss << "ThreadWorkRead consumer: id = " << pData->producer_id << ", data = " << pData->data << std::endl;
                AtomicPrint ap(ss.str());
            }
            
            usleep(pData->data % 100 + 1);
            // step 4 of the read sequence: mark index i as consumed
            g_shared_mem_ring_buffer.CommitRead(my_id, i);
        }
        // everything up to ret_index has been read; advance past it
        // (the original `index++` would re-read slots already consumed)
        index = ret_index + 1;
    }
}


void TestFunc(size_t consumer_cap, size_t producer_cap) {
    std::vector<std::thread> consumer_threads;
    std::vector<std::thread> producer_threads;

    //01: consumer 
    // 01-1: index register
    std::vector<int64_t> vec_consumer_indexes;
    for ( size_t i = 0; i < consumer_cap; ++i ) {
        int64_t index_for_consumer_use = -1;

        if (i == 0 ) {
            // deliberately skip registering consumer 0 to demonstrate the
            // defect described below: its slot stays -1 and blocks producers
            continue;
        }

        if (!g_shared_mem_ring_buffer.RegisterConsumer(i, &index_for_consumer_use)) {
            return; 
        }

        {
            std::stringstream ss;
            ss << "index_for_consumer_use = " << index_for_consumer_use << std::endl;
            DEBUG_LOG(ss);
        }
        
        vec_consumer_indexes.push_back(index_for_consumer_use);

    }


    //02: run consumer threads.
    for (size_t i = 1; i < consumer_cap; ++i) {
        // consumer 0 was skipped, so registered indexes start at position 0
        consumer_threads.push_back(std::thread(ThreadWorkRead, "consumer", i, vec_consumer_indexes[i - 1]));
    }

    //03: run producer threads.
    for (size_t i = 0; i < producer_cap; ++i) {
        producer_threads.push_back(std::thread (ThreadWorkWrite, "producer", i));
    }

    for (size_t i = 0; i < producer_threads.size(); ++i) {
        producer_threads[i].join();
    }

    for (size_t i = 0; i < consumer_threads.size(); ++i) {
        consumer_threads[i].join();
    }

    g_shared_mem_ring_buffer.TerminateRingBuffer();
}

int main( int argc, char* argv[]) {
    //01: ring buffer capacity. 
    int64_t buffer_cap = 8;

    //02: init ring buffer
    if (!g_shared_mem_ring_buffer.InitRingBuffer(buffer_cap)) {
        printf("Init RingBuffer failed, process exiting...\n");
        return -1;
    }

    //03: test shared mem ring buffer.
    //consumer cap: 10, producer cap: 1
    //(matches the parameter order of TestFunc(consumer_cap, producer_cap))
    TestFunc(10, 1);

    return 0;
}

Because consumer slot 0 in the ring buffer's auxiliary array array_of_consumer_indexs is never registered, that slot's consumer index stays -1 forever. Once the producer index exceeds the buffer capacity, the minimum consumer index the producer fetches is therefore -1, so the producer blocks in an endless wait.
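The hang comes down to the gating computation the producer performs. A sketch of that computation (the array name is taken from the text above; the function name is hypothetical) shows that one unregistered slot pins the minimum at -1 no matter how far the registered consumers advance:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// The producer may only wrap past the slowest consumer, so it takes the
// minimum over all consumer slots. An unregistered slot is left at -1,
// so the minimum never advances and the producer waits forever once its
// write index laps the buffer capacity.
int64_t MinConsumerIndex(const int64_t* array_of_consumer_indexs, int n) {
    return *std::min_element(array_of_consumer_indexs,
                             array_of_consumer_indexs + n);
}
```

This is also why defect 2 proposes tracking the minimum separately or replacing the array with a keyed container: the gating value would then be defined only over consumers that actually registered.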