For work reasons I need to modify RocketMQ at the source level to implement some custom functionality, so I plan to read through the source code and keep notes as I go. Later on I intend to replace Netty with Aeron and compare the performance of the two.

RocketMQ consists of four main modules: NameSrv, Broker, Producer, and Consumer. The modules communicate with each other over Netty at the network layer, so RocketMQ's threading model is built around Netty's characteristics.

RocketMQ's Encapsulation of Netty

On top of native Netty, RocketMQ wraps its own server and client, NettyRemotingServer and NettyRemotingClient; the class relationships of this part are shown in the figure below. NettyRemotingAbstract defines the sequence of business operations performed after a message has been received over Netty and decoded, while NettyRemotingServer and NettyRemotingClient are interface wrappers around the Netty server and Netty client respectively.

[Figure: class relationship diagram of NettyRemotingAbstract, NettyRemotingServer, and NettyRemotingClient]

The discussion below assumes basic familiarity with Netty; if you have not used it before, it is worth getting to know the basics first.

When a Netty client or server receives a message, the data stream flows through every ChannelHandler defined on the pipeline. Taking the NettyRemotingServer above as an example, the following code creates six ChannelHandlers: handshakeHandler, encoder, NettyDecoder, IdleStateHandler, connectionManageHandler, and serverHandler:

ServerBootstrap childHandler =
            this.serverBootstrap.group(this.eventLoopGroupBoss, this.eventLoopGroupSelector)
                .channel(useEpoll() ? EpollServerSocketChannel.class : NioServerSocketChannel.class)
                .option(ChannelOption.SO_BACKLOG, 1024)
                .option(ChannelOption.SO_REUSEADDR, true)
                .option(ChannelOption.SO_KEEPALIVE, false)
                .childOption(ChannelOption.TCP_NODELAY, true)
                .childOption(ChannelOption.SO_SNDBUF, nettyServerConfig.getServerSocketSndBufSize())
                .childOption(ChannelOption.SO_RCVBUF, nettyServerConfig.getServerSocketRcvBufSize())
                .localAddress(new InetSocketAddress(this.nettyServerConfig.getListenPort()))
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    public void initChannel(SocketChannel ch) throws Exception {
                        ch.pipeline()
                            .addLast(defaultEventExecutorGroup, HANDSHAKE_HANDLER_NAME, handshakeHandler)
                            .addLast(defaultEventExecutorGroup,
                                encoder,
                                new NettyDecoder(),
                                new IdleStateHandler(0, 0, nettyServerConfig.getServerChannelMaxIdleTimeSeconds()),
                                connectionManageHandler,
                                serverHandler
                            );
                    }
                });
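The pipeline idea above, each inbound message flowing through a chain of handlers in registration order, can be sketched with plain Java and no Netty dependency. This is only an analogy: the classes and handler bodies here are illustrative stand-ins, not Netty or RocketMQ code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Plain-Java analogy of a Netty pipeline: handlers run in the order they
// were added, each receiving the output of the previous one.
public class PipelineDemo {

    // Each "handler" transforms the message and hands it to the next.
    static final List<UnaryOperator<String>> PIPELINE = new ArrayList<>();

    static String process(String msg) {
        for (UnaryOperator<String> handler : PIPELINE) {
            msg = handler.apply(msg);   // analogous to each handler's channelRead0 firing
        }
        return msg;
    }

    public static void main(String[] args) {
        PIPELINE.add(m -> m.trim());          // stands in for a decoder stripping framing
        PIPELINE.add(m -> m.toUpperCase());   // stands in for the business handler
        System.out.println(process("  send_message  "));  // SEND_MESSAGE
    }
}
```

In real Netty, ordering works the same way: a decoder added before the business handler guarantees the handler only ever sees decoded messages.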

These six ChannelHandlers form a processing chain that performs connection idle detection, encoding/decoding, and the actual business handling on the binary message stream. Whenever a ChannelHandler receives input, its channelRead0 method is triggered. For example, the code below is the NettyServerHandler implementation; the business logic that RocketMQ runs after receiving a message is in fact all rooted in this channelRead0 method:

@ChannelHandler.Sharable
    class NettyServerHandler extends SimpleChannelInboundHandler<RemotingCommand> {

        @Override
        protected void channelRead0(ChannelHandlerContext ctx, RemotingCommand msg) throws Exception {
            processMessageReceived(ctx, msg);
        }
    }

RocketMQ's Thread Organization

Each RocketMQ module receives multiple message types and applies different handling logic depending on the type. For each kind of handling logic, RocketMQ defines a processor object; each processor implements the handling logic for one or more message types, and each processor is bound to a thread pool that executes its code. The code below shows how the Broker binds processors to thread pools: the registerProcessor method specifies which message types a processor handles and which thread pool executes that processor's code, and this information is kept as key-value pairs in NettyRemotingServer's processorTable.

public void registerProcessor() {
        /**
         * SendMessageProcessor
         */
        SendMessageProcessor sendProcessor = new SendMessageProcessor(this);
        sendProcessor.registerSendMessageHook(sendMessageHookList);
        sendProcessor.registerConsumeMessageHook(consumeMessageHookList);

        this.remotingServer.registerProcessor(RequestCode.SEND_MESSAGE, sendProcessor, this.sendMessageExecutor);
        this.remotingServer.registerProcessor(RequestCode.SEND_MESSAGE_V2, sendProcessor, this.sendMessageExecutor);
        this.remotingServer.registerProcessor(RequestCode.SEND_BATCH_MESSAGE, sendProcessor, this.sendMessageExecutor);
        this.remotingServer.registerProcessor(RequestCode.CONSUMER_SEND_MSG_BACK, sendProcessor, this.sendMessageExecutor);
        this.fastRemotingServer.registerProcessor(RequestCode.SEND_MESSAGE, sendProcessor, this.sendMessageExecutor);
        this.fastRemotingServer.registerProcessor(RequestCode.SEND_MESSAGE_V2, sendProcessor, this.sendMessageExecutor);
        this.fastRemotingServer.registerProcessor(RequestCode.SEND_BATCH_MESSAGE, sendProcessor, this.sendMessageExecutor);

        // ... registrations for the remaining processors elided ...
    }
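The registration mechanism can be sketched in a few lines of stdlib-only Java. The names below are modeled on RocketMQ's (processorTable, registerProcessor) but the types are simplified stand-ins, and the request-code constants are illustrative:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Simplified model of how processorTable maps a request code to a
// (processor, thread pool) pair, falling back to a default pair when
// the code is unregistered.
public class ProcessorTableDemo {

    interface RequestProcessor { String process(String request); }

    static final Map<Integer, SimpleEntry<RequestProcessor, ExecutorService>> processorTable = new HashMap<>();
    static SimpleEntry<RequestProcessor, ExecutorService> defaultPair;

    static void registerProcessor(int code, RequestProcessor p, ExecutorService e) {
        processorTable.put(code, new SimpleEntry<>(p, e));
    }

    static SimpleEntry<RequestProcessor, ExecutorService> lookup(int code) {
        SimpleEntry<RequestProcessor, ExecutorService> matched = processorTable.get(code);
        return matched == null ? defaultPair : matched;   // mirrors the fallback in processRequestCommand
    }

    public static void main(String[] args) {
        ExecutorService sendMessageExecutor = Executors.newFixedThreadPool(2);
        defaultPair = new SimpleEntry<>(req -> "default:" + req, sendMessageExecutor);

        // One processor can serve several request codes, as SendMessageProcessor does above.
        RequestProcessor sendProcessor = req -> "sent:" + req;
        registerProcessor(10, sendProcessor, sendMessageExecutor);   // illustrative code for SEND_MESSAGE
        registerProcessor(310, sendProcessor, sendMessageExecutor);  // illustrative code for SEND_MESSAGE_V2

        System.out.println(lookup(10).getKey().process("hello"));   // sent:hello
        System.out.println(lookup(999).getKey().process("hello"));  // default:hello
        sendMessageExecutor.shutdown();
    }
}
```

Binding the pool at registration time is what lets RocketMQ isolate workloads: a slow consumer-management request cannot starve the send-message pool.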

Once the processor and thread pool for each message type have been registered, an incoming message can be matched by its type to the corresponding processor, and the handling task is submitted to that processor's thread pool. This logic lives in the processRequestCommand method of NettyRemotingAbstract. The method also defines a callback class, RemotingResponseCallback: once the processor's handling code finishes, the callback method is invoked to write the result back to the sender.

public void processRequestCommand(final ChannelHandlerContext ctx, final RemotingCommand cmd) {
        final Pair<NettyRequestProcessor, ExecutorService> matched = this.processorTable.get(cmd.getCode());
        final Pair<NettyRequestProcessor, ExecutorService> pair = null == matched ? this.defaultRequestProcessor : matched;
        final int opaque = cmd.getOpaque();

        if (pair != null) {
            Runnable run = new Runnable() {
                @Override
                public void run() {
                    try {
                        doBeforeRpcHooks(RemotingHelper.parseChannelRemoteAddr(ctx.channel()), cmd);
                        final RemotingResponseCallback callback = new RemotingResponseCallback() {
                            @Override
                            public void callback(RemotingCommand response) {
                                doAfterRpcHooks(RemotingHelper.parseChannelRemoteAddr(ctx.channel()), cmd, response);
                                if (!cmd.isOnewayRPC()) {
                                    if (response != null) {
                                        response.setOpaque(opaque);
                                        response.markResponseType();
                                        try {
                                            ctx.writeAndFlush(response);
                                        } catch (Throwable e) {
                                            log.error("process request over, but response failed", e);
                                            log.error(cmd.toString());
                                            log.error(response.toString());
                                        }
                                    } else {
                                    }
                                }
                            }
                        };
                        if (pair.getObject1() instanceof AsyncNettyRequestProcessor) {
                            AsyncNettyRequestProcessor processor = (AsyncNettyRequestProcessor)pair.getObject1();
                            processor.asyncProcessRequest(ctx, cmd, callback);
                        } else {
                            NettyRequestProcessor processor = pair.getObject1();
                            RemotingCommand response = processor.processRequest(ctx, cmd);
                            doAfterRpcHooks(RemotingHelper.parseChannelRemoteAddr(ctx.channel()), cmd, response);
                            callback.callback(response);
                        }
                    } catch (Throwable e) {
                        log.error("process request exception", e);
                        log.error(cmd.toString());

                        if (!cmd.isOnewayRPC()) {
                            final RemotingCommand response = RemotingCommand.createResponseCommand(RemotingSysResponseCode.SYSTEM_ERROR,
                                RemotingHelper.exceptionSimpleDesc(e));
                            response.setOpaque(opaque);
                            ctx.writeAndFlush(response);
                        }
                    }
                }
            };

            if (pair.getObject1().rejectRequest()) {
                final RemotingCommand response = RemotingCommand.createResponseCommand(RemotingSysResponseCode.SYSTEM_BUSY,
                    "[REJECTREQUEST]system busy, start flow control for a while");
                response.setOpaque(opaque);
                ctx.writeAndFlush(response);
                return;
            }

            try {
                final RequestTask requestTask = new RequestTask(run, ctx.channel(), cmd);
                pair.getObject2().submit(requestTask);
            } catch (RejectedExecutionException e) {
                if ((System.currentTimeMillis() % 10000) == 0) {
                    log.warn(RemotingHelper.parseChannelRemoteAddr(ctx.channel())
                        + ", too many requests and system thread pool busy, RejectedExecutionException "
                        + pair.getObject2().toString()
                        + " request code: " + cmd.getCode());
                }

                if (!cmd.isOnewayRPC()) {
                    final RemotingCommand response = RemotingCommand.createResponseCommand(RemotingSysResponseCode.SYSTEM_BUSY,
                        "[OVERLOAD]system busy, start flow control for a while");
                    response.setOpaque(opaque);
                    ctx.writeAndFlush(response);
                }
            }
        } else {
            String error = " request type " + cmd.getCode() + " not supported";
            final RemotingCommand response =
                RemotingCommand.createResponseCommand(RemotingSysResponseCode.REQUEST_CODE_NOT_SUPPORTED, error);
            response.setOpaque(opaque);
            ctx.writeAndFlush(response);
            log.error(RemotingHelper.parseChannelRemoteAddr(ctx.channel()) + error);
        }
    }
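The dispatch step above, wrap the request in a Runnable, submit it to the matched executor, and write the result back through a callback, can be reduced to the following stdlib-only sketch. All names here are illustrative stand-ins, not RocketMQ classes:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.TimeUnit;

// Simplified model of processRequestCommand: the processing happens on the
// processor's own thread pool, and the response travels back via a callback.
public class DispatchDemo {

    interface ResponseCallback { void callback(String response); }

    static void processRequestCommand(ExecutorService executor, String cmd,
                                      ResponseCallback writeBack) {
        Runnable run = () -> {
            String response = "processed:" + cmd;   // stands in for processor.processRequest(...)
            writeBack.callback(response);           // stands in for ctx.writeAndFlush(response)
        };
        try {
            executor.submit(run);
        } catch (RejectedExecutionException e) {
            // Mirrors the flow-control branch: a busy pool answers immediately.
            writeBack.callback("[OVERLOAD]system busy");
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        CompletableFuture<String> wire = new CompletableFuture<>();  // stands in for the channel
        processRequestCommand(executor, "SEND_MESSAGE", wire::complete);
        System.out.println(wire.get(1, TimeUnit.SECONDS));  // processed:SEND_MESSAGE
        executor.shutdown();
    }
}
```

The key property this preserves from the real code is that the Netty I/O thread returns immediately after the submit: the potentially slow business logic never blocks the event loop.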

Summary

To sum up, the overall flow by which RocketMQ receives a message, processes it, and returns the result is shown in the figure below. The threading model of each RocketMQ module is fairly simple: a number of ExecutorService instances are defined to run the business-handling code for each category of message, the handling code itself lives in the corresponding processor, and each processor is bound to its executing thread pool at module startup and never rebound during its lifetime. The concrete processor definitions of each module will be covered in later source-code walkthroughs.

[Figure: flow of message receiving, processing, and response in RocketMQ]