This article is based on Netty 4.1. When using Netty for server-side development, it is common to define both an I/O thread pool and a business thread pool. The I/O thread pool, as the name suggests, handles network connections and the events on each Channel (lightweight handlers such as heartbeats and codecs can usually stay on the I/O threads). If time-consuming business logic also shares the I/O thread pool, the throughput of the whole service can suffer significantly (I have run into this in practice), so in production it is advisable to define a separate business thread pool. The rest of this article shows how to use a business thread pool and explains how it processes handler logic under the hood.

Here is a simple example of initializing a Netty server:


import java.net.InetSocketAddress;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.timeout.IdleStateHandler;
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;

public class NettyServer {
    public static void main(String[] args) throws Exception {
        new NettyServer().start("127.0.0.1", 8081);
    }

    public void start(String host, int port) throws Exception {
        ExecutorService executorService = Executors.newCachedThreadPool();
        EventLoopGroup bossGroup = new NioEventLoopGroup(0, executorService); // boss I/O pool: accepts client connections, then hands them to the worker pool
        EventLoopGroup workerGroup = new NioEventLoopGroup(0, executorService); // worker I/O pool: handles read/write events on established channels
        EventExecutorGroup businessGroup = new DefaultEventExecutorGroup(2); // business thread pool
        ServerBootstrap server = new ServerBootstrap(); // server bootstrap helper
        server.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class)
                .option(ChannelOption.SO_BACKLOG, 1024).childHandler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel ch) throws Exception {
                ChannelPipeline pipeline = ch.pipeline();
                pipeline.addLast("idleStateHandler", new IdleStateHandler(0, 0, 3));
                pipeline.addLast(businessGroup, new ServerHandler());
            }
        });
        server.childOption(ChannelOption.TCP_NODELAY, true);
        server.childOption(ChannelOption.SO_RCVBUF, 32 * 1024);
        server.childOption(ChannelOption.SO_SNDBUF, 32 * 1024);
        InetSocketAddress addr = new InetSocketAddress(host, port);
        server.bind(addr).sync().channel(); // bind to the address and start the server
    }
}
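
The example references a ServerHandler that is not shown in the original article. A minimal hypothetical sketch might look like the following; because it is added to the pipeline together with businessGroup, the (potentially blocking) logic in channelRead runs on a business thread rather than on an I/O thread:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class ServerHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        // Runs on the executor pinned from businessGroup, so blocking calls
        // (database access, RPC, etc.) do not stall the I/O event loop.
        Object response = handleBusinessLogic(msg);
        ctx.writeAndFlush(response);
    }

    private Object handleBusinessLogic(Object msg) {
        // Placeholder for the real (possibly slow) business processing.
        return msg;
    }
}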

This article focuses on how the business thread pool is used, so other Netty topics are not covered here. In the example, initChannel() initializes a Channel and adds handlers to the Channel's pipeline, forming a processing chain; for the ServerHandler handler we pass in the business thread pool. Let's look at the logic of addLast():


@Override
public final ChannelPipeline addLast(EventExecutorGroup executor, ChannelHandler... handlers) {
    if (handlers == null) {
        throw new NullPointerException("handlers");
    }
    for (ChannelHandler h: handlers) {
        if (h == null) {
            break;
        }
        addLast(executor, null, h);
    }
    return this;
}

@Override
public final ChannelPipeline addLast(EventExecutorGroup group, String name, ChannelHandler handler) {
    final AbstractChannelHandlerContext newCtx;
    synchronized (this) {
        checkMultiplicity(handler);
        newCtx = newContext(group, filterName(name, handler), handler);
        addLast0(newCtx);
        // If the registered is false it means that the channel was not registered on an eventLoop yet.
        // In this case we add the context to the pipeline and add a task that will call
        // ChannelHandler.handlerAdded(...) once the channel is registered.
        if (!registered) {
            newCtx.setAddPending();
            callHandlerCallbackLater(newCtx, true);
            return this;
        }
        EventExecutor executor = newCtx.executor();
        if (!executor.inEventLoop()) {
            callHandlerAddedInEventLoop(newCtx, executor);
            return this;
        }
    }
    callHandlerAdded0(newCtx);
    return this;
}

The addLast() method wraps the handler into an AbstractChannelHandlerContext (the pipeline is a linked list of these contexts) and appends it to the processing chain; the executor assignment happens inside newContext(). Here is the key part:

private AbstractChannelHandlerContext newContext(EventExecutorGroup group, String name, ChannelHandler handler) {
    return new DefaultChannelHandlerContext(this, childExecutor(group), name, handler);
}

private EventExecutor childExecutor(EventExecutorGroup group) {
    if (group == null) {
        return null;
    }
    Boolean pinEventExecutor = channel.config().getOption(ChannelOption.SINGLE_EVENTEXECUTOR_PER_GROUP);
    // Should a single event executor be pinned per group for this channel? (defaults to true)
    if (pinEventExecutor != null && !pinEventExecutor) {
        return group.next();
    }
    Map<EventExecutorGroup, EventExecutor> childExecutors = this.childExecutors;
    if (childExecutors == null) {
        // Use size of 4 as most people only use one extra EventExecutor.
        childExecutors = this.childExecutors = new IdentityHashMap<EventExecutorGroup, EventExecutor>(4);
    }
    // Pin one of the child executors once and remember it so that the same child executor
    // is used to fire events for the same channel.
    EventExecutor childExecutor = childExecutors.get(group);
    if (childExecutor == null) {
        childExecutor = group.next();
        childExecutors.put(group, childExecutor);
    }
    return childExecutor;
}

childExecutor(group) above assigns an EventExecutor from the group to this handler for executing its events; the group is the businessGroup passed in during initialization. childExecutor() first checks whether one executor should be pinned per group for this channel (ChannelOption.SINGLE_EVENTEXECUTOR_PER_GROUP, true by default). When it is true, all handlers of the same channel that are added with the same group share one pinned EventExecutor; when it is false, each handler simply gets whatever group.next() returns, so handlers of the same channel may end up on different executors. With pinning enabled, the pipeline keeps a small IdentityHashMap from group to pinned executor: the first time a group is seen, group.next() picks an executor from the group (by default the chooser hands them out in round-robin order) and remembers it, so later handlers added with the same group reuse that executor. Next, let's look at how the EventExecutor processes its tasks.
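
If the pinning behavior is not wanted, it can be turned off via the child option checked in the code above. A minimal sketch, reusing the server bootstrap variable from the first example:

// Disable pinning: every handler added with an EventExecutorGroup gets whatever
// group.next() returns, instead of one executor pinned per group per channel.
server.childOption(ChannelOption.SINGLE_EVENTEXECUTOR_PER_GROUP, false);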

For a DefaultEventExecutorGroup, the EventExecutor handed out is a DefaultEventExecutor, which extends SingleThreadEventExecutor (DefaultEventLoop runs an equivalent loop). Its run() method is as follows:


@Override
protected void run() {
    for (;;) {
        Runnable task = takeTask();
        if (task != null) {
            task.run();
            updateLastExecutionTime();
        }

        if (confirmShutdown()) {
            break;
        }
    }
}

As you can see, the executor runs an endless loop that keeps fetching tasks and executing them. The takeTask() method is defined in its parent class, SingleThreadEventExecutor:

protected Runnable takeTask() {
    assert inEventLoop();
    if (!(taskQueue instanceof BlockingQueue)) {
        throw new UnsupportedOperationException();
    }

    BlockingQueue<Runnable> taskQueue = (BlockingQueue<Runnable>) this.taskQueue;
    for (;;) {
        ScheduledFutureTask<?> scheduledTask = peekScheduledTask();
        if (scheduledTask == null) {
            Runnable task = null;
            try {
                task = taskQueue.take();
                if (task == WAKEUP_TASK) {
                    task = null;
                }
            } catch (InterruptedException e) {
                // Ignore
            }
            return task;
        } else {
            long delayNanos = scheduledTask.delayNanos();
            Runnable task = null;
            if (delayNanos > 0) {
                try {
                    task = taskQueue.poll(delayNanos, TimeUnit.NANOSECONDS);
                } catch (InterruptedException e) {
                    // Waken up.
                    return null;
                }
            }
            if (task == null) {
                // We need to fetch the scheduled tasks now as otherwise there may be a chance that
                // scheduled tasks are never executed if there is always one task in the taskQueue.
                // This is for example true for the read task of OIO Transport
                // See https://github.com/netty/netty/issues/1614
                fetchFromScheduledTaskQueue();
                task = taskQueue.poll();
            }

            if (task != null) {
                return task;
            }
        }
    }
}


The code above shows tasks being taken from a queue. When an event for ServerHandler is triggered on the channel and the handler's executor is not the thread currently firing the event (here, the I/O thread), the pipeline wraps the event invocation into a task and puts it into that executor's task queue (a BlockingQueue), where the loop above picks it up and runs it.
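
The hand-off happens in AbstractChannelHandlerContext. The following is a lightly simplified sketch (paraphrased, not a verbatim copy of the Netty 4.1 source) of how an inbound channelRead event is dispatched to the handler's executor:

static void invokeChannelRead(final AbstractChannelHandlerContext next, final Object msg) {
    // (the real code also passes msg through pipeline.touch() for leak detection)
    EventExecutor executor = next.executor();
    if (executor.inEventLoop()) {
        // The calling thread is already this handler's executor: invoke directly.
        next.invokeChannelRead(msg);
    } else {
        // Otherwise wrap the invocation into a task and hand it to the handler's
        // executor; it lands in the BlockingQueue that takeTask() above polls.
        executor.execute(new Runnable() {
            @Override
            public void run() {
                next.invokeChannelRead(msg);
            }
        });
    }
}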


Summary:

1. When a business thread pool is used, the same handler of the same Channel is always bound to the same EventExecutor, so each handler's tasks can be thought of as being executed by a single thread.

2. Handler tasks are decoupled via a BlockingQueue, and only one thread processes the tasks of a given handler, so the tasks of the same handler on the same Channel are executed in order. This provides the same ordering guarantee that Netty 3's OrderedMemoryAwareThreadPoolExecutor did.

3. Try not to run business logic on the I/O threads, as this hurts the throughput of the service. (I once implemented an HTTP interface with Netty without defining a business thread pool: internally the application processed requests quickly, yet callers kept timing out. The I/O threads were occupied for long stretches, so requests queued up waiting to be served, and the callers timed out.)

4. Use an EventExecutorGroup for the business thread pool; EventLoopGroup is intended for the I/O threads and carries additional networking machinery. A small sketch of creating a business pool follows below.
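
As a hypothetical example (the thread count and thread-name prefix are made up), the business pool can be created with a custom thread factory so its threads are easy to spot in thread dumps:

// DefaultEventExecutorGroup and DefaultThreadFactory live in io.netty.util.concurrent.
// 16 single-threaded executors, threads named "business-*".
EventExecutorGroup businessGroup =
        new DefaultEventExecutorGroup(16, new DefaultThreadFactory("business"));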