This post looks at the BIO, NIO, and AIO models in Java, and at Netty.
Let's first clarify a few concepts: synchronous vs. asynchronous, and blocking vs. non-blocking.
The usual kettle example: A goes to boil water. While waiting for the water to boil, if A does nothing but wait, that is blocking; if A can go do other things in the meantime, that is non-blocking. Once the water boils, who takes over? If it is still A, the operation is synchronous; if B handles it instead, it is asynchronous.
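To make the analogy concrete, here is a minimal, self-contained sketch (the class name BoilWaterDemo and the one-second sleep are purely illustrative): the first call blocks the caller until the "water" is ready, while the CompletableFuture version hands the work to another thread and runs a callback once it completes.

import java.util.concurrent.CompletableFuture;

public class BoilWaterDemo {

    // Synchronous + blocking: A does nothing else until the water boils
    static String boilBlocking() throws InterruptedException {
        Thread.sleep(1000); // "waiting for the water to boil"
        return "water is ready";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(boilBlocking()); // the caller waits here

        // Asynchronous: B (a pool thread) boils the water and invokes A's callback when done
        CompletableFuture
                .supplyAsync(() -> "water is ready (boiled by B)")
                .thenAccept(System.out::println) // callback runs when the result is available
                .join();                         // join only so this demo does not exit early
    }
}

The non-blocking-but-synchronous combination is what the NIO selector loop below looks like: the caller keeps polling, but still processes the result itself.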
1. BIO (Blocking IO)
BIO is blocking, and the blocking shows up in several places, for example the accept() call below:
public class BIOServer {
public static void main(String[] args) {
try {
ServerSocket ss = new ServerSocket();
ss.bind(new InetSocketAddress("127.0.0.1", 8000));
System.out.println("Socket Server has been established now at 127.0.0.1:8000");
while (true) {
// BIO's accept() is blocking: execution continues only when a socket connects; otherwise it sits here waiting
Socket accept = ss.accept();
// Once a connection is established, BIO reads and writes are blocking as well: if the previous
// socket blocks on I/O, the server cannot accept any other connection in the meantime.
// The commented-out single-threaded version below blocks the whole server on one I/O call:
//int len = accept.getInputStream().read(bytes);
//accept.getOutputStream().write(bytes, 0, len);
//accept.getOutputStream().flush();
/*
 * To keep the server able to accept connections, the actual I/O is handed to another thread.
 * Here a new thread is started per connection; this can later be changed to a thread pool
 * so the server cannot be exhausted (see the sketch after this section).
 */
new Thread(() -> {
handle(accept);
}).start();
}
} catch (IOException e) {
e.printStackTrace();
}
}
private static void handle(Socket socket) {
byte[] bytes = new byte[1024];
try {
int len = socket.getInputStream().read(bytes);
socket.getOutputStream().write(bytes, 0, len);
socket.getOutputStream().flush();
} catch (IOException e) {
e.printStackTrace();
try {
socket.close();
} catch (IOException ex) {
ex.printStackTrace();
}
}
}
}
BIO can be understood as a thread-driven model. BIO itself is blocking, but it can be improved with threads and thread pools, and its reads and writes operate on byte[] buffers.
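As the comment in the code above hints, the per-connection new Thread can be swapped for a bounded thread pool so that a burst of connections cannot exhaust the server. A minimal sketch of that variant, assuming the same echo behavior (the class name BIOThreadPoolServer and the pool size of 100 are illustrative choices):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BIOThreadPoolServer {
    public static void main(String[] args) throws IOException {
        // Bounded pool: at most 100 connections are handled concurrently (illustrative size)
        ExecutorService pool = Executors.newFixedThreadPool(100);
        ServerSocket ss = new ServerSocket();
        ss.bind(new InetSocketAddress("127.0.0.1", 8000));
        while (true) {
            Socket accept = ss.accept();        // accept() still blocks
            pool.execute(() -> handle(accept)); // but I/O no longer ties up the acceptor thread
        }
    }

    // Same echo logic as handle() above, with the socket closed when done
    private static void handle(Socket socket) {
        try (Socket s = socket) {
            byte[] bytes = new byte[1024];
            int len = s.getInputStream().read(bytes);
            if (len > 0) {
                s.getOutputStream().write(bytes, 0, len);
                s.getOutputStream().flush();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}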
2. NIO (Non-Blocking IO)
NIO is non-blocking I/O. For both accepting connections and doing reads/writes it relies on polling: the outermost thread attaches a Selector and keeps iterating over the selected keys to establish connections and perform the I/O.
Here is the code:
public class NIOServer {
public static void main(String[] args) {
try {
ServerSocketChannel ssc = ServerSocketChannel.open();
ssc.socket().bind(new InetSocketAddress("127.0.0.1", 8000));
// Put the channel into non-blocking mode
ssc.configureBlocking(false);
System.out.println("Socket server in single-threaded mode started, listening on: " + ssc.getLocalAddress());
// Open the selector
Selector selector = Selector.open();
// Register OP_ACCEPT with the selector so that incoming connection requests get selected
ssc.register(selector, SelectionKey.OP_ACCEPT);
// The event loop
while (true) {
selector.select(); // select() blocks here, so the loop does not spin; it only proceeds when some registered operation is ready
// Get all the selected keys
Set<SelectionKey> keys = selector.selectedKeys();
Iterator<SelectionKey> iterator = keys.iterator();
while (iterator.hasNext()) {
SelectionKey key = iterator.next();
// Remove the key from the selected set so it is not processed twice
iterator.remove();
// Dispatch the key to the handler
handle(key);
}
}
} catch (IOException e) {
e.printStackTrace();
}
}
private static void handle(SelectionKey key) {
if (key.isAcceptable()) {
// The key is an accept event: accept the connection and register the new channel for OP_READ
ServerSocketChannel ssc = (ServerSocketChannel) key.channel();
try {
SocketChannel sc = ssc.accept();
sc.configureBlocking(false);
sc.register(key.selector(), SelectionKey.OP_READ);
} catch (IOException e) {
e.printStackTrace();
}
} else if (key.isReadable()) {
// The key is a read event: read the request data and write a response back
SocketChannel sc = (SocketChannel) key.channel();
ByteBuffer buffer = ByteBuffer.allocate(1024);
buffer.clear();
try {
int len = sc.read(buffer);
String content = len != -1 ? new String(buffer.array(), 0, len) : "";
ByteBuffer writeBuffer = ByteBuffer.wrap(("Response from server:" + content).getBytes());
sc.write(writeBuffer);
} catch (IOException e) {
e.printStackTrace();
} finally {
if (sc != null) {
try {
sc.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
}
}
The above is NIO in single-threaded mode: a single selector has to manage connection setup as well as all the read/write work, which puts a lot of pressure on it. So I changed it to handle reads and writes on a thread pool, with the main thread only responsible for accepting connections:
private void handle(SelectionKey key) throws IOException {
if (key.isAcceptable()) {
ServerSocketChannel ssc = (ServerSocketChannel) key.channel();
SocketChannel sc = ssc.accept();
sc.configureBlocking(false);
sc.register(selector, SelectionKey.OP_READ);
} else if (key.isReadable()) {
// Hand the read/write work to Threadhandle's run(), executed on the thread pool (see the sketch below)
threadPool.execute(new Threadhandle(key));
}
}
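The Threadhandle class used above is not shown in the original, and threadPool and selector are assumed to be fields of the refactored server. A minimal sketch of what Threadhandle might look like, reusing the same read-then-respond-then-close logic as the single-threaded handler:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

// Worker used by the multi-threaded variant: same echo logic as before, run on a pool thread.
// A production server would also clear OP_READ interest before handing the key off,
// so the selector does not re-select it while the worker is still running.
class Threadhandle implements Runnable {
    private final SelectionKey key;

    Threadhandle(SelectionKey key) {
        this.key = key;
    }

    @Override
    public void run() {
        SocketChannel sc = (SocketChannel) key.channel();
        ByteBuffer buffer = ByteBuffer.allocate(1024);
        try {
            int len = sc.read(buffer);
            String content = len != -1 ? new String(buffer.array(), 0, len) : "";
            sc.write(ByteBuffer.wrap(("Response from server:" + content).getBytes()));
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                sc.close(); // closing the channel also cancels its key
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}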
Unlike BIO, NIO is non-blocking, and its reads and writes go through ByteBuffer rather than plain byte arrays.
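Since every read and write now goes through ByteBuffer, it helps to keep its write-flip-read cycle in mind; a small standalone sketch (the class name is illustrative):

import java.nio.ByteBuffer;

public class ByteBufferFlipDemo {
    public static void main(String[] args) {
        ByteBuffer buffer = ByteBuffer.allocate(16);

        buffer.put("hello".getBytes()); // write mode: position advances past the data
        buffer.flip();                  // limit = position, position = 0 -> ready to read

        byte[] out = new byte[buffer.remaining()];
        buffer.get(out);                // read mode: consume exactly what was written
        System.out.println(new String(out)); // prints "hello"

        buffer.clear();                 // back to write mode for the next round
    }
}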
3. AIO (Asynchronous IO)
AIO is asynchronous and non-blocking. It achieves asynchrony through event completion callbacks, which you can think of as callback functions, hooks, or trigger handles.
public class AIOServer {
public static void main(String[] args) throws IOException {
AsynchronousServerSocketChannel assc = AsynchronousServerSocketChannel.open().bind(
new InetSocketAddress("127.0.0.1", 8000));
// Unlike NIO's while(true) loop, AIO does not keep iterating. It attaches the accept operation
// to a handler object and overrides its completed() and failed() methods.
assc.accept(null, new CompletionHandler<AsynchronousSocketChannel, Object>() {
@Override
public void completed(AsynchronousSocketChannel client, Object attachment) {
// Callback invoked once a connection has been established; accept the next connection right away
assc.accept(null, this);
ByteBuffer buffer = ByteBuffer.allocate(1024);
// Bind a callback to the read operation
client.read(buffer, buffer, new CompletionHandler<Integer, ByteBuffer>() {
@Override
public void completed(Integer result, ByteBuffer attachment) {
attachment.flip();
String response = new String(attachment.array(), 0, result);
client.write(ByteBuffer.wrap(response.getBytes()));
}
@Override
public void failed(Throwable exc, ByteBuffer attachment) {
exc.printStackTrace();
}
});
}
@Override
public void failed(Throwable exc, Object attachment) {
exc.printStackTrace();
}
});
// Block the main thread so the JVM does not exit before any connections arrive
System.in.read();
}
}
Of course AIO also has a multi-threaded model, and it is very convenient to use: just pass in a thread pool when initializing:
ExecutorService executorService = Executors.newCachedThreadPool();
AsynchronousChannelGroup threadGroup = AsynchronousChannelGroup.withCachedThreadPool(executorService, 1);
AsynchronousServerSocketChannel assc = AsynchronousServerSocketChannel.open(threadGroup).bind(
new InetSocketAddress("127.0.0.1", 8000));
The way I understand AIO, it is a good illustration of the Proactor pattern (in contrast to the Reactor-style selector loop of NIO): the whole flow is asynchronous. I tell the OS which function to run for a request, and when a request arrives the OS invokes that function for me. AIO's reads and writes still go through ByteBuffer.
4. Netty
Now for the most popular option: how Netty does it.
Netty is used for machine-to-machine communication in today's mainstream servers (Tomcat, Apache), and RPC frameworks use it as well. Netty is actually quite similar to AIO, but friendlier to use; what it wraps, though, is NIO rather than AIO. The reason is that on Linux both NIO and AIO sit on top of the epoll model (which I plan to cover in a later post), so the underlying I/O model is polling either way, and AIO is itself essentially a wrapper over NIO. With that in mind, Netty builds on top of NIO.
public class NettyServer {
private int port = 8000;
public NettyServer(int port) {
this.port = port;
}
public void serverStart() {
// Netty wraps NIO and turns the Selector work into event-loop groups; this one can be thought of
// as the thread pool that handles incoming connections (the boss group)
NioEventLoopGroup master = new NioEventLoopGroup();
// The thread pool that handles the I/O reads and writes (the worker group)
NioEventLoopGroup worker = new NioEventLoopGroup();
// Bootstrap configuration
ServerBootstrap bootstrap = new ServerBootstrap();
// Fill in the bootstrap parameters: the master and worker groups, the channel type, and the channel initializer
bootstrap.group(master, worker).channel(NioServerSocketChannel.class).childHandler(new ChannelInitializer<SocketChannel>() {
@Override
// Override the channel initialization method
protected void initChannel(SocketChannel ch) throws Exception {
// Add the request handler to the channel's pipeline
ch.pipeline().addLast(new Handler());
}
});
try {
// Note: in Netty, both connection setup and I/O reads/writes are asynchronous. If that is unclear, have a look at the ChannelFuture class; I won't expand on it here
ChannelFuture future = bootstrap.bind(port).sync();
future.channel().closeFuture().sync();
} catch (InterruptedException e) {
e.printStackTrace();
} finally {
master.shutdownGracefully();
worker.shutdownGracefully();
}
}
// This class handles incoming requests
class Handler extends ChannelInboundHandlerAdapter {
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
// Netty's abstraction is nice here: the callback hands you a context and the message object,
// and you can work with these two directly to handle the read and the response
System.out.println("read action");
ByteBuf buf = (ByteBuf) msg;
System.out.println(buf.toString(CharsetUtil.UTF_8)); // decode the bytes rather than printing the buffer object
ReferenceCountUtil.release(msg); // the inbound buffer is reference-counted and must be released
// There is no String encoder in the pipeline, so write a ByteBuf rather than a raw String
ctx.writeAndFlush(Unpooled.copiedBuffer("success received".getBytes()));
ctx.close();
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
// super.exceptionCaught(ctx, cause);
cause.printStackTrace();
ctx.close();
}
}
public static void main(String[] args) {
NettyServer server = new NettyServer(8000);
server.serverStart();
}
}
Netty is asynchronous and non-blocking, with event-driven (reactive) processing: you attach a concrete handler to each operation, and it runs automatically when a request comes in. I think a big reason Netty is so popular is that the code reads very clearly, with each step having an obvious purpose. Another is that you no longer work with ByteBuffer, so you are no longer at the mercy of flip() (see the short ByteBuf sketch after the client code). Below is the code for a Netty client:
public class NettyClient {
public static void main(String[] args) {
new NettyClient().clientStart();
}
public void clientStart() {
NioEventLoopGroup worker = new NioEventLoopGroup();
Bootstrap bootstrap = new Bootstrap();
bootstrap.group(worker)
.channel(NioSocketChannel.class)
.handler(new ChannelInitializer<SocketChannel>() {
@Override
protected void initChannel(SocketChannel ch) throws Exception {
System.out.println("channel established");
ch.pipeline().addLast(new ClientHandler());
}
});
try {
// Actually connect to the server; without this the bootstrap is configured but never used
ChannelFuture future = bootstrap.connect("127.0.0.1", 8000).sync();
future.channel().closeFuture().sync();
} catch (InterruptedException e) {
e.printStackTrace();
} finally {
worker.shutdownGracefully();
}
}
class ClientHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
ChannelFuture future = ctx.writeAndFlush(Unpooled.copiedBuffer("message".getBytes()));
future.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
System.out.println("message sent successfully");
}
});
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
try {
ByteBuf message = (ByteBuf) msg;
System.out.println(message.toString(CharsetUtil.UTF_8)); // decode the response bytes
} finally {
ReferenceCountUtil.release(msg);
}
}
}
}
The configuration is much like the server's, except the client only needs a worker group. The interesting part is the ChannelInboundHandlerAdapter added to the pipeline: its channelActive callback (run once the channel is up) calls future.addListener to register a callback for when the write completes. Nicely designed!
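As mentioned above, Netty's ByteBuf frees you from flip() because it keeps separate reader and writer indices. A small standalone sketch of the difference (the class name ByteBufDemo is illustrative and not part of the server or client above):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class ByteBufDemo {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.buffer(16);

        buf.writeBytes("hello".getBytes()); // writerIndex advances
        // No flip() needed: reads use readerIndex, writes use writerIndex
        byte[] out = new byte[buf.readableBytes()];
        buf.readBytes(out);                  // readerIndex advances independently
        System.out.println(new String(out)); // prints "hello"

        buf.release(); // ByteBuf is reference-counted, so release it when done
    }
}

Compare this with the ByteBuffer sketch in the NIO section, where the same buffer had to be flipped between writing and reading.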
That's it for this post. I meant to draw a few diagrams of these models; that will have to wait for another time.