Andrew May for Leading EDJE

Originally published at leadingedje.com

Java NIO and Netty

This post was originally published on the Leading EDJE website in October 2014.

The java.nio package was added to the Java Development Kit in version 1.4 in 2002. I remember reading about it at the time and finding it both interesting and a little intimidating, but I went on to largely ignore the package for the next 12 years. Tomcat 6 was released at the end of 2006 and contained an NIO connector, but with little or no advice about when you might want to use it in preference to the default HTTP connector, I shied away from using it.

So what is NIO anyway? It appears that it officially stands for "New Input/Output," but the functionality added in Java 1.4 was primarily focused on Non-blocking Input/Output and that's what we're interested in.

In Java 1.7, NIO.2 was added, containing the java.nio.file package that aims to replace parts of java.io. There the "New" moniker makes more sense, but NIO.2 has little to do with what was added in NIO. So it's another Java naming triumph.

The traditional I/O APIs (e.g., InputStream/OutputStream) block the current thread when reading or writing, and if the other end is slow or blocked, many threads can end up unable to proceed - this is how your web application grinds to a halt when you have a database deadlock and all 100 connections in your connection pool are allocated. Each thread can only support a single stream of communication and can't do anything else while waiting.
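
To make the blocking behaviour concrete, here is a minimal sketch of a traditional socket read (assuming a plain java.net.Socket that is already connected; the host and buffer size are arbitrary):

Socket socket = new Socket("example.com", 80);
InputStream in = socket.getInputStream();
byte[] buffer = new byte[1024];
// The calling thread parks on this read until data arrives or the stream
// is closed, and can do no other work in the meantime.
int bytesRead = in.read(buffer);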

For a servlet container like Tomcat, this traditional blocking I/O model requires a separate thread for each concurrent client connection, and if you have a large number of users, or the connections use HTTP keep-alive, this can consume a large number of threads on the server. Threads consume memory (each thread has a stack), may be limited by the OS (e.g., ulimit on Linux), and there is generally some overhead in context switching between threads, especially if running on a small number of CPU cores.
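
The classic thread-per-connection accept loop looks something like this (a sketch; handleRequest is a hypothetical method that reads the request and writes the response using blocking streams):

ServerSocket server = new ServerSocket(8080);
while (true) {
    final Socket client = server.accept();   // blocks until a client connects
    new Thread(new Runnable() {              // one thread per connection
        public void run() {
            handleRequest(client);           // hypothetical blocking handler
        }
    }).start();
}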

I still find the Non-blocking I/O support in the JDK to be somewhat intimidating, which is why it's fortunate that we have frameworks like Netty where someone else has already done the hard work for us. I recently used Netty to build a server that communicates with thousands of concurrently connected clients using a custom binary protocol. Out of the box, Netty also has support for common protocols such as HTTP and Google Protobuf, but it makes it easy to build custom protocols as well.
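
To give a rough idea of what a custom protocol handler looks like, here is a sketch of a decoder for a hypothetical length-prefixed binary framing (assuming Netty 4.x; this is not the actual protocol from the project mentioned above):

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import java.util.List;

public class LengthPrefixedFrameDecoder extends ByteToMessageDecoder {
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        if (in.readableBytes() < 4) {
            return;                        // wait for the 4-byte length prefix
        }
        in.markReaderIndex();
        int length = in.readInt();
        if (in.readableBytes() < length) {
            in.resetReaderIndex();         // whole frame not here yet; try again later
            return;
        }
        out.add(in.readBytes(length));     // pass one complete frame along the pipeline
    }
}

Netty also ships a LengthFieldBasedFrameDecoder that handles this common framing pattern without any custom code.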


At Netty's core is the concept of a Channel and its associated ChannelPipeline. The pipeline is built up of a number of ChannelHandlers that may handle inbound and/or outbound messages. The handlers have great flexibility in what they do with the messages, and how you arrange your pipeline is also up to you. You may also dynamically rearrange the pipeline based upon the messages you receive. It's similar in some ways to Servlet Filters but a lot more dynamic and flexible.
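
As an illustration, a server pipeline is typically assembled in a ChannelInitializer each time a new channel is created (a sketch assuming Netty 4.x; LengthPrefixedFrameDecoder is the hypothetical decoder above and BusinessLogicHandler is a hypothetical inbound handler):

import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;

public class MyServerInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ChannelPipeline pipeline = ch.pipeline();
        // Inbound messages flow through the handlers in this order;
        // outbound messages flow back through them in reverse.
        pipeline.addLast("frameDecoder", new LengthPrefixedFrameDecoder());
        pipeline.addLast("logic", new BusinessLogicHandler());
    }
}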

Netty manages a pool of threads in an EventLoopGroup that has a default size of twice the number of available CPU cores. When a connection is made and a channel created, it is associated with one of these threads, and each time a message is received or sent on that channel the same thread is used. To use Netty efficiently you should not perform any blocking I/O (e.g., JDBC) within one of these threads. You can create separate EventLoopGroups for I/O-bound processing or use standard Java utilities for running tasks in separate threads.
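
For example, a handler that must block can be given its own executor group when it is added to the pipeline, so its callbacks never run on the channel's I/O thread (a sketch assuming Netty 4.x; DatabaseHandler is a hypothetical handler that performs JDBC work, and the pool size of 16 is arbitrary):

import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;

// A separate pool of threads for handlers that block, sized for the expected
// number of concurrent blocking operations rather than the number of CPU cores.
EventExecutorGroup blockingGroup = new DefaultEventExecutorGroup(16);

// Inside the ChannelInitializer: events for this handler are dispatched on
// blockingGroup instead of the channel's event loop thread.
pipeline.addLast(blockingGroup, "database", new DatabaseHandler());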

The API assumes asynchronicity; for example, writing a message returns a ChannelFuture. This is similar to a java.util.concurrent.Future, but with extra functionality, including the ability to add a listener that will be called when the future completes.



channel.writeAndFlush(message).addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        // Invoked on the channel's event loop once the write has completed
        if (future.isSuccess()) {
            logger.debug("Success");
        } else {
            logger.error("Failure", future.cause());
        }
    }
});



Netty is under active development and in use at a number of large companies, most notably Twitter. There's a Netty in Action book that is a good supplement to the useful but fairly brief documentation. I've found Netty a pleasure to use and would recommend it for projects that require large numbers of concurrent connections.


Top comments (1)

ssanduri

Hi Andrew, thanks for the article. I have a couple of follow-up questions on the Netty web server thread pool.
You mentioned that a separate worker thread pool is created to perform blocking operations. What will be the thread pool size of that worker group? Shouldn't it be large enough (say, 200) to handle concurrent client requests, since each worker thread is blocked while serving a client request? In that case, won't this large number of worker threads contend with the main acceptor event loop threads (which are 2*CPUs), making the main event loop slow?