Chapter2
- SimpleChannelInboundHandler vs ChannelInboundHandler: In the client, when channelRead0() completes, you have the incoming message and you're done with it. When the method returns, SimpleChannelInboundHandler takes care of releasing the memory reference to the ByteBuf that holds the message. But ChannelInboundHandler doesn't release the message at that point.
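A minimal sketch of that difference (the handler classes and the string payload are illustrative, not the book's listings):

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.util.CharsetUtil;
import io.netty.util.ReferenceCountUtil;

// Client side: SimpleChannelInboundHandler releases the message after channelRead0() returns.
class ClientHandler extends SimpleChannelInboundHandler<ByteBuf> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) {
        System.out.println("Client received: " + msg.toString(CharsetUtil.UTF_8));
        // No release needed here; SimpleChannelInboundHandler releases msg when this method returns.
    }
}

// With a plain ChannelInboundHandler(Adapter), the message is NOT released automatically.
class ServerHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        try {
            // ... use the message ...
        } finally {
            // We are responsible for releasing it, unless we pass it further down the pipeline.
            ReferenceCountUtil.release(msg);
        }
    }
}
```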
Chapter3
- Channel: Sockets
- EventLoop: Control flow, multithreading, concurrency
- ChannelFuture: Asynchronous notification
- Channel implementations:
- EmbeddedChannel
- LocalServerChannel
- NioDatagramChannel
- NioSctpChannel
- NioSocketChannel
- EventLoop defines Netty's core abstraction for handling events that occur during the lifetime of a connection.
- The relationships between Channel, EventLoop, Thread, and EventLoopGroup are:
- An EventLoopGroup contains one or more EventLoops
- An EventLoop is bound to a single Thread for its lifetime
- All I/O events processed by an EventLoop are handled on its dedicated Thread
- A Channel is registered for its lifetime with a single EventLoop
- A single EventLoop may be assigned to one or more Channels
- Because all I/O operations in Netty are asynchronous, you need a way to determine the result at a later time. Netty provides ChannelFuture, whose addListener() method registers a ChannelFutureListener to be notified when an operation has completed.
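A minimal sketch of registering a ChannelFutureListener on a write; the Channel is assumed to be active and obtained elsewhere:

```java
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.util.CharsetUtil;

class WriteListenerExample {
    // 'channel' is assumed to be an active Channel obtained elsewhere.
    static void writeWithListener(Channel channel) {
        ChannelFuture future = channel.writeAndFlush(
                Unpooled.copiedBuffer("Hello", CharsetUtil.UTF_8));
        future.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture f) {
                if (f.isSuccess()) {
                    System.out.println("Write completed successfully");
                } else {
                    f.cause().printStackTrace(); // the operation failed
                }
            }
        });
    }
}
```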
- ChannelHandler serves as the container for all application logic that applies to handling inbound and outbound data. This is possible because ChannelHandler methods are triggered by network events.
- ChannelPipeline provides a container for a chain of ChannelHandlers and defines an API for propagating the flow of inbound and outbound events along the chain. When a Channel is created, it is automatically assigned its own ChannelPipeline.
- ChannelHandlers are installed in the ChannelPipeline as follows (a sketch follows the list):
- A ChannelInitializer implementation is registered with a ServerBootstrap
- When ChannelInitializer.initChannel() is called, the ChannelInitializer installs a custom set of ChannelHandlers in the pipeline
- The ChannelInitializer removes itself from the ChannelPipeline
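A sketch of those steps. Netty's own StringDecoder/StringEncoder stand in for the custom handlers; the initializer would typically be registered via ServerBootstrap.childHandler():

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;

// Installs a custom set of handlers when the Channel is registered;
// the ChannelInitializer then removes itself from the pipeline.
class MyChannelInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ChannelPipeline pipeline = ch.pipeline();
        pipeline.addLast(new StringDecoder());  // inbound: bytes -> String
        pipeline.addLast(new StringEncoder());  // outbound: String -> bytes
    }
}
```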
- If a message or any other inbound event is read, it starts from the head of the pipeline and is passed to the first ChannelInboundHandler. Outbound data, by contrast, flows from the tail through the chain of ChannelOutboundHandlers until it reaches the head.
- There are two ways of sending messages in Netty. You can write directly to the Channel or write to a ChannelHandlerContext object associated with a ChannelHandler. The former approach causes the message to start from the tail of the ChannelPipeline; the latter causes the message to start from the next handler in the ChannelPipeline, as sketched below.
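A sketch contrasting the two write paths from inside a handler (the payloads are illustrative):

```java
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.CharsetUtil;

class WritePathsHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        // 1) Writing to the Channel: the message starts at the tail of the ChannelPipeline
        //    and passes through every ChannelOutboundHandler.
        ctx.channel().writeAndFlush(
                Unpooled.copiedBuffer("via Channel", CharsetUtil.UTF_8));

        // 2) Writing to the ChannelHandlerContext: the message enters the pipeline at the
        //    next outbound handler relative to this one, rather than at the tail.
        ctx.writeAndFlush(
                Unpooled.copiedBuffer("via ChannelHandlerContext", CharsetUtil.UTF_8));
    }
}
```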
- Adapters you'll call most often when creating your custom handlers:
- ChannelHandlerAdapter
- ChannelInboundHandlerAdapter
- ChannelOutboundHandlerAdapter
- ChannelDuplexHandler
- Why does bootstrapping a client require only a single EventLoopGroup, while a ServerBootstrap requires two (which can be the same instance)? A server needs two distinct sets of Channels. The first set contains a single ServerChannel representing the server's own listening socket, bound to a local port. The second set contains all the Channels that have been created to handle incoming client connections, one for each connection the server has accepted.
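A minimal sketch of a server bootstrap with two EventLoopGroups; the port, group sizes, and the LoggingHandler in the child pipeline are illustrative assumptions:

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.logging.LogLevel;
import io.netty.handler.logging.LoggingHandler;

public class TwoGroupServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);   // accepts connections on the ServerChannel
        EventLoopGroup workerGroup = new NioEventLoopGroup();  // handles I/O for the accepted Channels
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() { // applied to each accepted Channel
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     ch.pipeline().addLast(new LoggingHandler(LogLevel.INFO));
                 }
             });
            ChannelFuture f = b.bind(8080).sync();   // bind the listening socket
            f.channel().closeFuture().sync();        // block until the server Channel closes
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
```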
Chapter4
- The implementation of compareTo() in AbstractChannel throws an Error if two distinct Channel instances return the same hash code.
- Typical uses for ChannelHandlers include:
- Transforming data from one format to another
- Providing notification of exceptions
- Providing notification of a Channel becoming active or inactive
- Providing notification when a Channel is registered with or deregistered from an EventLoop
- Providing notification about user-defined events
- Netty's Channel implementations are thread-safe, so you can store a reference to a Channel and use it whenever you need to write something to the remote peer, even when many threads are in use.
- Netty-provided transports:
- NIO: io.netty.channel.socket.nio - Uses the java.nio.channels package as a foundation; a selector-based approach
- Epoll: io.netty.channel.epoll - Uses JNI for epoll() and non-blocking I/O. This transport supports features available only on Linux, such as SO_REUSEPORT, and is faster than the NIO transport as well as fully non-blocking
- OIO: io.netty.channel.socket.oio - Uses the java.net package as a foundation; uses blocking streams
- Local: io.netty.channel.local - A local transport that can be used to communicate within the VM via pipes
- Embedded: io.netty.channel.embedded - An embedded transport, which allows using ChannelHandlers without a true network-based transport. This can be quite useful for testing your ChannelHandler implementations (see the sketch below)
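A sketch of the embedded transport in use; a Netty-provided StringDecoder stands in for a handler under test:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.util.CharsetUtil;

// Exercises a handler's pipeline behavior without any real network I/O.
public class EmbeddedTransportSketch {
    public static void main(String[] args) {
        EmbeddedChannel channel = new EmbeddedChannel(new StringDecoder(CharsetUtil.UTF_8));

        ByteBuf input = Unpooled.copiedBuffer("hello", CharsetUtil.UTF_8);
        channel.writeInbound(input);                   // push bytes through the pipeline as inbound data

        String decoded = (String) channel.readInbound(); // read what reached the end of the pipeline
        System.out.println(decoded);                     // prints "hello"

        channel.finish();                                // mark the channel finished
    }
}
```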
Chapter5
- Netty’s API for data handling is exposed through two components - abstract class ByteBuf and interface ByteBufHolder.
These are some of the advantages of the ByteBuf API:
- It's extensible for user-defined buffer types
- Transparent zero-copy is achieved by a built-in composite buffer type
- Capacity is expanded on demand
- Switching between reader and writer modes doesn’t require calling ByteBuffer’s flip() method
- Reading and writing employ distinct indices
- Method chaining is supported
- Reference counting is supported
- Pooling is supported
- How does ByteBuf work? ByteBuf maintains two distinct indices: one for reading and one for writing. When you read from a ByteBuf, its readerIndex is incremented by the number of bytes read. Similarly, when you write to a ByteBuf, its writerIndex is incremented. ByteBuf methods whose names begin with read or write advance the corresponding index, whereas operations that begin with set or get do not; the latter operate on an index that's passed as an argument to the method.
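A small sketch of this index behavior:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class ByteBufIndexSketch {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.buffer(16);         // readerIndex = 0, writerIndex = 0

        buf.writeInt(42);                          // write*: writerIndex advances to 4
        System.out.println(buf.writerIndex());     // 4

        int value = buf.readInt();                 // read*: readerIndex advances to 4
        System.out.println(value + " " + buf.readerIndex()); // 42 4

        buf.setByte(0, (byte) 7);                  // set*: writes at index 0, indices unchanged
        System.out.println(buf.getByte(0));        // get*: reads at index 0, indices unchanged
        System.out.println(buf.readerIndex() + " " + buf.writerIndex()); // still 4 4

        buf.release();                             // drop our reference
    }
}
```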
- ByteBuf usage patterns:
- Heap buffers: Store the data in the JVM heap, backed by an array
- Direct buffers: The data lives outside the JVM heap, so it doesn't have to be copied to an intermediate direct buffer before each socket I/O operation, which improves performance. The drawback is that direct buffers are more expensive to allocate and release than heap buffers
- Composite buffers: Netty implements this pattern with a subclass of ByteBuf, CompositeByteBuf, which provides a virtual representation of multiple buffers as a single, merged buffer. A CompositeByteBuf may not allow access to a backing array, so accessing the data in a CompositeByteBuf resembles the direct buffer pattern (see the sketch below)
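A sketch of the composite buffer pattern (assumes Netty 4.1, where addComponents can advance the writerIndex for you):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

// A header buffer and a body buffer exposed as one logical ByteBuf without copying either.
public class CompositeBufferSketch {
    public static void main(String[] args) {
        ByteBuf header = Unpooled.copiedBuffer("HEADER", CharsetUtil.UTF_8);
        ByteBuf body = Unpooled.copiedBuffer("BODY", CharsetUtil.UTF_8);

        CompositeByteBuf message = Unpooled.compositeBuffer();
        // true = advance the writerIndex so the components are readable
        message.addComponents(true, header, body);

        System.out.println(message.toString(CharsetUtil.UTF_8)); // HEADERBODY
        message.release(); // releases the underlying components as well
    }
}
```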
- The JDK's InputStream defines the methods mark(int readlimit) and reset(). These are used to mark the current position in the stream and to reset the stream to that position, respectively. Similarly, you can set and reposition the ByteBuf readerIndex and writerIndex by calling markReaderIndex(), markWriterIndex(), resetReaderIndex(), and resetWriterIndex(). These are similar to the InputStream calls, except that there's no readLimit to specify when the mark becomes invalid.
- A derived buffer provides a view of a ByteBuf that represents its contents in a specialized way. Such views are created by the following methods:
- duplicate()
- slice()
- slice(int, int)
- Unpooled.unmodifiableBuffer(…)
- order(ByteOrder)
- readSlice(int)
Each returns a new ByteBuf instance with its own reader, writer, and marker indices. The internal storage is shared, so be careful: if you modify the content of a derived buffer, you are modifying the source instance as well (see the sketch below).
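A sketch showing the shared storage of a derived buffer:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

public class SliceSketch {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.copiedBuffer("Netty in Action rocks!", CharsetUtil.UTF_8);
        ByteBuf sliced = buf.slice(0, 15);             // a view of the first 15 bytes

        buf.setByte(0, (byte) 'J');                    // modify the source...
        System.out.println((char) sliced.getByte(0));  // ...and the slice sees 'J' too
    }
}
```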
- ByteBufHolder is a good choice if you want to implement a message object that stores its payload in a ByteBuf.
- You can obtain a reference to a ByteBufAllocator either from a Channel or through the ChannelHandlerContext that is bound to a ChannelHandler. The sketch below illustrates both of these methods.
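A sketch of both ways to obtain a ByteBufAllocator; the Channel and ChannelHandlerContext are assumed to come from a live pipeline:

```java
import io.netty.buffer.ByteBufAllocator;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;

class AllocatorAccess {
    // From the Channel itself.
    static ByteBufAllocator fromChannel(Channel channel) {
        return channel.alloc();
    }

    // From the ChannelHandlerContext bound to a ChannelHandler.
    static ByteBufAllocator fromContext(ChannelHandlerContext ctx) {
        return ctx.alloc();
    }
}
```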
- Netty provides two implementations of ByteBufAllocator: PooledByteBufAllocator and UnpooledByteBufAllocator. The former pools ByteBuf instances to improve performance and minimize memory fragmentation; it uses an efficient approach to memory allocation known as jemalloc that has been adopted by a number of modern OSes. The latter doesn't pool ByteBuf instances and returns a new instance every time it is called.
Chapter6
- Channel lifecycle states:
- ChannelUnregistered: The Channel was created, but isn't registered to an EventLoop
- ChannelRegistered: The Channel is registered to an EventLoop
- ChannelActive: The Channel is active (connected to its remote peer). It's now possible to receive and send data
- ChannelInactive: The Channel is not connected to the remote peer
- ChannelHandler lifecycle methods:
- handlerAdded: Called when a ChannelHandler is added to a ChannelPipeline
- handlerRemoved: Called when a ChannelHandler is removed from a ChannelPipeline
- exceptionCaught: Called if an error occurs in the ChannelPipeline during processing
- ChannelHandler's subinterfaces:
- ChannelInboundHandler
- ChannelOutboundHandler
- ChannelInboundHandler methods:
- channelRegistered
- channelUnregistered
- channelActive
- channelInactive
- channelReadComplete
- channelRead
- channelWritabilityChanged: Invoked when the writability state of the Channel changes. The user can ensure writes are not done too quickly or can resume writes when the Channel becomes writable again. The Channel method isWritable() can be called to detect the writability of the channel. The threshold for writability can be set via Channel.config().setWriteBufferHighWaterMark() and Channel.config().setWriteBufferLowWaterMark()
- userEventTriggered: Invoked when ChannelHandlerContext.fireUserEventTriggered() is called because a POJO was passed through the ChannelPipeline
- ChannelOutboundHandler methods:
- bind(ChannelHandlerContext, SocketAddress, ChannelPromise)
- connect(ChannelHandlerContext, SocketAddress, SocketAddress, ChannelPromise)
- disconnect(ChannelHandlerContext, ChannelPromise)
- close(ChannelHandlerContext, ChannelPromise)
- deregister(ChannelHandlerContext, ChannelPromise)
- read(ChannelHandlerContext)
- write(ChannelHandlerContext, Object, ChannelPromise)
- flush(ChannelHandlerContext)
- To assist you in diagnosing potential problems, Netty provides ResourceLeakDetector, which samples about 1% of your application's buffer allocations to check for memory leaks. The overhead involved is very small.
- Leak-detection levels:
- DISABLED: Disables leak detection. Use this only after extensive testing
- SIMPLE: Reports any leaks found using the default sampling rate of 1%. This is the default level and is a good fit for most cases.
- ADVANCED: Reports leaks found and where the message was accessed. Uses the default sampling rate.
- PARANOID: Like ADVANCED, except that every access is sampled. This has a heavy impact on performance and should be used only in the debugging phase. The leak-detection level is set via a system property, for example: java -Dio.netty.leakDetectionLevel=ADVANCED
- Every new Channel that's created is assigned a new ChannelPipeline. This association is permanent; the Channel can neither attach another ChannelPipeline nor detach the current one. This is a fixed operation in Netty's component lifecycle and requires no action on the part of the developer.
- The ChannelHandlerContext associated with a ChannelHandler never changes, so it's safe to cache a reference to it. ChannelHandlerContext methods involve a shorter event flow than the identically named methods available on other classes. This should be exploited where possible for maximum performance.
- Use @Sharable only if you're certain that your ChannelHandler is thread-safe.
- Because an exception continues to flow in the inbound direction, the ChannelInboundHandler that implements the exception-handling logic (its exceptionCaught() method) is usually placed last in the ChannelPipeline. This ensures that all inbound exceptions are handled, wherever in the ChannelPipeline they may occur.
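A sketch of such a catch-all handler placed last in the pipeline; logging to stderr and closing the Channel are illustrative choices:

```java
import io.netty.channel.ChannelHandler.Sharable;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Stateless, so it can safely be marked @Sharable and added to many pipelines.
@Sharable
public class TerminalExceptionHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace(); // a real application would use a logger
        ctx.close();             // close the Channel; no handler after this one can react
    }
}
```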
Chapter7
- I/O operations in Netty3:
The threading model used in previous releases guaranteed only that inbound events would be executed in the so-called I/O thread. All outbound events were handled by the calling thread, which might be the I/O thread or any other. This seemed a good idea at first but was found to be problematic because of the need for careful synchronization of outbound events in ChannelHandlers. In short, it wasn't possible to guarantee that multiple threads wouldn't try to access an outbound event at the same time. This could happen, for example, if you fired simultaneous downstream events for the same Channel by calling Channel.write() in different threads.
The threading model adopted in Netty4 resolves these problems by handling everything that occurs in a given EventLoop in the same thread. This provides a simpler execution architecture and eliminates the need for synchronization in the ChannelHandlers.
- The EventLoops that service I/O and events for Channels are contained in an EventLoopGroup. The manner in which EventLoops are created and assigned varies according to the transport implementation.
- Asynchronous transports: Asynchronous implementations use only a few EventLoops (and their associated Threads), and in the current model these may be shared among Channels. This allows many Channels to be served by the smallest possible number of Threads, rather than assigning a Thread per Channel. Be aware of the implications of EventLoop allocation for ThreadLocal use. Because an EventLoop usually powers more than one Channel, ThreadLocal will be the same for all associated Channels. This makes it a poor choice for implementing a function such as state tracking. However, in a stateless context it can still be useful for sharing heavy or expensive objects, or even events, among Channels.
- Blocking transports: One EventLoop (and its Thread) is assigned to each Channel. You may have encountered this pattern if you've developed applications that use the blocking I/O implementation in the java.io package.
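A sketch of this dispatch rule; note that Netty's own write methods already do this internally, so the explicit check here is purely illustrative:

```java
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.util.CharsetUtil;

// If the calling thread is the Channel's EventLoop thread, the work runs immediately;
// otherwise it is handed to that EventLoop, so it still executes on the Channel's single
// dedicated thread and no handler synchronization is required.
class EventLoopDispatch {
    static void writeFromAnyThread(final Channel channel) {
        Runnable writeTask = new Runnable() {
            @Override
            public void run() {
                channel.writeAndFlush(Unpooled.copiedBuffer("data", CharsetUtil.UTF_8));
            }
        };
        if (channel.eventLoop().inEventLoop()) {
            writeTask.run();                         // already on the Channel's thread
        } else {
            channel.eventLoop().execute(writeTask);  // schedule on the Channel's EventLoop
        }
    }
}
```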
Chapter8
- The difference between handler() and childHandler() is that the former adds a handler that's processed by the accepting ServerChannel, whereas childHandler() adds a handler that's processed by an accepted Channel, which represents a socket bound to a remote peer.
- Reuse EventLoops wherever possible to reduce the cost of thread creation (see the sketch below).
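A sketch of EventLoop reuse: a handler on an accepted Channel bootstraps an outbound connection on the same EventLoop, so no new thread is created. The target address and the empty placeholder handler are assumptions:

```java
import java.net.InetSocketAddress;

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.socket.nio.NioSocketChannel;

class ProxyLikeHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        Bootstrap bootstrap = new Bootstrap();
        bootstrap.group(ctx.channel().eventLoop())              // reuse the same EventLoop (and Thread)
                 .channel(NioSocketChannel.class)
                 .handler(new ChannelInboundHandlerAdapter());  // placeholder handler
        bootstrap.connect(new InetSocketAddress("example.com", 80));
    }
}
```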
Chapter11
- Provided ChannelHandlers and codecs:
- SslHandler
- HTTP decoders and encoders:
- HttpRequestEncoder: Encodes HttpRequest, HttpContent, and LastHttpContent messages to bytes
- HttpResponseEncoder: Encodes HttpResponse, HttpContent, and LastHttpContent messages to bytes
- HttpRequestDecoder: Decodes bytes into HttpRequest, HttpContent, and LastHttpContent messages
- HttpResponseDecoder: Decodes bytes into HttpResponse, HttpContent, and LastHttpContent messages
- HttpClientCodec: Combines HttpRequestEncoder and HttpResponseDecoder
- HttpServerCodec: Combines HttpRequestDecoder and HttpResponseEncoder
- HttpObjectAggregator: Aggregates multiple HttpObject parts into a FullHttpRequest or FullHttpResponse
- HttpContentCompressor: Compresses HTTP content; currently supports gzip and deflate
- HttpContentDecompressor: Decompresses HTTP content
- IdleStateHandler: Fires an IdleStateEvent if the connection has been idle too long. You can then handle the IdleStateEvent by overriding userEventTriggered() in your ChannelInboundHandler (see the sketch after this list)
- ReadTimeoutHandler: Throws a ReadTimeoutException and closes the Channel when no inbound data is received for a specified interval. The ReadTimeoutException can be detected by overriding exceptionCaught() in your ChannelHandler
- WriteTimeoutHandler: Throws a WriteTimeoutException and closes the Channel when a write operation doesn't complete within a specified interval. The WriteTimeoutException can be detected by overriding exceptionCaught() in your ChannelHandler
- DelimiterBasedFrameDecoder: A generic decoder that extracts frames using any user-provided delimiter
- LineBasedFrameDecoder: A decoder that extracts frames delimited by the line endings `\n` or `\r\n`. This decoder is faster than DelimiterBasedFrameDecoder
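A sketch combining several of the handlers above into one server pipeline; the 60-second idle timeout, the 512 KB aggregation limit, and the close-on-idle policy are illustrative assumptions:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.timeout.IdleStateEvent;
import io.netty.handler.timeout.IdleStateHandler;

class HttpWithIdleInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(new HttpServerCodec());                 // HttpRequestDecoder + HttpResponseEncoder
        ch.pipeline().addLast(new HttpObjectAggregator(512 * 1024));  // builds FullHttpRequest messages
        ch.pipeline().addLast(new IdleStateHandler(60, 0, 0));        // fires IdleStateEvent after 60s without reads
        ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
            @Override
            public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
                if (evt instanceof IdleStateEvent) {
                    ctx.close();                        // drop connections that have gone idle
                } else {
                    super.userEventTriggered(ctx, evt); // pass other events along
                }
            }
        });
    }
}
```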
- ChunkedInput implementations:
- ChunkedFile: Fetches data from a file chunk by chunk, for use when your platform doesn't support zero-copy or you need to transform the data
- ChunkedNioFile: Similar to ChunkedFile except that it uses FileChannel
- ChunkedStream: Transfers content chunk by chunk from an InputStream
- ChunkedNioStream: Transfers content chunk by chunk from a ReadableByteChannel
- To use your own ChunkedInput implementation, install a ChunkedWriteHandler in the pipeline. Use ChunkedWriteHandler to write large data without risking an OutOfMemoryError (sketched below).
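A sketch of chunked writing along these lines: ChunkedWriteHandler sits in the pipeline and the file is written as a ChunkedInput once the Channel becomes active. The File is assumed to exist and be readable:

```java
import java.io.File;

import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.handler.stream.ChunkedFile;
import io.netty.handler.stream.ChunkedWriteHandler;

class ChunkedFileInitializer extends ChannelInitializer<Channel> {
    private final File file;

    ChunkedFileInitializer(File file) {
        this.file = file;
    }

    @Override
    protected void initChannel(Channel ch) {
        ch.pipeline().addLast(new ChunkedWriteHandler());      // streams ChunkedInput messages
        ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
            @Override
            public void channelActive(ChannelHandlerContext ctx) throws Exception {
                // Written chunk by chunk rather than loaded into memory at once.
                ctx.writeAndFlush(new ChunkedFile(file));
            }
        });
    }
}
```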
- JDK serialization codecs:
- CompatibleObjectDecoder: Decoder for interoperating with non-Netty peers that use JDK serialization
- CompatibleObjectEncoder: Encoder for interoperating with non-Netty peers that use JDK serialization
- ObjectDecoder: Decoder that uses custom serialization for decoding on top of JDK serialization; it provides a speed improvement when external dependencies are excluded. Otherwise the other serialization implementations are preferable.
- ObjectEncoder: Encoder that uses custom serialization for encoding on top of JDK serialization; it provides a speed improvement when external dependencies are excluded. Otherwise the other serialization implementations are preferable.
- JBoss marshalling. If you are free to make use of external dependencies, JBoss marshalling is ideal: It’s up to 3 times faster than JDK serialization and more compact.
- JBoss marshalling codecs:
- CompatibleMarshallingDecoder: For compatibility with peers that use JDK serialization
- CompatibleMarshallingEncoder
- MarshallingDecoder: For use with peers that use JBoss Marshalling. These classes must be used together.
- MarshallingEncoder
- Protobuf codec:
- ProtobufDecoder: Decodes a message using Protobuf
- ProtobufEncoder: Encodes a message using Protobuf
- ProtobufVarint32FrameDecoder: Splits received ByteBufs dynamically by the value of the Google Protocol Buffers "Base 128 Varints" integer length field in the message (a pipeline sketch follows)
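A sketch of a protobuf pipeline wiring these codecs together; MyProto.MyMessage is a hypothetical protoc-generated class, and the commented-out application handler is a placeholder:

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.protobuf.ProtobufDecoder;
import io.netty.handler.codec.protobuf.ProtobufEncoder;
import io.netty.handler.codec.protobuf.ProtobufVarint32FrameDecoder;
import io.netty.handler.codec.protobuf.ProtobufVarint32LengthFieldPrepender;

class ProtobufInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ChannelPipeline p = ch.pipeline();
        // Inbound: strip the varint length prefix, then decode the protobuf payload.
        p.addLast(new ProtobufVarint32FrameDecoder());
        p.addLast(new ProtobufDecoder(MyProto.MyMessage.getDefaultInstance())); // hypothetical generated message
        // Outbound: prepend the varint length prefix, then encode the message.
        p.addLast(new ProtobufVarint32LengthFieldPrepender());
        p.addLast(new ProtobufEncoder());
        // p.addLast(new MyMessageHandler()); // application logic would follow here (placeholder)
    }
}
```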