I'm working on some SocketChannel-to-SocketChannel code which will do best with a direct byte buffer--long lived and large (tens to hundreds of megabytes).
There are many reasons why this could happen. Without code and/or more details about the data, we can only guess what is happening.
Some guesses: the -Xmx and -Xms parameters.

First, I am a bit surprised it's not common knowledge, but bear with me.
Direct byte buffers allocate an address outside the Java heap.

This is of utmost importance: all OS (and native C) functions can use that address without locking the object on the heap and copying the data. A short example on copying: in order to send any data via Socket.getOutputStream().write(byte[]), the native code has to "lock" the byte[], copy it outside the Java heap and then call the OS function, e.g. send. The copy is performed either on the stack (for smaller byte[]) or via malloc/free for larger ones.

DatagramSockets are no different and they also copy - except they are limited to 64 KB and allocated on the stack, which can even kill the process if the thread stack is not large enough or the code is deep in recursion.

Note: locking prevents the JVM/GC from moving/reallocating the object around the heap.
So with the introduction of NIO, the idea was to avoid the copy and the multitude of stream pipelining/indirection. Often there are 3-4 buffered types of streams before the data reaches its destination. (Yay, Poland equalizes(!) with a beautiful shot.)

By introducing direct buffers, Java can communicate straight with the native C code without any locking/copy necessary. Hence the send function can take the address of the buffer plus the position, and the performance is much the same as native C.
That's about the direct buffer.
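To make that concrete, here is a minimal sketch of allocating a direct buffer and writing it to a SocketChannel (the host/port and the payload are made-up values for illustration); the channel implementation can hand the buffer's native address straight to the OS write call:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;
    import java.nio.charset.StandardCharsets;

    public class DirectWrite {
        public static void main(String[] args) throws IOException {
            // allocated outside the Java heap; the OS can use its address directly
            ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024);
            buf.put("hello".getBytes(StandardCharsets.US_ASCII));
            buf.flip();
            try (SocketChannel ch = SocketChannel.open(new InetSocketAddress("localhost", 9000))) {
                while (buf.hasRemaining()) {
                    ch.write(buf); // no heap-to-native copy needed for a direct buffer
                }
            }
        }
    }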
The main issue with direct buffers: they are expensive to allocate and expensive to deallocate, and quite cumbersome to use - nothing like byte[].

Non-direct buffers do not offer the true essence that direct buffers do - i.e. the direct bridge to the native/OS code. Instead they are lightweight and share exactly the same API - and even more, they can wrap a byte[] and even their backing array is available for direct manipulation - what's not to love? Well, they have to be copied!
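A tiny sketch of that heap-buffer convenience (the byte values are arbitrary):

    import java.nio.ByteBuffer;

    public class HeapBufferDemo {
        public static void main(String[] args) {
            byte[] raw = {1, 2, 3, 4};
            ByteBuffer heap = ByteBuffer.wrap(raw);  // non-direct: just a view over the byte[]
            heap.put(0, (byte) 42);                  // writes straight through to raw[0]
            System.out.println(raw[0]);              // 42
            System.out.println(heap.array() == raw); // true: the backing array is the same object
        }
    }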
So how does Sun/Oracle handle non-direct buffers, since the OS/native code can't use them? Well, naively. When a non-direct buffer is used, a direct counterpart has to be created. The implementation is smart enough to use a ThreadLocal and cache a few direct buffers via SoftReference* to avoid the hefty cost of creation. The naive part comes when copying them - it attempts to copy the entire buffer (remaining()) each time.
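A very rough sketch of that kind of per-thread cache - an illustration of the pattern only, not the actual JDK code:

    import java.lang.ref.SoftReference;
    import java.nio.ByteBuffer;

    public final class TempDirectBuffers {
        // one softly referenced direct buffer per thread, grown on demand
        private static final ThreadLocal<SoftReference<ByteBuffer>> CACHE = new ThreadLocal<>();

        static ByteBuffer get(int size) {
            SoftReference<ByteBuffer> ref = CACHE.get();
            ByteBuffer buf = (ref == null) ? null : ref.get();
            if (buf == null || buf.capacity() < size) {
                buf = ByteBuffer.allocateDirect(size); // the expensive part the cache tries to avoid
                CACHE.set(new SoftReference<>(buf));
            }
            buf.clear();
            return buf;
        }
    }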
Now imagine: a 512 KB non-direct buffer going to a 64 KB socket buffer; the socket buffer won't take more than its size. So the first time 512 KB will be copied from non-direct to thread-local direct, but only 64 KB of it will be used. The next time 512-64 KB will be copied but only 64 KB used, the third time 512-64*2 KB will be copied but only 64 KB used, and so on... and that's optimistic, assuming the socket buffer is always entirely empty. So you are not only copying n KB in total, but n × n ÷ m (n = 512, m = 16, the average space the socket buffer has left) - with those numbers, that is 512 × 512 ÷ 16 = 16384 KB copied to move 512 KB of data.
The copying part is a common/abstract path for all non-direct buffers, so the implementation never knows the target capacity. The copying trashes the caches and so on, and it reduces memory bandwidth, etc.
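For a concrete picture, this is the kind of write loop that hits that path (a sketch only; drain is a made-up helper name):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;

    public class NaiveHeapWrite {
        // Writing a large heap (non-direct) buffer: as described above, each write()
        // copies buf.remaining() bytes into a cached temporary direct buffer before
        // the OS call, even though the socket only accepts a small chunk per call.
        static void drain(SocketChannel ch, ByteBuffer buf) throws IOException {
            while (buf.hasRemaining()) {
                ch.write(buf);
            }
        }
    }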
*A note on SoftReference caching: it depends on the GC implementation and the experience can vary. Sun's GC uses the free heap memory to determine the lifespan of the SoftReferences, which leads to some awkward behavior when they are freed - the application needs to allocate the previously cached objects again - i.e. more allocation. (Direct ByteBuffers take a minor part in the heap, so at least they do not add to the extra cache trashing, but they get affected by it instead.)
My rule of thumb: a pooled direct buffer sized to match the socket read/write buffer. The OS never copies more than necessary.
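A minimal sketch of that rule of thumb, assuming a 64 KB socket buffer (in practice you would size it from the channel's actual SO_SNDBUF/SO_RCVBUF value):

    import java.nio.ByteBuffer;

    public final class SocketBufferPool {
        // assumed socket buffer size; query the real one via
        // channel.getOption(StandardSocketOptions.SO_SNDBUF) when possible
        private static final int SOCKET_BUF_SIZE = 64 * 1024;

        private static final ThreadLocal<ByteBuffer> POOL =
                ThreadLocal.withInitial(() -> ByteBuffer.allocateDirect(SOCKET_BUF_SIZE));

        // returns a cleared direct buffer no larger than the socket buffer
        public static ByteBuffer acquire() {
            ByteBuffer buf = POOL.get();
            buf.clear();
            return buf;
        }
    }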
This micro-benchmark is mostly a memory throughput test; the OS will have the file entirely in cache, so it mostly tests memcpy. Once the buffers run out of the L2 cache, the drop in performance becomes noticeable. Also, running the benchmark like that imposes increasing and accumulating GC collection costs. (rest() will not collect the soft-referenced ByteBuffers.)
I suspect that these knees are due to tripping across a CPU cache boundary. The "non-direct" buffer read()/write() implementation "cache misses" earlier due to the additional memory buffer copy compared to the "direct" buffer read()/write() implementation.
I wonder if the thread local allocation buffer (TLAB) during the test is around 256K. Use of TLABs optimizes allocations from the heap so that the non-direct allocations of <=256K are fast.
What is commonly done is to give each thread a buffer that is used exclusively by that thread to do allocations. You have to use some synchronization to allocate the buffer from the heap, but after that the thread can allocate from the buffer without synchronization. In the HotSpot JVM we refer to these as thread local allocation buffers (TLABs). They work well.
If my hypothesis about a 256K TLAB is correct, then information later in the article suggests that perhaps the >256K allocations for the larger non-direct buffers bypass the TLAB. These allocations go straight to the heap, requiring thread synchronization and thus incurring the performance hits.
An allocation that can not be made from a TLAB does not always mean that the thread has to get a new TLAB. Depending on the size of the allocation and the unused space remaining in the TLAB, the VM could decide to just do the allocation from the heap. That allocation from the heap would require synchronization but so would getting a new TLAB. If the allocation was considered large (some significant fraction of the current TLAB size), the allocation would always be done out of the heap. This cut down on wastage and gracefully handled the much-larger-than-average allocation.
This hypothesis could be tested using information from a later article indicating how to tweak the TLAB and get diagnostic info:
To experiment with a specific TLAB size, two -XX flags need to be set, one to define the initial size, and one to disable the resizing:
-XX:TLABSize= -XX:-ResizeTLAB
The minimum size of a TLAB is set with -XX:MinTLABSize, which defaults to 2K bytes. The maximum size is the maximum size of an integer Java array, which is used to fill the unallocated portion of a TLAB when a GC scavenge occurs.
Diagnostic Printing Options
-XX:+PrintTLAB
Prints at each scavenge one line for each thread (starts with "TLAB: gc thread: " without the "'s) and one summary line.
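For example, a run combining the flags above (the 256k size and the BufferBenchmark class name are placeholders for whatever the benchmark actually uses):

    java -XX:TLABSize=256k -XX:-ResizeTLAB -XX:+PrintTLAB BufferBenchmark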