Question
I have the following code snippet:
datagramChannel = DatagramChannel.open(StandardProtocolFamily.INET)
        .setOption(StandardSocketOptions.SO_REUSEADDR, true)
        .setOption(StandardSocketOptions.IP_MULTICAST_IF, networkInterface);
datagramChannel.configureBlocking(true);
datagramChannel.bind(new InetSocketAddress(
        filter.getType().getPort(filter.getTimeFrameType())));
datagramChannel.join(group, networkInterface);
datagramChannel.receive(buffer);
This code runs inside a Callable, and I create up to 12 Callables (hence 12 threads) to receive multicast packets carrying different data from 12 different ports. Each one only reads information that is broadcast on the network every 3-8 seconds.
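For reference, the setup described above presumably looks something like the following self-contained reconstruction; the group address, interface name, port range, and class name are assumptions for illustration, not values from the original post:

import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.NetworkInterface;
import java.net.StandardProtocolFamily;
import java.net.StandardSocketOptions;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MulticastPoller implements Callable<Void> {
    private final int port;
    private final InetAddress group;
    private final NetworkInterface nic;

    MulticastPoller(int port, InetAddress group, NetworkInterface nic) {
        this.port = port;
        this.group = group;
        this.nic = nic;
    }

    @Override
    public Void call() throws Exception {
        DatagramChannel channel = DatagramChannel.open(StandardProtocolFamily.INET)
                .setOption(StandardSocketOptions.SO_REUSEADDR, true)
                .setOption(StandardSocketOptions.IP_MULTICAST_IF, nic);
        channel.configureBlocking(true);
        channel.bind(new InetSocketAddress(port));
        channel.join(group, nic);

        ByteBuffer buffer = ByteBuffer.allocate(65535);
        while (!Thread.currentThread().isInterrupted()) {
            buffer.clear();
            channel.receive(buffer);   // one receive() loop per port, each in its own thread
            buffer.flip();
            // ... process the datagram ...
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.1.2.3");    // assumed multicast group
        NetworkInterface nic = NetworkInterface.getByName("eth0"); // assumed interface
        ExecutorService pool = Executors.newFixedThreadPool(12);
        for (int port = 5000; port < 5012; port++) {               // assumed ports
            pool.submit(new MulticastPoller(port, group, nic));
        }
    }
}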
When polling the 12 ports continuously (wait for the information, read it, and so on), this eats 100% of one of my CPU cores.
Profiling the execution with JVisualVM, I see that 90% of the execution time is spent in java.nio.channels.DatagramChannel#receive(), and more precisely in com.sun.nio.ch.DatagramChannelImpl#receiveIntoBuffer().
I don't really understand why blocking mode eats so much CPU.
I have read some articles on using Selectors instead of blocking mode, but I don't really see why a while (true) loop with a Selector would consume less CPU than a blocking channel.
Answer 1:
The problem is that you are using NIO without a Selector. NIO without a Selector is fine to use, but then Channel.receive() ends up being called over and over in a tight loop, which shows up as high CPU usage for one thread.
There are two solutions:
- Use a Selector to detect when there is something to read, and call channel.receive() only when the Selector indicates that data is available (see the first sketch below).
- Use java.net.DatagramSocket/DatagramPacket to send and receive in blocking mode (see the second sketch below).
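Here is a minimal sketch of the first approach. The multicast group 239.1.2.3, interface eth0, and the port list are placeholders, not values from the question. All channels are registered with a single Selector in non-blocking mode; select() parks the thread in the kernel until at least one channel actually has a datagram, so the loop does not spin:

import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.NetworkInterface;
import java.net.StandardProtocolFamily;
import java.net.StandardSocketOptions;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class SelectorReceiveSketch {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.1.2.3");    // placeholder group
        NetworkInterface nic = NetworkInterface.getByName("eth0"); // placeholder interface
        int[] ports = {5000, 5001, 5002};                          // placeholder ports

        Selector selector = Selector.open();
        for (int port : ports) {
            DatagramChannel channel = DatagramChannel.open(StandardProtocolFamily.INET)
                    .setOption(StandardSocketOptions.SO_REUSEADDR, true)
                    .setOption(StandardSocketOptions.IP_MULTICAST_IF, nic);
            channel.bind(new InetSocketAddress(port));
            channel.join(group, nic);
            channel.configureBlocking(false);                // must be non-blocking to register
            channel.register(selector, SelectionKey.OP_READ);
        }

        ByteBuffer buffer = ByteBuffer.allocate(65535);
        while (true) {
            selector.select();                               // blocks until a channel is readable
            for (SelectionKey key : selector.selectedKeys()) {
                DatagramChannel channel = (DatagramChannel) key.channel();
                buffer.clear();
                channel.receive(buffer);                     // returns immediately: data is ready
                buffer.flip();
                // ... process the datagram for this port ...
            }
            selector.selectedKeys().clear();
        }
    }
}

A side benefit is that one thread can service all 12 ports this way, instead of one thread per port.

And a sketch of the second approach using the old blocking java.net API, here with MulticastSocket (a DatagramSocket subclass) since the question involves multicast groups; group, interface and port are again placeholders:

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.MulticastSocket;
import java.net.NetworkInterface;

public class BlockingReceiveSketch {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.1.2.3");    // placeholder group
        NetworkInterface nic = NetworkInterface.getByName("eth0"); // placeholder interface
        int port = 5000;                                           // placeholder port

        try (MulticastSocket socket = new MulticastSocket(port)) {
            socket.joinGroup(new InetSocketAddress(group, port), nic);
            byte[] buf = new byte[65535];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            while (true) {
                socket.receive(packet);      // blocks in the kernel until a datagram arrives
                // ... process packet.getData() up to packet.getLength() ...
            }
        }
    }
}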
Source: https://stackoverflow.com/questions/21826013/datagramchannel-blocking-mode-and-cpu