Question
Is there a valid reason not to use TcpListener for implementing a high-performance/high-throughput TCP server, instead of SocketAsyncEventArgs?
I've already implemented this high-performance/high-throughput TCP server using SocketAsyncEventArgs and went through all sorts of headaches handling those pinned buffers, using a big pre-allocated byte array and pools of SocketAsyncEventArgs for accepting and receiving, putting it together with some low-level stuff and shiny smart code with some TPL Dataflow and some Rx, and it works perfectly; almost textbook in this endeavor. Actually, I've learnt more than 80% of this stuff from other people's code.
However, there are some problems and concerns:
- Complexity: I cannot delegate any sort of modification to this server to another member of the team. That ties me to this sort of task, and I cannot pay enough attention to other parts of other projects.
- Memory usage (pinned byte arrays): Using SocketAsyncEventArgs, the pools need to be pre-allocated. So to handle 100,000 concurrent connections (the worst case, even across different ports), a big pile of RAM uselessly hovers there, pre-allocated (even if those conditions are only met some of the time, the server should be able to handle 1 or 2 such peaks every day). TcpListener actually works well: I put TcpListener to the test (with some tricks, like using AcceptTcpClient on a dedicated thread rather than the async version, and then handing the accepted connections to a ConcurrentQueue instead of creating Tasks in place, and the like; see the sketch after this list), and with the latest version of .NET it worked very well, almost as well as SocketAsyncEventArgs: no data loss and a low memory footprint, which helps with not wasting too much RAM on the server, and no pre-allocation is needed.
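Roughly, the accept loop looked like the sketch below. This is only an illustration of the tricks just mentioned; the port, the backlog, and the Handle method are placeholders, not values from the real server.

using System;
using System.Collections.Concurrent;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class AcceptLoopSketch
{
    static readonly ConcurrentQueue<TcpClient> Accepted = new ConcurrentQueue<TcpClient>();

    static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, 9000); // port is illustrative
        listener.Start(512); // large backlog to survive accept bursts

        // Dedicated thread running the blocking AcceptTcpClient,
        // not the async version, and no Task created per accept.
        new Thread(() =>
        {
            while (true)
                Accepted.Enqueue(listener.AcceptTcpClient());
        }) { IsBackground = true }.Start();

        // Consumers drain the queue and hand connections off for processing.
        while (true)
        {
            TcpClient client;
            if (Accepted.TryDequeue(out client))
                ThreadPool.QueueUserWorkItem(_ => Handle(client));
            else
                Thread.Sleep(1); // crude backoff; a SemaphoreSlim would be nicer
        }
    }

    static void Handle(TcpClient client) { /* per-connection processing */ }
}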
So why don't I see TcpListener being used anywhere, and why is everybody (including myself) using SocketAsyncEventArgs? Am I missing something?
Answer 1:
I see no evidence that this question is about TcpListener at all. It seems you are only concerned with the code that deals with a connection that has already been accepted. Such a connection is independent of the listener.
SocketAsyncEventArgs is a CPU-load optimization. I'm convinced you can achieve a higher rate of operations per second with it. How significant is the difference compared to normal APM/TAP async IO? Certainly less than an order of magnitude. Probably between 1.2x and 3x. The last time I benchmarked loopback TCP transaction rates, I found that the kernel took about half of the CPU usage. That means your app can get at most 2x faster by being infinitely optimized.
Remember that SocketAsyncEventArgs was added to the BCL around 2007 (.NET 3.5), when CPUs were far less capable.
Use SocketAsyncEventArgs only when you have evidence that you need it. It makes you far less productive and creates more potential for bugs.
Here's the template your socket processing loop should follow:
while (ConnectionEstablished()) {
    // ReadFromSocketAsync and ProcessDataAsync are placeholders
    // for your own IO and processing logic.
    var someData = await ReadFromSocketAsync(socket);
    await ProcessDataAsync(someData);
}
Very simple code. No callbacks thanks to await.
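If it helps, here is one hedged way to flesh that template out with real BCL calls; NetworkStream, the 4 KB buffer size, and the stubbed ProcessDataAsync are assumptions for illustration, not a prescription:

using System;
using System.Net.Sockets;
using System.Threading.Tasks;

class ReceiveLoopSketch
{
    static async Task HandleConnectionAsync(TcpClient client)
    {
        using (client)
        using (NetworkStream stream = client.GetStream())
        {
            var buffer = new byte[4096]; // size is illustrative
            while (true)
            {
                // await keeps the thread free while waiting for data.
                int n = await stream.ReadAsync(buffer, 0, buffer.Length);
                if (n == 0) break; // peer closed the connection
                await ProcessDataAsync(new ArraySegment<byte>(buffer, 0, n));
            }
        }
    }

    // Placeholder, as in the template above.
    static Task ProcessDataAsync(ArraySegment<byte> data) => Task.CompletedTask;
}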
In case you are concerned about managed heap fragmentation: allocate a new byte[1024 * 1024] on startup. When you want to read from a socket, read a single byte into some free portion of this buffer. When that single-byte read completes, you ask how many bytes are actually there (Socket.Available) and synchronously pull the rest. That way you pin only a single, rather small buffer and can still use async IO to wait for data to arrive.
This technique does not require polling. Since Socket.Available can only increase while we are not reading from the socket, we do not risk accidentally performing a read that is too small.
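A minimal sketch of that single-byte wakeup, assuming the Task-based Socket.ReceiveAsync extension and a per-connection slice (offset/length) of the shared buffer; HandlePayload is a placeholder:

using System;
using System.Net.Sockets;
using System.Threading.Tasks;

class SingleByteWakeupSketch
{
    // The one buffer allocated at startup; the only array the socket layer pins.
    static readonly byte[] SharedBuffer = new byte[1024 * 1024];

    // offset/length describe this connection's slice of the shared buffer.
    static async Task ReceiveLoopAsync(Socket socket, int offset, int length)
    {
        while (true)
        {
            // Async wait for data: a single-byte read pins only one byte
            // of the long-lived buffer while the connection is idle.
            int n = await socket.ReceiveAsync(
                new ArraySegment<byte>(SharedBuffer, offset, 1), SocketFlags.None);
            if (n == 0) break; // peer closed the connection

            // Data is there now; Socket.Available reports what the kernel
            // has already buffered, so we can pull it synchronously.
            int rest = Math.Min(socket.Available, length - 1);
            int received = 1;
            if (rest > 0)
                received += socket.Receive(SharedBuffer, offset + 1, rest, SocketFlags.None);

            HandlePayload(new ArraySegment<byte>(SharedBuffer, offset, received));
        }
    }

    // Placeholder for the application's processing logic.
    static void HandlePayload(ArraySegment<byte> payload) { }
}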
Alternatively, you can combat managed heap fragmentation by allocating a few very big buffers and handing out chunks.
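For instance, a minimal chunk pool along those lines might look like this sketch; the pool sizes and the ArraySegment-based API are assumptions, not a prescription:

using System;
using System.Collections.Concurrent;

class ChunkPool
{
    readonly ConcurrentBag<ArraySegment<byte>> _free = new ConcurrentBag<ArraySegment<byte>>();

    public ChunkPool(int bufferCount, int chunksPerBuffer, int chunkSize)
    {
        for (int b = 0; b < bufferCount; b++)
        {
            // A large enough allocation (>= ~85 KB) lands on the large
            // object heap, which the GC does not compact by default,
            // so the chunks handed out below never move.
            var buffer = new byte[chunksPerBuffer * chunkSize];
            for (int c = 0; c < chunksPerBuffer; c++)
                _free.Add(new ArraySegment<byte>(buffer, c * chunkSize, chunkSize));
        }
    }

    public bool TryRent(out ArraySegment<byte> chunk) => _free.TryTake(out chunk);
    public void Return(ArraySegment<byte> chunk) => _free.Add(chunk);
}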
Or, if you don't find fragmentation to be a problem in practice, you don't need to do anything.
Source: https://stackoverflow.com/questions/21656077/socketasynceventargs-vs-tcplistener-tcpclient