Our high-throughput application (~1 Gbps) benefits greatly from large ReceiveBufferSize and SendBufferSize values.
I noticed that on my machine I can set a 100 MB buffer size without any errors:
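A minimal sketch of the kind of code in question (the socket setup here is illustrative; only the 100 MB figure comes from the question):

```
using System;
using System.Net.Sockets;

var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

// Request 100 MB buffers; these map to the native SO_RCVBUF / SO_SNDBUF options.
socket.ReceiveBufferSize = 100 * 1024 * 1024;
socket.SendBufferSize    = 100 * 1024 * 1024;

Console.WriteLine(socket.ReceiveBufferSize); // reports the size actually in effect
```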
Actually, for high-performance networking, the SO_RCVBUF and SO_SNDBUF options should be set to 0 to avoid buffer copies, as per KB181611:
If you use the SO_RCVBUF and SO_SNDBUF option to set zero TCP stack receive and send buffer, you basically instruct the TCP stack to directly perform I/O using the buffer provided in your I/O call. Therefore, in addition to the nonblocking advantage of the overlapped socket I/O, the other advantage is better performance because you save a buffer copy between the TCP stack buffer and the user buffer for each I/O call. But you have to make sure you don't access the user buffer once it's submitted for overlapped operation and before the overlapped operation completes.
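In managed code, that zero-byte setting looks roughly like the sketch below (my own illustration, not from the KB article). Note it only pays off with overlapped/asynchronous I/O, and you must keep receives posted or throughput will collapse:

```
using System.Net.Sockets;

var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

// Zero-byte stack buffers: the TCP stack does I/O directly against the
// buffers supplied in each call instead of copying through its own.
socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReceiveBuffer, 0);
socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.SendBuffer, 0);

// From here on, use only overlapped operations (e.g. ReceiveAsync/SendAsync),
// and never touch a buffer between posting it and the operation completing.
```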
The maximum values you can set for these options (which are the real settings behind the managed Socket.ReceiveBufferSize) are 'implementation dependent'. Other TCP parameters are documented at TCP/IP Registry Settings.
Those two properties internally set the underlying socket options (via SetSocketOption, which eventually calls the native setsockopt). If memory serves, the limits depend on the non-paged pool memory available (which varies from machine to machine) and potentially on which network driver each machine uses.
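As a quick sketch of that equivalence (the 8 MB value is arbitrary):

```
using System.Net.Sockets;

var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

// Both of these end up in the same native setsockopt(SO_RCVBUF) call:
socket.SetSocketOption(SocketOptionLevel.Socket,
                       SocketOptionName.ReceiveBuffer, 8 * 1024 * 1024);
socket.ReceiveBufferSize = 8 * 1024 * 1024;
```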
Regardless, you aren't actually guaranteed that the buffer size you request is used; you'll have to retrieve the current buffer size after the fact to verify it took effect. Moreover, on Windows 7 and 2008 machines, it is my understanding that your buffers may be dynamically increased or decreased.
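A hedged sketch of that verification step (the 8 MB request is arbitrary, and `socket` is any Socket you have created):

```
socket.ReceiveBufferSize = 8 * 1024 * 1024;   // what we asked for

// Read back what the stack actually granted; the property is a live query.
int granted = socket.ReceiveBufferSize;

// The same value via the raw option, since the property is just a wrapper:
int raw = (int)socket.GetSocketOption(SocketOptionLevel.Socket,
                                      SocketOptionName.ReceiveBuffer);
```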
In short, you likely can only test increasing buffer sizes and take the maximum that does not cause an error; there are too many variables at play that could determine the maximum size.
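A hypothetical probe along those lines (ProbeMaxReceiveBuffer is my own name, not a framework API; it assumes an oversized request either throws a SocketException or is silently clamped):

```
using System;
using System.Net.Sockets;

static int ProbeMaxReceiveBuffer(Socket socket)
{
    int size = 64 * 1024;                 // start at 64 KB
    int best = socket.ReceiveBufferSize;  // fall back to the current size

    while (size > 0)                      // the guard stops on int overflow
    {
        try
        {
            socket.ReceiveBufferSize = size;
        }
        catch (SocketException)
        {
            break;                        // the stack refused this size outright
        }

        if (socket.ReceiveBufferSize < size)
            break;                        // silently clamped; keep the last good value

        best = size;
        size *= 2;                        // try twice as much next round
    }

    return best;
}

var probe = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
Console.WriteLine(ProbeMaxReceiveBuffer(probe));
```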