I'm writing a client/server application in C#, and it's going great. For now, everything works and it's all pretty robust. My problem is that I run into some delays when
It turns out my server and client were not completely symmetrical after all. I had noticed, but I didn't think it mattered at all. Apparently it's a huge deal. Specifically, the server did this:
// Two separate writes: first the 4-byte size header, then the payload itself.
ns.Write(sizePacket, 0, sizePacket.Length);
ns.Write(response, 0, response.Length);
Which I changed into this:
// ... concatenate sizePacket and response into one array, same as client code above
ns.Write(responseWithHeader, 0, responseWithHeader.Length);
And now the delay is completely gone, or at least it's no longer measurable in milliseconds. So that's something like a 100x speedup right there. \o/
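For completeness, the concatenation that the comment above glosses over might look roughly like this (a minimal sketch; the actual client code isn't shown here, so the construction of responseWithHeader is my own illustration of the idea: build one buffer holding the 4-byte length prefix followed by the payload, and hand it to the stream in a single Write):

// Illustrative sketch, not the original client code: build the combined buffer.
byte[] sizePacket = BitConverter.GetBytes(response.Length); // 4-byte length prefix
byte[] responseWithHeader = new byte[sizePacket.Length + response.Length];
Buffer.BlockCopy(sizePacket, 0, responseWithHeader, 0, sizePacket.Length);
Buffer.BlockCopy(response, 0, responseWithHeader, sizePacket.Length, response.Length);
ns.Write(responseWithHeader, 0, responseWithHeader.Length); // one write instead of two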
It's still odd because it's writing exactly the same data to the socket as before, so I guess the socket receives some secret metadata during the write operation, which is then somehow communicated to the remote socket, which may interpret it as an opportunity to take a nap. Either that, or the first write puts the socket into receive mode, causing it to trip up when it's then asked to send again before it has received anything.
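If I had to guess at the real mechanism, the usual suspect for this kind of stall is the interaction between Nagle's algorithm (which holds back small writes in the hope of coalescing them) and the receiver's delayed ACK; that's an assumption on my part rather than something I've confirmed with a packet capture. If that is what's going on, disabling Nagle on the connection should also hide the problem, although combining the header and payload into one write is still the cleaner fix:

// Sketch: turn off Nagle's algorithm so small writes go out immediately.
// TcpClient.NoDelay is a standard .NET property; the host and port are placeholders.
using System.Net.Sockets;

var client = new TcpClient("example.host", 12345);
client.NoDelay = true; // don't hold back small segments waiting for an ACK
NetworkStream ns = client.GetStream();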
I suppose the implication is that all of this example code you find lying around, which shows how to write to and read from sockets in fixed-size chunks (often preceded by a single int describing the size of the packet to follow, same as my first version), fails to mention that issuing the header and the payload as two separate small writes can carry a very heavy performance penalty.
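For reference, the reading side of that length-prefixed pattern, written so it doesn't assume a single Read returns the whole packet, might look something like this (names and structure are my own sketch, not code from the original project):

// Sketch of reading one length-prefixed packet from a NetworkStream.
// Helper and variable names are illustrative, not from the original code.
using System;
using System.IO;
using System.Net.Sockets;

static class PacketReader
{
    // Read the 4-byte little-endian length prefix, then exactly that many payload bytes.
    public static byte[] ReadPacket(NetworkStream ns)
    {
        int size = BitConverter.ToInt32(ReadExactly(ns, 4), 0);
        return ReadExactly(ns, size);
    }

    // NetworkStream.Read may return fewer bytes than requested, so loop until done.
    static byte[] ReadExactly(NetworkStream ns, int count)
    {
        var buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = ns.Read(buffer, offset, count - offset);
            if (read == 0)
                throw new EndOfStreamException("Connection closed mid-packet.");
            offset += read;
        }
        return buffer;
    }
}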