Question
My scenario is that I have a hundred small text files that I want to load, parse, and store in a DLL. Clients of the DLL are transient (command line programs), and I would prefer not to reload the data on every command line invocation.
So, I thought I would write a Windows server to store the data and have the clients query the server over TCP. But the TCP performance was really slow. I wrote the following code using Stopwatch to measure the socket setup time.
// time the TCP interaction to see where the time goes
var stopwatch = new Stopwatch();
stopwatch.Start();

// create and connect socket to remote host
client = new TcpClient(hostname, hostport); // auto-connects to server
Console.WriteLine("Connected to {0}", hostname);

// get a stream handle from the connected client
netstream = client.GetStream();

// send the command to the far end
netstream.Write(sendbuf, 0, sendbuf.Length);
Console.WriteLine("Sent command to far end: '{0}'", cmd);

stopwatch.Stop();
sendTime = stopwatch.ElapsedMilliseconds;
Much to my surprise, that little bit of code took 1,037 milliseconds (about 1 second) to execute. I expected the time to be far smaller. Is that a normal socket setup time between a client and a server running on the same modern Windows 10 machine (localhost)?
For comparison, I wrote a loop that loaded 10 files of 100 lines each, and that experiment took only 1ms. So it was 1000x faster to read from disk (an SSD) than to use sockets to talk to a server.
I know what to do in my scenario (use file reads on each invocation), but I would like to know whether anyone can confirm these kinds of socket setup times. Or maybe there are faster interprocess communication mechanisms for a local machine that would compare favorably with file reads and parses. I really don't want to believe that File.ReadAllLines(filepath) is the fastest option when spread over hundreds of command-line client invocations.
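For illustration, named pipes are one such same-machine IPC mechanism, and they skip the TCP stack entirely. Below is a minimal client sketch; the pipe name "datapipe" and the one-line command are hypothetical and would have to match whatever the server exposes.

using System;
using System.IO;
using System.IO.Pipes;

class PipeClientSketch
{
    static void Main()
    {
        // "datapipe" is a hypothetical name; the server would create a
        // NamedPipeServerStream with the same name.
        using (var pipe = new NamedPipeClientStream(".", "datapipe", PipeDirection.InOut))
        {
            pipe.Connect(1000); // wait up to 1 second for the server
            using (var writer = new StreamWriter(pipe) { AutoFlush = true })
            {
                writer.WriteLine("query"); // send a one-line command
            }
        }
    }
}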
EDIT - Avoid DNS lookup by using an explicit IPEndPoint address
Following the comments below, I replaced "localhost" with an IPEndPoint to set up the connection. The change reduced the 1037ms to about 20ms, but (1) the TcpClient would no longer connect automatically, and (2) the text I sent never reached the server. So there must be some difference between the original and IPEndPoint methods.
// new IPEndPoint method
// fast at 20ms, but the server never sees the sent text
string serverIP = "127.0.0.1";
IPAddress address = IPAddress.Parse(serverIP);
IPEndPoint remoteEP = new IPEndPoint(address, hostport);
client = new TcpClient(remoteEP);
client.Connect(remoteEP); // new; required with the IPEndPoint method

// send text command to the far end
netstream = client.GetStream();
netstream.Write(sendbuf, 0, sendbuf.Length);
Console.WriteLine("Sent command to far end: '{0}'", cmd);

stopwatch.Stop();
sendTime = stopwatch.ElapsedMilliseconds;
Console.WriteLine($"Milliseconds for sending by TCP: '{sendTime}'");
// unfortunately, the server never sees the sent text now
I don't know why passing an IPEndPoint to the TcpClient constructor requires an explicit Connect when TcpClient connected automatically before, and I don't know why netstream.Write fails now too. Examples on the net always use socket.Connect and socket.Send with IPEndPoints.
EDIT #2 - Use IPEndPoint with sockets, not streams
// use sockets, not streams
// This code takes 3 seconds to send text to the server.
// But at least this code works. The original code was faster at 1 second.
string serverIP = "127.0.0.1";
IPAddress address = IPAddress.Parse(serverIP);
IPEndPoint remoteEP = new IPEndPoint(address, hostport);
socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream,
                    ProtocolType.Tcp);
socket.Connect(remoteEP);
socket.Send(sendbuf);
EDIT #3 - After experiments based on Evk's comments:
Using the information provided by Evk, I ran several experiments. Three clients and two servers were used (a minimal server sketch follows the list):
Client 1: IPv4 only, using new TcpClient()
Client 2: IPv6 only, using new TcpClient(AddressFamily.InterNetworkV6)
Client 3: DNS returns both IPv4 and IPv6, using new TcpClient("localhost", port)
Server 1: IPv4, using new TcpListener(IPAddress.Loopback, port)
Server 2: IPv6, using new TcpListener(IPAddress.IPv6Loopback, port)
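For reference, here is a minimal sketch of the kind of listener used in these experiments; Server 2 differs only in using IPAddress.IPv6Loopback. The port number and the read-and-print loop are illustrative, not my actual server code.

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class ServerSketch
{
    static void Main()
    {
        // Server 1: IPv4 loopback. For Server 2, use IPAddress.IPv6Loopback.
        var listener = new TcpListener(IPAddress.Loopback, 13000);
        listener.Start();
        while (true)
        {
            using (TcpClient conn = listener.AcceptTcpClient())
            using (NetworkStream stream = conn.GetStream())
            {
                var buffer = new byte[1024];
                int count = stream.Read(buffer, 0, buffer.Length);
                Console.WriteLine(Encoding.ASCII.GetString(buffer, 0, count));
            }
        }
    }
}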
From worst to best, the six possible pairs returned the following results:
C4xS6 - Client 1 (IPv4) with Server 2 (IPv6): connection actively refused.
C6xS4 - Client 2 (IPv6) with Server 1 (IPv4): connection actively refused.
C46xS4 - Client 3 (both) with Server 1 (IPv4): always delayed 1000ms, because the client tried IPv6 first, timed out, and then tried IPv4, which worked consistently. This was the original code in this post.
C46xS6 - Client 3 (both) with Server 2 (IPv6): after a fresh restart of both, the first try was fast (21ms), as were closely spaced subsequent tries. But after waiting a minute or three, the next try took 3000ms, followed by fast 20ms times on closely spaced subsequent tries.
C4xS4 - Same behavior as above. The first try after a fresh restart was fast, as were closely spaced subsequent tries. But after waiting a minute or two, the next try took 3000ms, followed by fast (20ms) closely spaced subsequent tries.
C6xS6 - Same behavior as above. Fast after a fresh server restart, but after a minute or two there was one delayed try (3000ms), followed by fast (20ms) responses to closely spaced tries.
My experiments showed no consistently fast responses over time. There must be some kind of delay, timeout, or sleep behavior when the connections go idle. I use netstream.Close(); client.Close(); to close each connection on each try. (Is that right?) I don't know what could be causing the delayed responses after a minute or two of idle, no-active-connection time.
Any idea what might be causing the delay after a minute or two of idle listening time? The client is supposedly out of system memory, having exited the console program. The server is supposedly doing nothing new, just listening for another connection.
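For reference, the usual cleanup pattern is to wrap both objects in using blocks, which is equivalent to calling netstream.Close() and client.Close() in finally blocks. A minimal sketch, reusing remoteEP and sendbuf from the snippets above; this alone would not explain the idle-time delay:

// Deterministic cleanup on every invocation; Dispose() here does the
// same work as the explicit Close() calls.
using (var client = new TcpClient())
{
    client.Connect(remoteEP);
    using (NetworkStream netstream = client.GetStream())
    {
        netstream.Write(sendbuf, 0, sendbuf.Length);
    }
}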
Answer 1:
No, 1 second to establish a connection to localhost is not expected performance. The problem in your case is not the DNS lookup by itself. A DNS lookup of localhost takes almost no time (a few milliseconds at most) and certainly cannot take 1 second. Below I assume that your TCP server is bound only to the IPv4 loopback address (127.0.0.1), for example like this:
var server = new TcpListener(IPAddress.Loopback, port);
When you initialize the client like this:

new TcpClient("localhost", port)

it queries DNS (which takes no time), and DNS returns two IP addresses: ::1 (IPv6 localhost) and 127.0.0.1 (IPv4 localhost). The client has no idea whether it needs the IPv4 or the IPv6 address, so it tries both, preferring IPv6. The 1 second delay you observe is the time it takes to realize that the connection to ::1 (IPv6 localhost) fails.
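You can verify this yourself with a quick sketch that prints the addresses the resolver returns for "localhost" (Dns.GetHostAddresses is the standard API for this):

using System;
using System.Net;

// On a dual-stack Windows machine this typically prints ::1 and 127.0.0.1.
foreach (IPAddress addr in Dns.GetHostAddresses("localhost"))
    Console.WriteLine(addr);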
If you initialize the client like this:

var client = new TcpClient();

it's the same as:

// InterNetwork means IPv4
var client = new TcpClient(AddressFamily.InterNetwork);

Both of these versions bind the client to a local IPv4 socket. That means when you later do:

client.Connect("localhost", port);

there is no need for the client to try the IPv6 localhost address, because the local socket is IPv4. Either version removes the 1 second delay you observe. Another option is to bind your server to the IPv6 loopback address (IPAddress.IPv6Loopback).
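A third option is a single dual-stack listener that accepts both IPv4 and IPv6 clients. A sketch, assuming .NET 4.5 or later where Socket.DualMode is available:

using System.Net;
using System.Net.Sockets;

// Bind to the IPv6 wildcard and enable dual mode so the same socket also
// accepts IPv4 connections (127.0.0.1 arrives as ::ffff:127.0.0.1).
// DualMode must be set before Start() binds the socket.
var server = new TcpListener(IPAddress.IPv6Any, port);
server.Server.DualMode = true;
server.Start();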
Note that this:

IPEndPoint remoteEP = new IPEndPoint(address, hostport);
client = new TcpClient(remoteEP);

is just wrong. This overload of the TcpClient constructor expects a local endpoint, not a remote one. In your example it should throw an exception (port already in use) on either the client or the server, because you are trying to bind to the same IP and port on both. If you want to connect directly without a DNS lookup (which takes no time for localhost anyway, but can matter when you connect to a real server), do this:

IPEndPoint remoteEP = new IPEndPoint(address, hostport);
client = new TcpClient();
client.Connect(remoteEP);
Source: https://stackoverflow.com/questions/49341481/are-tcp-setup-times-this-slow-1-second-typically