I am getting IPC delays on a single machine that has 10 cores and is running 47 instances of my ClientApp, all of which communicate with the MasterApp.
I appear to get severe latency on occasion. Here is part of my log. The DateTime on the left is the log DateTime (from a high-performance logger). The DateTimes within the [] are the times the messages were sent from the MasterApp. Each message terminates with an @.
So the first message is only 1ms behind, but the last is 71ms behind.
Any ideas what might cause this and what I might do to get rid of the latency?
20141030T120401.015 [--------*MD|USD/JPY 109.032 109.034 1000000.00 1000000.00 20141030T120401014@]
20141030T120401.084 [--------*MD|EUR/CHF 1.20580 1.20588 3000000.00 2000000.00 20141030T120401019@]
20141030T120401.163 [--------*MD|USD/JPY 109.031 109.034 1000000.00 1000000.00 20141030T120401088@*MD|EUR/CHF 1.20580 1.20588 3000000.00 1000000.00 20141030T120401092@]
Code excerpt:
public void Connect(int port)
{
    IPAddress[] aryLocalAddr = null;
    String strHostName = "";
    try
    {
        // NOTE: DNS lookups are nice and all but quite time consuming.
        strHostName = Dns.GetHostName();
        IPHostEntry ipEntry = Dns.GetHostByName(strHostName);
        aryLocalAddr = ipEntry.AddressList;
    }
    catch (Exception ex)
    {
        OutputWriteLine("Error trying to get local address: " + ex.Message);
    }

    socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    socket.Blocking = false;
    IPEndPoint epServer = new IPEndPoint(aryLocalAddr[0], port);
    socket.BeginConnect(epServer, new AsyncCallback(ConnectCallBack), socket);
}

public void ConnectCallBack(IAsyncResult ar)
{
    Socket socket = (Socket)ar.AsyncState;
    NewConnection(socket);
}

public void NewConnection(Socket socket)
{
    Connection mc = new Connection(socket);
    connections.Add(mc);
    //OutputWriteLine("Client " + mc.SessionID() + " joined");
    DateTime now = DateTime.Now;
    String intraMessage = "*IDENT|" + modelInitiatorApp.G.SLOTNAME;
    modelInitiatorApp.SetConnected();
    SendMessage(mc, intraMessage);
    socket.BeginReceive(mc.stateObject.buffer, 0, mc.stateObject.buffer.Length, SocketFlags.None, new AsyncCallback(ReceivedCallBack), mc.stateObject);
}

public void ReceivedCallBack(IAsyncResult ar)
{
    StateObject state = (StateObject)ar.AsyncState;
    Socket socket = state.socket;
    try
    {
        int bytesRead = socket.EndReceive(ar);
        if (bytesRead > 0)
        {
            state.sb.Append(Encoding.ASCII.GetString(state.buffer, 0, bytesRead));
            OutputWriteLine("[--------" + state.sb.ToString() + "]");
            string[] contents = state.sb.ToString().Split('@');
            int delimCount = state.sb.ToString().Count(x => x == '@');
            for (int d = 0; d < delimCount; d++)
            {
                if (contents[d] != "")
                    OnMessage(state, contents[d]);
            }
            if (!state.sb.ToString().EndsWith("@"))
            {
                state.sb.Clear();
                state.sb.Append(contents[contents.Count() - 1]);
            }
            else
            {
                state.sb.Clear();
            }
            socket.BeginReceive(state.buffer, 0, state.buffer.Length, SocketFlags.None, new AsyncCallback(ReceivedCallBack), state);
        }
        else
        {
            // If no data was received then the connection is probably dead
            OutputWriteLine("Client " + state.SessionID() + " disconnected");
            socket.Shutdown(SocketShutdown.Both);
            socket.Close();
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message, "Unusual error during Receive!");
    }
}
OK, I think TCP is fine here, and I still believe you're going to see similar performance between named pipes and a local socket in this scenario (I'd be very interested to see benchmarks, though).

One thing I did notice is that your ReceivedCallBack calls EndReceive and then does a bunch of work before calling BeginReceive again. That means data could still be arriving on the socket (quite likely, since there's no latency on localhost) while you're not actually handling it. Consider making socket.BeginReceive() the first call after EndReceive() (once you've checked for errors, connection closed, etc.) so nothing stacks up while you're processing data. Obviously you'll need to copy the buffer out first, or use a buffer pool (I'd go with a buffer pool, personally), so you don't clobber your data. Sitting there processing data without a pending BeginReceive() adds artificial latency: the data arrives very quickly, but you're just ignoring it.

I've seen this kind of thing happen before with web objects, where deserialization of very large objects was being done in the callback, so I wouldn't be surprised if that's the case here. I'd say that's the next thing to try, to see if it changes the latency. It's entirely possible that I'm wrong, but refactoring your code along these lines is worth your time to see if it fixes the issue.
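Roughly, a non-authoritative sketch of that reordering, assuming the StateObject/Socket shapes from the question; ProcessChunk is a hypothetical helper standing in for the '@'-delimited parsing currently done inline:

// Assumes the same usings as the question (System, System.Net.Sockets).
public void ReceivedCallBack(IAsyncResult ar)
{
    StateObject state = (StateObject)ar.AsyncState;
    Socket socket = state.socket;

    int bytesRead;
    try
    {
        bytesRead = socket.EndReceive(ar);
    }
    catch (Exception ex)
    {
        OutputWriteLine("Receive failed: " + ex.Message);
        socket.Close();
        return;
    }

    if (bytesRead <= 0)
    {
        OutputWriteLine("Client " + state.SessionID() + " disconnected");
        socket.Shutdown(SocketShutdown.Both);
        socket.Close();
        return;
    }

    // Copy the bytes out so the receive buffer can be reused immediately
    // (a buffer pool would avoid this per-read allocation).
    byte[] chunk = new byte[bytesRead];
    Buffer.BlockCopy(state.buffer, 0, chunk, 0, bytesRead);

    // Re-arm the receive *before* parsing so the socket is never left idle.
    socket.BeginReceive(state.buffer, 0, state.buffer.Length, SocketFlags.None,
                        new AsyncCallback(ReceivedCallBack), state);

    // Parse at leisure; note the callback can now re-enter on another thread,
    // so this step must be thread-safe (e.g. push chunks to a single consumer).
    ProcessChunk(state, chunk);
}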
Multiple errors here.
First, TCP. Nothing against TCP, but it is not low-latency by default and is totally the wrong technology for anything on a single machine.
Switch to named pipes - they use shared memory under the hood. You also get rid of any name lookup on the same machine.
http://msdn.microsoft.com/en-us/library/bb546085(v=vs.110).aspx
has sample code. Properly coded, you can approach the speed of RAM.
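For illustration only, a minimal sketch along the lines of that MSDN sample; the pipe name "MasterAppFeed" and the message contents are made up here, and the two sides would run in separate processes (MasterApp and a ClientApp):

using System;
using System.IO.Pipes;
using System.Text;

static class NamedPipeSketch
{
    // Server side, e.g. in MasterApp.
    public static void RunServer()
    {
        using (var server = new NamedPipeServerStream("MasterAppFeed", PipeDirection.Out))
        {
            server.WaitForConnection();
            byte[] msg = Encoding.ASCII.GetBytes("*MD|USD/JPY 109.032 109.034@");
            server.Write(msg, 0, msg.Length);   // local transport, no TCP stack involved
        }
    }

    // Client side, e.g. in a ClientApp.
    public static void RunClient()
    {
        using (var client = new NamedPipeClientStream(".", "MasterAppFeed", PipeDirection.In))
        {
            client.Connect();                   // "." = this machine, no DNS lookup needed
            byte[] buf = new byte[4096];
            int n = client.Read(buf, 0, buf.Length);
            Console.WriteLine(Encoding.ASCII.GetString(buf, 0, n));
        }
    }
}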
That said, you also should not send text around - come up with some binary encoding, or you waste time encoding and decoding strings.
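As a rough illustration of what a binary encoding could look like for one market-data update (the field set, symbol-id table, and layout here are assumptions, not something from the question):

using System;
using System.IO;

static class BinaryCodecSketch
{
    // Fixed-layout encode: 2-byte symbol id, four doubles, and the send time as ticks.
    public static byte[] Encode(ushort symbolId, double bid, double ask,
                                double bidSize, double askSize, long sendTicks)
    {
        using (var ms = new MemoryStream())
        using (var w = new BinaryWriter(ms))
        {
            w.Write(symbolId);   // e.g. 1 = USD/JPY in some agreed lookup table
            w.Write(bid);
            w.Write(ask);
            w.Write(bidSize);
            w.Write(askSize);
            w.Write(sendTicks);  // DateTime.Ticks at send time
            w.Flush();
            return ms.ToArray(); // 42 bytes, no string formatting or parsing
        }
    }

    public static void Decode(byte[] payload)
    {
        using (var r = new BinaryReader(new MemoryStream(payload)))
        {
            ushort symbolId = r.ReadUInt16();
            double bid = r.ReadDouble();
            double ask = r.ReadDouble();
            double bidSize = r.ReadDouble();
            double askSize = r.ReadDouble();
            DateTime sent = new DateTime(r.ReadInt64());
            Console.WriteLine("{0} {1}/{2} sent {3:HH:mm:ss.fff}", symbolId, bid, ask, sent);
        }
    }
}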
You should not use DateTime for benchmarking performance because it is not accurate enough for measuring short intervals.
http://blogs.msdn.com/b/ericlippert/archive/2010/04/08/precision-and-accuracy-of-datetime.aspx
If the question you want to ask is about how long some operation took, and you want a high-precision, high-accuracy answer, then use the Stopwatch class. It really does have nanosecond precision and accuracy that is close to its precision.
Remember, you don’t need to know what time it is to know how much time has elapsed. Those can be two different things entirely.
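A minimal example of timing an operation with Stopwatch instead of subtracting DateTime values:

using System;
using System.Diagnostics;

class TimingDemo
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        // ... receive and parse one message here ...
        sw.Stop();
        Console.WriteLine("Handled in {0:F3} ms ({1} ticks)",
                          sw.Elapsed.TotalMilliseconds, sw.ElapsedTicks);
    }
}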
Source: https://stackoverflow.com/questions/26653624/why-do-i-get-ipc-delays-on-20-busy-machine