We're using haproxy in front of a backend running netty 3.6. We are handling a huge number of connections, some of which can be long-lived.
Now the problem is that when haproxy closes one of these connections on its side (for example after a timeout, or when rebalancing), it does so by sending a TCP RST. When the sun.nio.ch classes used by netty then read from that connection, they throw an IOException: "Connection reset by peer". Is there any way to prevent these exceptions, or do we just have to live with them?
Note: As per my understanding, you don't have to worry about Connection Reset exceptions unless you have connection pooling at your end with keep-alive connections.
I faced a similar issue with lots of Connection Resets (RST) (5-20 times in a window of 10 seconds, depending on load) while using HAProxy for our services.
This is how I fixed it.
We had a system where connections are always kept alive (keep-alive is always true at the HTTP connection level, i.e. once a connection is established, we reuse it from the HTTP connection pool for subsequent calls instead of creating new ones).
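For illustration, a client-side keep-alive pool like the one described might be wired up roughly like this (a sketch assuming Apache HttpClient 4.x; the answer doesn't say which HTTP client was actually used, and the pool sizes are arbitrary):

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.util.EntityUtils;

public class PooledClient {

    // Shared pool: connections stay open (HTTP keep-alive) and are handed
    // back for reuse instead of being re-established for every call.
    private static final PoolingHttpClientConnectionManager POOL =
            new PoolingHttpClientConnectionManager();

    static {
        POOL.setMaxTotal(200);          // example sizing
        POOL.setDefaultMaxPerRoute(50);
    }

    private static final CloseableHttpClient CLIENT = HttpClients.custom()
            .setConnectionManager(POOL)
            .build();

    public static String get(String url) throws Exception {
        try (CloseableHttpResponse response = CLIENT.execute(new HttpGet(url))) {
            // Fully consuming the entity returns the connection to the pool
            // in a reusable, kept-alive state instead of closing it.
            return EntityUtils.toString(response.getEntity());
        }
    }
}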
Now, as per my debugging in code and a TCP dump, I found that RSTs were sent by HAProxy in the following scenarios:
When HAProxy's timeout client or timeout server was reached on an idle connection.
This was set to 60 seconds for us. Since we have a pool of connections, when the load on the server decreases, some of these connections go unused for a minute.
These connections were then closed by HAProxy with an RST.
When HAProxy's option prefer-last-server was not set.
As per the Docs:
The real use is for keep-alive connections sent to servers. When this option is used, haproxy will try to reuse the same connection that is attached to the server instead of rebalancing to another server, causing a close of the connection.
Since this was not set, every time a connection was reused from the pool, HAProxy closed it with an RST and created a new one to a different server (as our load balancer was set to round-robin). This rendered the entire connection pooling useless.
So, the configuration that worked fine:
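A minimal sketch of the relevant parts, assuming HTTP mode (frontend omitted; server names, addresses, and the exact timeout values are placeholders, the parts that matter being client/server timeouts longer than the pool's idle period plus option prefer-last-server):

defaults
    mode http
    option http-keep-alive            # keep connections open on both sides
    timeout connect 5s
    timeout client  10m               # raised well above the pool's idle time (was 60s)
    timeout server  10m

backend app_servers
    balance roundrobin
    option prefer-last-server         # reuse the connection already attached to a server
    server app1 10.0.0.1:8080 check   # placeholder servers
    server app2 10.0.0.2:8080 check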
With these configurations, we could reuse the connections from the pool, and the resets went away.
Hope this helps!!
Try with the two options below.
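A minimal sketch, assuming the options go in your backend (or defaults) section:

    option http-tunnel      # process only the first request/response, then tunnel the connection
    option redispatch       # allow the session to be re-dispatched to another server if the connection fails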
Not sure about the redispatch, but http-tunnel fixed the issue on our end.
As of haproxy 1.5, it now sends a FIN (FIN, ACK) to the backend server, whereas haproxy 1.4 used to send an RST. That will probably help in this scenario.
If I can find this documented, I will add the link...
'Connection reset by peer' is usually caused by writing to a connection that has already been closed by the other end. That causes the peer to send an RST. But it almost certainly had already sent a FIN. I would re-examine your assumptions here. Very few applications deliberately send RSTs. What you are most probably encountering is an application protocol error. If that's unavoidable, so is the ECONNRESET.
The Tomcat NIO handler just does:
} catch (java.net.SocketException e) {
    // SocketExceptions are normal
    Http11NioProtocol.log.debug(
            sm.getString("http11protocol.proto.socketexception.debug"), e);
} catch (java.io.IOException e) {
    // IOExceptions are normal
    Http11NioProtocol.log.debug(
            sm.getString("http11protocol.proto.ioexception.debug"), e);
}
So it seems like the initial throw by the internal Sun classes (sun.nio.ch.FileDispatcherImpl) really is inevitable unless you reimplement them yourself.
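In the same spirit, about the best you can do on the netty side is to treat these IOExceptions as routine in a handler near the tail of the pipeline, much like Tomcat does above. A sketch for Netty 3 (the handler name is made up; whether you close the channel or just swallow the event is your call):

import java.io.IOException;

import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ExceptionEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

public class QuietConnectionResetHandler extends SimpleChannelUpstreamHandler {

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e)
            throws Exception {
        Throwable cause = e.getCause();
        if (cause instanceof IOException) {
            // "Connection reset by peer" and friends: close the channel
            // quietly instead of logging a full stack trace.
            ctx.getChannel().close();
            return;
        }
        super.exceptionCaught(ctx, e);
    }
}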