We are using a Windows Service to make very frequent HTTP calls to an internal REST service (20-40 calls per second), but notice a long delay in getting responses after the serv
It's probably related to the same issue many users run into with the HttpClient class (and classes that use HttpClient internally). Because HttpClient implements the IDisposable interface, many developers feel the urge to wrap every new instance of the class in a using statement. There is just one problem with that: when the HttpClient object gets disposed of, the associated ports remain blocked in the TIME_WAIT state for up to 5 minutes until they get released by the OS.
I usually use a single HttpClient instance (a singleton) and use fully qualified URLs in combination with asynchronous calls.
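A minimal sketch of that pattern (the host name and endpoint are placeholders, not part of the original answer):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class ApiClient
{
    // One shared instance for the lifetime of the process;
    // deliberately never wrapped in a using statement, so ports
    // are reused instead of piling up in TIME_WAIT.
    private static readonly HttpClient Client = new HttpClient();

    public static async Task<string> GetStatusAsync()
    {
        // Fully qualified URL (placeholder host) plus an async call.
        return await Client.GetStringAsync("http://internal-service.local/api/status");
    }
}
```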
There is a limit on the number of simultaneous outgoing HTTP connections. You can control this with the System.Net.ServicePointManager.DefaultConnectionLimit static property, set before creating the HttpWebRequest objects. It might be worthwhile setting this to a higher value than the default, which I believe is 2.
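For example, a one-line tweak at service startup (50 is an arbitrary illustrative value, not a recommendation from the answer):

```csharp
using System.Net;

// Raise the per-origin connection limit before any requests are issued.
// Must run before the first HttpWebRequest/HttpClient call takes effect.
ServicePointManager.DefaultConnectionLimit = 50;
```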
If this does not help, you can also increase the default ThreadPool size so that you can create more requests more quickly. The thread pool only ramps up its number of threads gradually - one new thread every half second, IIRC.
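One way to avoid that ramp-up delay is to raise the pool's minimum thread counts at startup; a sketch (100 is an arbitrary example value):

```csharp
using System;
using System.Threading;

// Raise the minimum worker and IO-completion thread counts so the
// pool does not have to grow one thread at a time under a burst.
ThreadPool.GetMinThreads(out int worker, out int io);
ThreadPool.SetMinThreads(Math.Max(worker, 100), Math.Max(io, 100));
```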
How can we ensure that ports are reused? By not setting the connection limit to a value that almost guarantees that they won't be.
It looks like someone has monkeyed with the ServicePointManager at some point. I'd limit the ServicePoint for this origin to encourage HTTP pipelining and connection reuse:

ServicePointManager.FindServicePoint(uri).ConnectionLimit = someSensibleValue;