Production environment is on Azure, using Redis Cache Standard 2.5GB.
Example 1: System.Web.HttpUnhandledException (0…
There are 3 scenarios that can cause timeouts, and it is hard to know which one is in play.
As a best practice, make sure you are using the following pattern to connect with the StackExchange.Redis client:
private static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() => {
    return ConnectionMultiplexer.Connect("cachename.redis.cache.windows.net,ssl=true,abortConnect=false,password=password");
});

public static ConnectionMultiplexer Connection {
    get {
        return lazyConnection.Value;
    }
}
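A brief usage note (not part of the original answer): with the multiplexer shared like this, callers grab an IDatabase per operation from the Connection property. The key and value below are placeholders.

// Assumes this sits alongside the Connection property above, with using StackExchange.Redis; in scope.
IDatabase cache = Connection.GetDatabase();   // GetDatabase() is cheap and can be called per operation
cache.StringSet("some-key", "some-value");    // placeholder key and value, for illustration only
string roundTripped = cache.StringGet("some-key");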
If the above does not work, there are some more debugging routes described in Source 1, covering region, bandwidth, and NuGet package versions, among others.
Another option could be to increase the minimum IO threads. It's often recommended to set the minimum configuration value for IOCP and WORKER threads to something larger than the default value. There is no one-size-fits-all guidance on what this value should be, because the right value for one application will be too high or too low for another. A good starting place is 200 or 300; then test and tweak as needed.
How to configure this setting: use the minIoThreads configuration setting under the <processModel> configuration element in machine.config. According to Microsoft, you can't change this value per site by editing your web.config (even though you could in the past), so the value you choose here is the value that all your .NET sites will use. Please note that you don't need to add every property once you set autoConfig to false; putting autoConfig="false" and overriding the value is enough:
<processModel autoConfig="false" minIoThreads="250" />
Important note: the value specified in this configuration element is a per-core setting. For example, if you have a 4-core machine and want your minIoThreads setting to be 200 at runtime, you would use <processModel minIoThreads="50" />.
Outside of ASP.NET, you can use the ThreadPool.SetMinThreads() method, as described above.
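A minimal sketch of that programmatic route (not part of the original answer; the 250 figure simply mirrors the machine.config example above and should be tuned for your workload):

using System;
using System.Threading;

// Read the current minimums so we only ever raise them, never lower them.
ThreadPool.GetMinThreads(out int minWorker, out int minIocp);

// Unlike the per-core <processModel> setting, SetMinThreads takes absolute totals.
ThreadPool.SetMinThreads(Math.Max(minWorker, 250), Math.Max(minIocp, 250));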
My guess is that there is an issue with network stability, hence the timeouts.
Since nobody has mentioned increasing responseTimeout, I would play around with it. The default value is 50 ms, which can easily be reached. I would try something around 200 ms to see if that helps with the messages.
Taken from the configuration options:
responseTimeout={int} (ConfigurationOptions.ResponseTimeout, defaults to SyncTimeout) - Time (ms) to decide whether the socket is unhealthy
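If you want to experiment with this, responseTimeout can be set either in the connection string or on ConfigurationOptions. A sketch, reusing the placeholder cache name and password from above; the 200 ms value is just the experiment suggested earlier:

using StackExchange.Redis;

// Connection-string form (cache name and password are placeholders):
var viaString = ConnectionMultiplexer.Connect(
    "cachename.redis.cache.windows.net,ssl=true,abortConnect=false,password=password,responseTimeout=200");

// Or via ConfigurationOptions:
var options = ConfigurationOptions.Parse(
    "cachename.redis.cache.windows.net,ssl=true,abortConnect=false,password=password");
options.ResponseTimeout = 200; // ms
var viaOptions = ConnectionMultiplexer.Connect(options);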
There are multiple issues open about this on GitHub. The one combining them all is probably #871, the "network stability" / 2.0 / "pipelines" rollup issue.
One more thing: did you try playing around with ConnectionMultiplexer.ConnectAsync() instead of ConnectionMultiplexer.Connect()?
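For completeness, an async sketch of the same lazy pattern (not from the original answer; the connection string is the same placeholder as above):

using System;
using System.Threading.Tasks;
using StackExchange.Redis;

public static class RedisConnection {
    // Async counterpart of the Lazy<ConnectionMultiplexer> pattern shown earlier.
    private static readonly Lazy<Task<ConnectionMultiplexer>> lazyConnection =
        new Lazy<Task<ConnectionMultiplexer>>(() =>
            ConnectionMultiplexer.ConnectAsync(
                "cachename.redis.cache.windows.net,ssl=true,abortConnect=false,password=password"));

    // Callers await this once, e.g.: var mux = await RedisConnection.Multiplexer;
    public static Task<ConnectionMultiplexer> Multiplexer => lazyConnection.Value;
}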
Have the network traffic monitor switched on to confirm or deny the blip.

I have a solution to the issue, but a crude one. Option 1: try restarting the managed Redis instance in Azure.