We are fighting an issue in production where, once in a while, our Azure SQL database performance degrades significantly. We know we have locks on one of the tables, but these locks are...
There are two things you can check to make sure you are using your DbContext objects correctly and disposing them so the connection is returned to the connection pool.
First, if you are creating the DbContext in code, check that there is a using statement around each scope in which the context is created. Something like:
using (var context = new xxxContext()) {
...
}
This will automatically dispose the context when it goes out of scope.
Second, if you are using dependency injection to inject the DbContext, make sure it is registered as scoped:
services.AddScoped<xxxContext>();
The DI container will then take care of disposing your context objects.
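As a rough sketch of what that looks like (assuming an ASP.NET Core controller; OrdersController and the Orders DbSet are hypothetical names, and xxxContext is the placeholder from above), the scoped context is taken as a constructor parameter and never disposed by hand:
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

public class OrdersController : Controller
{
    private readonly xxxContext _context;

    // The DI container creates one xxxContext per request (scope) and
    // disposes it automatically when the request ends.
    public OrdersController(xxxContext context)
    {
        _context = context;
    }

    public async Task<IActionResult> Index()
    {
        // Orders is a hypothetical DbSet on xxxContext.
        var orders = await _context.Orders.ToListAsync();
        return View(orders);
    }
}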
The next thing you can check is whether you have uncommitted transactions. Check that all your transactions are within using blocks, so they will be committed or rolled back when they go out of scope.
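For example, a minimal sketch assuming EF's Database.BeginTransaction API (xxxContext is the placeholder from above):
using (var context = new xxxContext())
using (var transaction = context.Database.BeginTransaction())
{
    // ... modify entities through the context ...
    context.SaveChanges();
    transaction.Commit();

    // If an exception is thrown before Commit, disposing the transaction
    // at the end of the using block rolls it back automatically.
}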
[this is more of a long comment than an answer]
"I do have several hosts connected to the same database but each host has the same limitation of 200 connections."
The connection pool is per (connection string, AppDomain). Each server might have multiple AppDomains, and each AppDomain will have one connection pool per connection string. So if you have different user/password combos here, they will generate different connection pools, and there is no real mystery as to why it is possible to have more than 200 connections.
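As a small illustration (server, database, and user names here are made up), two connections whose strings differ only in credentials end up in two separate pools:
using System.Data.SqlClient;

// Same server and database, but different credentials, so ADO.NET keeps
// these connections in two separate pools within this AppDomain.
var connA = new SqlConnection(
    "Server=tcp:myserver.database.windows.net;Database=mydb;User ID=userA;Password=...");
var connB = new SqlConnection(
    "Server=tcp:myserver.database.windows.net;Database=mydb;User ID=userB;Password=...");
connA.Open(); // drawn from pool A
connB.Open(); // drawn from pool B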
So why are you getting lots of connections? Possible causes:
Connection Leaks.
If you fail to Dispose a DbContext or a SqlConnection, that connection will linger on the managed heap until it is finalized, and will not be available for reuse. When a connection pool reaches its limit, new connection requests will wait 30 seconds for a connection to become available, and fail after that.
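A leak typically looks like the first method in this contrived sketch (using System.Data.SqlClient directly): the connection is opened but never disposed, so it stays checked out of the pool even after the method returns.
using System.Data.SqlClient;

static void LeakyQuery(string connectionString)
{
    // BAD: no using/Dispose. The underlying connection stays checked out
    // of the pool until the object is eventually finalized.
    var conn = new SqlConnection(connectionString);
    conn.Open();
    new SqlCommand("select 1", conn).ExecuteScalar();
}

static void SafeQuery(string connectionString)
{
    // GOOD: disposing the connection returns it to the pool immediately.
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("select 1", conn))
    {
        conn.Open();
        cmd.ExecuteScalar();
    }
}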
You will not see any waits or blocking on the server in this scenario. The sessions will all be idle, not waiting. And there would not be a large number of requests in
select *
from sys.dm_exec_requests
Note that Session Wait Stats are now live on Azure SQL DB, so it's much easier to see real-time blocking and waits.
select *
from sys.dm_exec_session_wait_stats
Blocking.
If incoming requests begin to be blocked by some transaction, and new requests keep starting, your number of sessions can grow, as new requests get new sessions, start requests and become blocked. Here you would see lots of blocked requests in
select *
from sys.dm_exec_requests
Slow Queries.
If requests were just taking a long time to finish due to resource availability (CPU, Disk, Log), you could see this. But that's unlikely, as your DTU usage is low during this time.
So the next step for you is to see whether these connections are active on the server (suggesting blocking) or idle on the server (suggesting a connection pool problem).
The problem may be related to "pool fragmentation":
Pool fragmentation is a common problem in many Web applications where the application can create a large number of pools that are not freed until the process exits. This leaves a large number of connections open and consuming memory, which results in poor performance.
Pool Fragmentation Due to Integrated Security
Connections are pooled according to the connection string plus the user identity. Therefore, if you use Basic authentication or Windows Authentication on the Web site and an integrated security login, you get one pool per user. Although this improves the performance of subsequent database requests for a single user, that user cannot take advantage of connections made by other users. It also results in at least one connection per user to the database server. This is a side effect of a particular Web application architecture that developers must weigh against security and auditing requirements.
Source: https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql-server-connection-pooling