Question
I have a SQL connection that I need to hit the database with anywhere from 500 to 10,000 times per second. After about 250 requests per second things start to slow down, and then the app falls so far behind that it crashes.
I was thinking about loading the database into a dictionary. I need the fastest performance I can get. Currently the ADO.NET call takes about 1 to 2 milliseconds, but something happens that causes a bottleneck.
Is there anything wrong with the syntax below for 10k queries per second? Is a dictionary going to work? We are talking about 12 million records, and I need to be able to search them within 1 to 5 milliseconds. I also have another collection in the database that has 50 million records, so I'm not sure how to store it. Any suggestions would be great.
The SQL Server box has 128 GB of memory and 80 processors, and the app runs on the same server as SQL Server 2012.
using (SqlConnection sqlconn = new SqlConnection(sqlConnection.SqlConnectionString()))
{
    using (SqlCommand sqlcmd = new SqlCommand("", sqlconn))
    {
        sqlcmd.CommandType = System.Data.CommandType.StoredProcedure;
        sqlcmd.Parameters.Clear();
        sqlcmd.CommandTimeout = 1;
        sqlconn.Open();
        using (SqlDataReader sqlDR = sqlcmd.ExecuteReader(CommandBehavior.CloseConnection))

public static string SqlConnectionString()
{
    return string.Format("Data Source={0},{1};Initial Catalog={2};User ID={3};Password={4};Application Name={5};Asynchronous Processing=true;MultipleActiveResultSets=true;Max Pool Size=524;Pooling=true;",
        DataIP, port, Database, username, password, IntanceID);
}
The code that processes the data reader is:
r.CustomerInfo = new CustomerVariable();
r.GatewayRoute = new List<RoutingGateway>();
while (sqlDR.Read() == true)
{
    if (sqlDR["RateTableID"] != null)
        r.CustomerInfo.RateTable = sqlDR["RateTableID"].ToString();
    if (sqlDR["EndUserCost"] != null)
        r.CustomerInfo.IngressCost = sqlDR["EndUserCost"].ToString();
    if (sqlDR["Jurisdiction"] != null)
        r.CustomerInfo.Jurisdiction = sqlDR["Jurisdiction"].ToString();
    if (sqlDR["MinTime"] != null)
        r.CustomerInfo.MinTime = sqlDR["MinTime"].ToString();
    if (sqlDR["interval"] != null)
        r.CustomerInfo.interval = sqlDR["interval"].ToString();
    if (sqlDR["code"] != null)
        r.CustomerInfo.code = sqlDR["code"].ToString();
    if (sqlDR["BillBy"] != null)
        r.CustomerInfo.BillBy = sqlDR["BillBy"].ToString();
    if (sqlDR["RoundBill"] != null)
        r.CustomerInfo.RoundBill = sqlDR["RoundBill"].ToString();
}
sqlDR.NextResult();
Answer 1:
Don't close and re-open the connection; you can keep it open between requests. Even with connection pooling turned on there is a certain overhead, including a brief critical section to prevent concurrency issues when a connection is seized from the pool. You may as well avoid that.
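A minimal sketch of what holding one connection open could look like, assuming a single thread issues the queries (SqlConnection is not thread-safe, so multithreaded callers need one connection per thread or should just rely on the pool):

using System.Data;
using System.Data.SqlClient;

public static class Db
{
    // built once, opened once, reused for every call
    private static readonly SqlConnection sharedConn =
        new SqlConnection(sqlConnection.SqlConnectionString());

    public static SqlConnection GetOpenConnection()
    {
        if (sharedConn.State != ConnectionState.Open)
            sharedConn.Open();
        return sharedConn;
    }
}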
Ensure your stored procedure has SET NOCOUNT ON to reduce chattiness.
Ensure you are using the minimum transaction isolation level you can get away with, e.g. dirty reads, a.k.a. NOLOCK. You can set this at the client end at the connection level, or within the stored procedure itself, whichever you're more comfortable with.
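At the client end this could look roughly like the sketch below, assuming the connection (sqlconn) is already open; READ UNCOMMITTED is the same trade-off as NOLOCK, faster reads at the risk of dirty data:

using (SqlCommand setIso = new SqlCommand(
    "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED", sqlconn))
{
    // applies to subsequent commands executed on this connection
    setIso.ExecuteNonQuery();
}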
Profile these transactions to confirm that the bottleneck really is on the client; it could just as easily be on the DB server or on the network.
If this is a multithreaded application (e.g. on the web), check your connection pool settings and ensure it's large enough. There's a PerfMon counter for this.
Access your fields by ordinal using strongly typed getters, e.g. GetString(0) or GetInt32(3).
Tweak the bejesus out of your stored procedure and indexes. Could write a book on this.
Reindex your tables during down periods, and fill up the index pages if this is a fairly static table.
If the purpose of the stored procedure is to retrieve a single row, try adding TOP 1 to the query so that it stops looking after the first row is found. Also, consider using output parameters instead of a result set, which incurs a little less overhead.
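A rough sketch of the output-parameter variant; the procedure name, parameter names, and the customerId variable are made up for illustration, and it assumes an already-open sqlconn:

using (SqlCommand cmd = new SqlCommand("dbo.GetCustomerRate", sqlconn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add(new SqlParameter("@CustomerID", SqlDbType.Int) { Value = customerId });

    SqlParameter rate = new SqlParameter("@RateTableID", SqlDbType.Int)
    {
        Direction = ParameterDirection.Output
    };
    cmd.Parameters.Add(rate);

    cmd.ExecuteNonQuery();              // no data reader to create or dispose
    int rateTableId = (int)rate.Value;  // value is available after execution
}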
A dictionary could potentially work but it depends on the nature of the data, how you are searching it, and how wide the rows are. If you update your question with more information I'll edit my answer.
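Purely as an illustration of what that dictionary approach might look like, a sketch follows; the choice of "code" as the key, the string-typed values, and the refresh concerns are assumptions, since the question doesn't say what the lookups are keyed on:

// Hypothetical: load the 12 million rows once at startup, keyed by "code",
// then serve each request from memory. Lookups are O(1) and well under a
// millisecond, but the initial load costs startup time and RAM, and the cache
// must be refreshed whenever the underlying table changes.
var cache = new Dictionary<string, CustomerVariable>(12000000);

while (sqlDR.Read())
{
    var cv = new CustomerVariable
    {
        code      = sqlDR["code"].ToString(),
        RateTable = sqlDR["RateTableID"].ToString()
        // ... remaining columns as in the question's loop
    };
    cache[cv.code] = cv;
}

// per-request lookup (lookupCode stands in for whatever key the request carries)
CustomerVariable hit;
if (cache.TryGetValue(lookupCode, out hit))
{
    // use hit
}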
Answer 2:
If you're going to be accessing the DataReader in a loop, you should look up the column ordinals outside the loop and then use them inside it. You might also do better to use the strongly-typed accessors.
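Applied to the question's loop, that suggestion looks roughly like this, shown for two columns and assuming they are string columns (numeric columns would use GetInt32 and the like):

int ordRateTable = sqlDR.GetOrdinal("RateTableID");
int ordCode      = sqlDR.GetOrdinal("code");

while (sqlDR.Read())
{
    // IsDBNull plus ordinal access avoids the per-row column-name lookup
    if (!sqlDR.IsDBNull(ordRateTable))
        r.CustomerInfo.RateTable = sqlDR.GetString(ordRateTable);
    if (!sqlDR.IsDBNull(ordCode))
        r.CustomerInfo.code = sqlDR.GetString(ordCode);
}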
Answer 3:
Well, if you have already measured that the ADO command takes only a couple of milliseconds, the other possible cause of delay is the string.Format used to build the connection string.
I would try to remove the string.Format that is called for every
using(SqlConnection cn = new SqlConnection(sqlConnection.SqlConnectionString()))
Instead, supposing SqlConnectionString is in a separate class, you could write:
private static string conString = string.Empty;
public static string SqlConnectionString()
{
    if (conString == "")
        conString = string.Format("............");
    return conString;
}
Of course, a benchmark could rule this out, but I am pretty sure that string operations like that are costly.
Seeing your comments below, another very important thing to add is the correct declaration of your parameters. Instead of using AddWithValue (convenient, but with tricky side effects), declare your parameters with the correct size:
using (SqlCommand sqlcmd = new SqlCommand("", sqlconn))
{
    sqlcmd.CommandType = System.Data.CommandType.StoredProcedure;
    sqlcmd.CommandText = mySql.GetLCR();

    SqlParameter p1 = new SqlParameter("@GatewayID", SqlDbType.NVarChar, 20) { Value = GatewayID };
    SqlParameter p2 = new SqlParameter("@DialNumber", SqlDbType.NVarChar, 20) { Value = dialnumber };
    sqlcmd.Parameters.AddRange(new SqlParameter[] { p1, p2 });

    sqlcmd.CommandTimeout = 1;
    sqlconn.Open();
    .....
}
AddWithValue is not recommended when you need to squeeze every millisecond of performance. This very useful article explains why passing a string with AddWithValue defeats the work done by the SQL Server optimizer. (In short, the optimizer calculates and stores a query plan for your command and, if it receives another identical command, it reuses that plan. But if you pass a string with AddWithValue, the size of the parameter is calculated every time from the actual string length, so the optimizer cannot reuse the query plan and has to recalculate and store a new one.)
Answer 4:
"I need the fastest performance I can get."
If you haven't done so already, review your business requirements, and how your application interacts with your data warehouse. If you have done this already, then please disregard this posting.
It has been my experience that:
- The fact that you are even executing a SQL query against a database means that you have an expense - queries cost time/cpu/memory.
- Queries are even more expensive if they include write operations.
The easiest way to save money, is not to spend it! So look for ways to:
- avoid querying the database in the first place
- ensure that queries execute as quickly as possible
STRATEGIES
- Make sure you are using the database indexes properly.
- Avoid SQL queries that result in a full table scan.
- Use connection pooling.
- If you are inserting data into the database, then use bulk uploads.
- Use caching where appropriate (a short sketch follows this list). Options include:
  - caching results in memory (i.e. RAM)
  - caching results to disk
  - pre-rendering results ahead of time and reading them instead of executing a new query
  - instead of mining raw data with each query, generating summary data that can be queried instead.
- Partition your data. This can occur on several levels:
  - most enterprise databases support partitioning strategies
  - by reviewing your business model, you can partition your data across several databases (e.g. read operations against one DB, write operations against another).
- Review your application's design, and then measure response times to confirm that the bottleneck is in fact where you believe it is.
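As one concrete illustration of the in-memory caching option, a cache-aside helper might look roughly like this; MemoryCache comes from System.Runtime.Caching (.NET 4.0+), and the key format, the 5-minute expiry, and the loadFromDb delegate are assumptions made for the sketch:

using System;
using System.Runtime.Caching;

public static class CustomerCache
{
    private static readonly MemoryCache cache = MemoryCache.Default;

    public static CustomerVariable GetOrLoad(string code, Func<string, CustomerVariable> loadFromDb)
    {
        string key = "customer:" + code;
        var hit = cache.Get(key) as CustomerVariable;
        if (hit != null)
            return hit;                       // served from RAM, no SQL round trip

        var fresh = loadFromDb(code);         // hits SQL Server only on a cache miss
        cache.Set(key, fresh, DateTimeOffset.UtcNow.AddMinutes(5));
        return fresh;
    }
}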
CACHING TECHNOLOGIES
- Asp.net - caching
- memcached
- redis
- etc.
DISCLAIMER: I am not a database administrator (DBA).
Answer 5:
I don't think the issue is the string.Format.
The result is:
108 ms for the format
1416 ms for the open
5176 ms for the execute
and the whole thing is 6891 ms
Run this VERY simple test!
namespace ConsoleApplication1
{
    class Program
    {
        private static string DataIP;
        private static string Database;
        private static string IntanceID;

        static void Main(string[] args)
        {
            DataIP = @"FREDOU-PC\SQLEXPRESS"; Database = "testing"; IntanceID = "123";
            int count = 0;
            System.Diagnostics.Stopwatch swWholeThing = System.Diagnostics.Stopwatch.StartNew();
            System.Diagnostics.Stopwatch swFormat = new System.Diagnostics.Stopwatch();
            System.Diagnostics.Stopwatch swOpen = new System.Diagnostics.Stopwatch();
            System.Diagnostics.Stopwatch swExecute = new System.Diagnostics.Stopwatch();

            for (int i = 0; i < 100000; ++i)
            {
                using (System.Data.SqlClient.SqlConnection sqlconn = new System.Data.SqlClient.SqlConnection(SqlConnectionString(ref swFormat)))
                {
                    using (System.Data.SqlClient.SqlCommand sqlcmd = new System.Data.SqlClient.SqlCommand("dbo.counttable1", sqlconn))
                    {
                        sqlcmd.CommandType = System.Data.CommandType.StoredProcedure;
                        sqlcmd.Parameters.Clear();

                        swOpen.Start();
                        sqlconn.Open();
                        swOpen.Stop();

                        swExecute.Start();
                        using (System.Data.SqlClient.SqlDataReader sqlDR = sqlcmd.ExecuteReader(System.Data.CommandBehavior.CloseConnection))
                        {
                            if (sqlDR.Read())
                                count += sqlDR.GetInt32(0);
                        }
                        swExecute.Stop();
                    }
                }
            }
            swWholeThing.Stop();

            System.Console.WriteLine("swFormat: " + swFormat.ElapsedMilliseconds);
            System.Console.WriteLine("swOpen: " + swOpen.ElapsedMilliseconds);
            System.Console.WriteLine("swExecute: " + swExecute.ElapsedMilliseconds);
            System.Console.WriteLine("swWholeThing: " + swWholeThing.ElapsedMilliseconds + " " + count);
            System.Console.ReadKey();
        }

        public static string SqlConnectionString(ref System.Diagnostics.Stopwatch swFormat)
        {
            swFormat.Start();
            var str = string.Format("Data Source={0};Initial Catalog={1};Integrated Security=True;Application Name={2};Asynchronous Processing=true;MultipleActiveResultSets=true;Max Pool Size=524;Pooling=true;",
                DataIP, Database, IntanceID);
            swFormat.Stop();
            return str;
        }
    }
}
dbo.counttable1 stored procedure:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
create PROCEDURE dbo.counttable1
AS
BEGIN
SET NOCOUNT ON;
SELECT count(*) as cnt from dbo.Table_1
END
GO
dbo.table_1
USE [testing]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Table_1](
[id] [int] NOT NULL,
CONSTRAINT [PK_Table_1] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
content:
insert into dbo.Table_1 (id) values (1)
insert into dbo.Table_1 (id) values (2)
insert into dbo.Table_1 (id) values (3)
insert into dbo.Table_1 (id) values (4)
insert into dbo.Table_1 (id) values (5)
insert into dbo.Table_1 (id) values (6)
insert into dbo.Table_1 (id) values (7)
insert into dbo.Table_1 (id) values (8)
insert into dbo.Table_1 (id) values (9)
insert into dbo.Table_1 (id) values (10)
Answer 6:
If you are handling millions of records and hitting the database anywhere from 500 to 10,000 times a second, I recommend creating a handler file (an API) for data retrieval; you can then use load-testing tools to measure the API's performance.
Performance can be increased by using memcached. The steps to implement it are:
Create a Windows service that retrieves data from the database and stores it in memcached in JSON format (as key/value pairs).
For the website, create a handler file as an API that retrieves data from memcached and returns the result.
I have implemented this in one of my projects; it retrieves thousands of records in milliseconds.
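A rough sketch of the caching step, assuming the EnyimMemcached client library and Json.NET; both libraries, the key format, and the CustomerVariable payload are assumptions, since the answer names no specific client. The Windows service would call Store on a schedule, and the API handler would call Lookup per request:

using Enyim.Caching;
using Enyim.Caching.Memcached;
using Newtonsoft.Json;

public class CustomerCacheStore
{
    // reads the memcached server list from the app.config enyim section
    private readonly MemcachedClient client = new MemcachedClient();

    public void Store(string code, CustomerVariable cv)
    {
        // key/value pair: the customer code as key, the row serialized to JSON as value
        client.Store(StoreMode.Set, "customer:" + code, JsonConvert.SerializeObject(cv));
    }

    public CustomerVariable Lookup(string code)
    {
        var json = client.Get<string>("customer:" + code);
        return json == null ? null : JsonConvert.DeserializeObject<CustomerVariable>(json);
    }
}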
Source: https://stackoverflow.com/questions/21768699/ado-net-sql-server-performance-bottleneck