Question
I have been told that SQL Native Client is supposed to be faster than the OLEDB drivers. So I put together a utility to do a load-test between the two - and am getting mixed results. Sometimes one is faster, sometimes the other is, no matter what the query may be (simple select, where clause, joining, order by, etc.). Of course the server does the majority of the workload, but I'm interested in the time between the data arriving at the PC and the data becoming accessible within the app.
The load tests consist of very small queries which return very large datasets. For example, I do select * from SysTables, and this table has 50,000+ records. After receiving the data, I do another load of looping through the results (using while not Q.Eof ... Q.Next ... etc.). I've also tried adding some things to the query, such as order by Val, where Val is a varchar(100) field.
Here's a sample of my load tester; the numbers at the very bottom are averages...
So really, what are the differences between the two? I do know that OLE is very flexible and supports many different database engines, whereas Native Client is specific to SQL Server alone. But what else is going on behind the scenes? And how does that affect how Delphi uses these drivers?
This is specifically using ADO via the TADOConnection and TADOQuery components.
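Below is a minimal sketch of what such a load-test loop might look like in Delphi. It is an illustration only: the connection string is a placeholder, the table name is taken from the example above, and the units Data.Win.ADODB and System.Diagnostics are assumed to be in the uses clause (ADODB and Diagnostics in older Delphi versions).

procedure RunLoadTest(const ConnectionString: string);
var
  Conn: TADOConnection;
  Q: TADOQuery;
  SW: TStopwatch;
begin
  Conn := TADOConnection.Create(nil);
  Q := TADOQuery.Create(nil);
  try
    Conn.ConnectionString := ConnectionString;  // the provider (SQLOLEDB vs SQLNCLI10) is chosen here
    Conn.LoginPrompt := False;
    Conn.Open;
    Q.Connection := Conn;
    Q.SQL.Text := 'select * from SysTables';    // small query, large result set, as in the question
    SW := TStopwatch.StartNew;
    Q.Open;                                     // time until the data is accessible in the app
    while not Q.Eof do                          // then loop through every record
      Q.Next;
    SW.Stop;
    Writeln(Format('Elapsed: %d ms', [SW.ElapsedMilliseconds]));
  finally
    Q.Free;
    Conn.Free;
  end;
end;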
I'm not necessarily looking or asking for ways to improve performance - I just need to know what the differences between the drivers are.
Answer 1:
As stated by Microsoft:
SQL Server Native Client is a stand-alone data access application programming interface (API), used for both OLE DB and ODBC, that was introduced in SQL Server 2005. SQL Server Native Client combines the SQL OLE DB provider and the SQL ODBC driver into one native dynamic-link library (DLL).
From my understanding, ADO is just an Object Oriented application-level DB layer over OleDB. It will use OleDB in all cases. What changes is the provider used. If you specify the SQLNCLI10 provider, you'll use the latest version of the protocol. If you specify the SQLOLEDB provider, you'll use the generic SQL Server 2000 protocol.
As such:
ADO -> OleDB -> SQLNCLI10 provider -> MS SQL Server (MSSQL 2000, 2005 or 2008 protocol)
ADO -> OleDB -> SQLOLEDB provider -> MS SQL Server (MSSQL 2000 protocol)
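For illustration (the server and database names are placeholders), the provider is selected via the ADO connection string, for example:

// Native Client 10 OLE DB provider:
ADOConnection1.ConnectionString :=
  'Provider=SQLNCLI10;Data Source=MyServer;Initial Catalog=MyDb;Integrated Security=SSPI;';
// Legacy SQL Server 2000-era OLE DB provider:
ADOConnection1.ConnectionString :=
  'Provider=SQLOLEDB;Data Source=MyServer;Initial Catalog=MyDb;Integrated Security=SSPI;';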
About performance, I think you won't have a big difference. Like always, it will depend on the data processed.
But IMHO it is recommended to use the provider best fitted to your database. Some kinds of data (like varchar(max) or Int64) are said to be handled better. And the SQLNCLI10 provider has been updated more recently, so I guess it is better tuned.
Answer 2:
In your question you are mixing up OLE and SQL Native Client. You probably mean several things at the same time:
- OLE -> OLEDB, which is an obsolescent generic data access technology;
- OLE -> "SQL Server OLEDB Provider", which is the SQL Server 2000 OLEDB provider;
- SQL Server Native Client, which is the SQL Server 2005 and higher client software. It includes both an OLEDB provider and an ODBC driver.
Talking about OLEDB providers and the SQL Server versions they support:
- "SQL Server OLEDB Provider" (SQLOLEDB) supports SQL Server 2000 protocol;
- "SQL Server Native Client 9" (SQLNCLI) supports SQL Server 2000 and 2005 protocols;
- "SQL Server Native Client 10" supports SQL Server 2000, 2005 and 2008 protocols.
You did not say which SQL Server version you are using. In general, it is best to use the SQL Server OLEDB provider corresponding to your SQL Server version; otherwise you can run into incompatibilities between the server and client versions.
Comparing them in the abstract, I can only speculate about the differences between SQLNCLI and SQLOLEDB:
- One uses the server protocol more correctly;
- One uses more advanced protocol features;
- One performs more processing, which helps it handle more situations;
- One uses a more generic / more optimized data representation.
Without a correct benchmark application and environment it is hard to accept your comparison results, because they may depend on multiple factors.
Answer 3:
I think you should concentrate on optimizing:
- the SQL Server engine and database settings
- your queries
- your data schema
The difference in speed between connection libraries is so small as to be negligible; it might cause a very tiny slowdown, and only in very specific scenarios.
Answer 4:
Short answer:
It doesn't matter.
Long answer:
The difference in performance between the 2 client libs is relatively negligible compared to the Server execution + Network data transfer, which is what you are mostly measuring, hence the inconclusive test data. There is a good chance that you use the same low level layer in both cases anyway with only a minor difference in indirection on top of it.
As a matter of fact, if your tests show no visible difference, it just proves that the slowness is not related with the choice of the client lib and optimization should be searched elsewhere.
For your present test, you should use the SQL Profiler to measure the queries' execution time on the server at the same time you run your test; you would see that it also varies quite a bit. Subtracting those numbers from the end-to-end test results would give you the timing for the bundle of client time + network transfer.
Network performance is quite variable and has more impact on your test than you would think. Try having someone stream video at the same time you run your test and you will see... (I had that happen at my former company; tuning the SQL was not the answer in that case.)
Answer 5:
While it certainly could be at the database end, I think there is a lot to look at in the overall system - at least your test system. In general, it is hard to do timing if the work you are asking the database to do is very small compared to the overall work. So in general, is the database task a big job or simply the retrieval of one data item? Are you using stored procedures or simple queries? Is your test preparing any stored procedures before running the test? Do you get consistent times each time you run any test in succession?
Answer 6:
- The query execution time tells you how well the database engine (and any schema/query optimization) works. Here, what you use doesn't matter: ODBC, OLEDB or Native Client just passes the query along to the database, and it is executed there.
- The time it takes to read from the first record to the last tells you how well the data access layer and your network perform. Here you time how well data is returned and "cached" on your client. Depending on the data, the network settings may matter; for example, if your tables use "large" records, a larger MTU may require fewer packets (and fewer round trips) to send them to the client. (A sketch of timing these two phases separately follows this list.)
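A rough sketch, in Delphi, of how those two phases could be timed separately; Q is assumed to be an already configured TADOQuery, and Data.Win.ADODB plus System.Diagnostics are assumed to be in the uses clause.

procedure TimePhases(Q: TADOQuery);
var
  SW: TStopwatch;
  OpenMs, FetchMs: Int64;
begin
  SW := TStopwatch.StartNew;
  Q.Open;                      // roughly: server execution + initial transfer
  OpenMs := SW.ElapsedMilliseconds;

  SW := TStopwatch.StartNew;
  while not Q.Eof do           // roughly: remaining transfer + client-side buffering
    Q.Next;
  FetchMs := SW.ElapsedMilliseconds;

  Writeln(Format('Open: %d ms, fetch loop: %d ms', [OpenMs, FetchMs]));
end;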
Anyway, before looking for a solution, you have to identify the problem. Profile your application, both client side and server side (SQL Server has good tools for that), and find out exactly what makes it slower. Then, and only then, can you look for the correct solution. Maybe the data access layer is not the problem. 20,000 records is a small dataset today, not a large one.
Answer 7:
You cannot use the native clients with ADO, as is.
ADO does not understand the XML SQL Server data type. The field type:
field: ADOField;
field := recordset.Fields.Items['SomeXmlColumn'];
Attempting to access field.Value throws an EOleException:
- Source: Microsoft Cursor Engine
- ErrorCode: 0x80040E21 (E_ITF_0E21)
- Message: Multiple-step operation generated errors. Check each status value
The native client drivers (e.g. SQLNCLI, SQLNCLI10, SQLNCLI11) present an Xml data type to ADO as:
field.Type_ = 141
While the legacy SQLOLEDB driver presents an Xml data type to ADO as adLongVarWChar, a Unicode string:
field.Type_ = 203 //adLongVarWChar
And the VARIANT contained in field.Value is a WideString (technically known as a BSTR):
TVarData(field.Value).VType = 8 //VT_BSTR
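As a hedged sketch (same fragment style as above), client code could branch on the reported type before touching the value; XmlText is a hypothetical string variable, adLongVarWChar comes from the Delphi ADO import unit, and VarToStr from the Variants unit:

if field.Type_ = adLongVarWChar then          // 203: SQLOLEDB, or Native Client with DataTypeCompatibility=80
  XmlText := VarToStr(field.Value)            // the XML arrives as a plain Unicode string
else
  raise Exception.Create('XML column not readable via field.Value with this provider'); // e.g. Type_ = 141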
The solution, as noted by Microsoft:
Using ADO with SQL Server Native Client
Existing ADO applications can access and update XML, UDT, and large value text and binary field values using the SQLOLEDB provider. The new larger varchar(max), nvarchar(max), and varbinary(max) data types are returned as the ADO types adLongVarChar, adLongVarWChar and adLongVarBinary respectively. XML columns are returned as adLongVarChar, and UDT columns are returned as adVarBinary. However, if you use the SQL Server Native Client OLE DB provider (SQLNCLI11) instead of SQLOLEDB, you need to make sure to set the DataTypeCompatibility keyword to "80" so that the new data types will map correctly to the ADO data types.
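A minimal sketch of that keyword in practice, following the quoted guidance; the server and database names are placeholders:

ADOConnection1.ConnectionString :=
  'Provider=SQLNCLI11;Data Source=MyServer;Initial Catalog=MyDb;' +
  'Integrated Security=SSPI;DataTypeCompatibility=80;';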
They also note:
If you do not need to use any of the new features introduced in SQL Server 2005, there is no need to use the SQL Server Native Client OLE DB provider; you can continue using your current data access provider, which is typically SQLOLEDB.
Answer 8:
Also, besides the lack of support for the XML data type, Delphi ADO does not recognize columns defined in SQL Server as TIME (DBTYPE_DBTIME2=145) or DATETIMEOFFSET (DBTYPE_DBTIMESTAMPOFFSET=146); trying to use those fields in your application will cause multiple errors like 'Invalid Variant Value', or some controls (like TDBGrid) will simply drop the field entirely.
The lack of support for DBTYPE_DBTIME2=145 seems like a bug/QC issue, since ftTime support already exists (it's also not clear to me why SQL Server doesn't return DBTYPE_DBTIME, which Delphi does support). The XML and DATETIMEOFFSET types, on the other hand, have no clear TFieldType mapping.
Data Type Support for OLE DB Date/Time Improvements
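Not part of the original answer, but one commonly used workaround (a sketch with placeholder table and column names) is to cast such columns on the server side to types the legacy ADO mapping already understands:

Q.SQL.Text :=
  'select Id, ' +
  '       cast(StartTime as datetime) as StartTime, ' +        // TIME column made readable
  '       cast(CreatedAt as datetime) as CreatedAt_Local ' +   // DATETIMEOFFSET, offset truncated
  'from dbo.SomeTable';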
Source: https://stackoverflow.com/questions/9082907/delphi-with-sql-server-oledb-vs-native-client-drivers