I am experimenting with a program that inserts data into a SQL Server 2005 database (on XP SP3) at a high rate of speed. (This is for collecting timing data so I can evaluate …)
DATETIME is stored as two integers: one representing the date part and the other the time part (the number of ticks after midnight). Each tick is 1/300 of a second, so it has a theoretical precision of about 3.3 milliseconds.
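A quick way to see this granularity (a sketch; the literal timestamp values are just for illustration): millisecond inputs are rounded to the nearest 1/300-second tick, so stored DATETIME values always end in .000, .003 or .007:

```sql
-- DATETIME rounds the millisecond part to the nearest 1/300-second tick
SELECT CAST('2010-08-01 00:56:53.001' AS DATETIME) AS R1, -- rounds to .000
       CAST('2010-08-01 00:56:53.004' AS DATETIME) AS R2, -- rounds to .003
       CAST('2010-08-01 00:56:53.006' AS DATETIME) AS R3  -- rounds to .007
```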
I just tried running this on my machine

declare @d varchar(24)
while 1=1  -- infinite loop; stop it manually
begin
    set @d = CONVERT(VARCHAR(24), GETDATE(), 113)
    raiserror('%s', 0, 1, @d) with nowait  -- WITH NOWAIT flushes each message immediately
end
and got a fairly lengthy run where the value went up one tick at a time, so I don't think there is any inherent limitation that prevents it from achieving that precision.
01 Aug 2010 00:56:53:913
...
01 Aug 2010 00:56:53:913
01 Aug 2010 00:56:53:917
...
01 Aug 2010 00:56:53:917
01 Aug 2010 00:56:53:920
...
01 Aug 2010 00:56:53:920
01 Aug 2010 00:56:53:923
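You can also look at the two internal integers directly by casting to BINARY(8) (a sketch; the timestamp value is just one of the ticks from the run above):

```sql
-- first 4 bytes: days since 1900-01-01; last 4 bytes: 1/300-second ticks since midnight
SELECT CAST(CAST('2010-08-01 00:56:53.913' AS DATETIME) AS BINARY(8)) AS RawBytes
```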
Regarding your query about GETDATE() precision in SQL Server 2008: this is the same as in SQL Server 2005. SYSDATETIME is meant to have higher precision. I just tried running the following and was surprised by the discrepancy between the two results.
SET NOCOUNT ON
CREATE TABLE #DT2(
    [D1] [datetime2](7) DEFAULT (getdate()),
    [D2] [datetime2](7) DEFAULT (sysdatetime())
)
GO
INSERT INTO #DT2
DEFAULT VALUES
GO 100  -- GO with a count repeats the preceding batch 100 times
SELECT DISTINCT [D1], [D2],
       DATEDIFF(MICROSECOND, [D1], [D2]) AS MS  -- difference in microseconds
FROM #DT2
Results (MS is the difference in microseconds):

D1                          D2                          MS
--------------------------- --------------------------- ------
2010-08-01 18:45:26.0570000 2010-08-01 18:45:26.0625000   5500
2010-08-01 18:45:26.0600000 2010-08-01 18:45:26.0625000   2500
2010-08-01 18:45:26.0630000 2010-08-01 18:45:26.0625000   -500
2010-08-01 18:45:26.0630000 2010-08-01 18:45:26.0781250  15125
2010-08-01 18:45:26.0670000 2010-08-01 18:45:26.0781250  11125
2010-08-01 18:45:26.0700000 2010-08-01 18:45:26.0781250   8125
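One pattern worth noting (my own observation from the run above, not documented behaviour): every SYSDATETIME value lands exactly on a multiple of 1/64 second (15.625 ms), which suggests the clock source on that machine was only advancing at the Windows timer interval. The arithmetic:

```sql
-- both fractional-second values are whole multiples of 1/64 s (0.015625 s)
SELECT 0.0625000 / 0.015625 AS SixtyFourths1,  -- 4
       0.0781250 / 0.015625 AS SixtyFourths2   -- 5
```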