> We're using SQL Server 2005 to track a fair amount of constantly incoming data (5-15 updates per second). We noticed after it has been in production for a couple of months that …
A looping approach should use multiple seeks (but loses some parallelism). It might be worth a try for cases with relatively few distinct values compared to the total number of rows (low cardinality).
The idea is from this question:
select typeName into #Result from Types where 1 = 0;  -- create an empty temp table with the right column type
declare @t varchar(100);
set @t = (select min(typeName) from Types);            -- seek to the lowest value (split from DECLARE so it also runs on SQL Server 2005)
while @t is not null
begin
    insert into #Result values (@t);                   -- record the current value first, so the minimum is not skipped
    set @t = (select top 1 typeName from Types where typeName > @t order by typeName);  -- seek to the next higher value
end
select * from #Result;
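Each iteration's SET is a single index seek for the next value, so the loop only pays off if typeName is indexed. Assuming typeName is not already the leading key of the clustered index (an assumption about the schema), a narrow nonclustered index such as the hypothetical one below keeps every probe a seek:

create index IX_Types_typeName on Types (typeName);   -- hypothetical index name, purely illustrative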
And it looks like there are also some other methods (notably the recursive CTE from @Paul White):

- different-ways-to-find-distinct-values-faster-methods
- sqlservercentral Topic873124-338-5
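For reference, here is a minimal sketch of that recursive-CTE idea, adapted to the Types/typeName names used in the loop above (those names are assumptions carried over from this answer, not the schema from the linked articles). The anchor member seeks the lowest typeName; each recursive step finds the next higher one, using ROW_NUMBER because TOP and aggregates are not allowed in the recursive member:

with RecursiveDistinct as
(
    -- anchor: the lowest typeName (TOP is allowed in the anchor member)
    select top (1) T.typeName
    from Types as T
    order by T.typeName

    union all

    -- recursive member: the smallest typeName greater than the last one found;
    -- ROW_NUMBER stands in for TOP/MIN, which are not allowed here
    select R.typeName
    from
    (
        select T.typeName,
               rn = row_number() over (order by T.typeName)
        from Types as T
        join RecursiveDistinct as D
            on D.typeName < T.typeName
    ) as R
    where R.rn = 1
)
select typeName
from RecursiveDistinct
option (maxrecursion 0);  -- one recursion per distinct value, so lift the default limit of 100

As with the loop, this only helps when typeName is indexed and the number of distinct values is small relative to the total row count.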