SQL String comparison speed 'like' vs 'patindex'

无人共我 2020-12-05 15:37

I had a query as follows (simplified)...

SELECT     *
FROM       table1 AS a
INNER JOIN table2 AS b ON (a.name LIKE '%' + b.name + '%')
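
For comparison, the PATINDEX form implied by the title would presumably look something like this (a hypothetical reconstruction; PATINDEX returns the 1-based position of the first match, or 0 when there is no match, so testing for > 0 keeps the same rows as the LIKE predicate):

-- Presumed PATINDEX equivalent of the join above
SELECT     *
FROM       table1 AS a
INNER JOIN table2 AS b ON (PATINDEX('%' + b.name + '%', a.name) > 0)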

3 Answers
  • 2020-12-05 15:49

    That kind of repeatable difference in performance is most likely due to a difference in the execution plans for the two queries.

    Have SQL Server return the actual execution plan when each query is run, and compare the execution plans.

    Also, run each query twice, and throw out the timing for the first run when you compare the performance of the two queries. (The first run may include a lot of heavy lifting, such as statement parsing and database I/O; the second run gives an elapsed time that can be more validly compared to the other query.)
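
    A minimal way to set this up (just a sketch; the table names and predicates are taken from the question above) is to turn on the timing output and the actual-plan output, then run both queries back to back:

    SET STATISTICS TIME ON;   -- print CPU and elapsed time for each statement
    SET STATISTICS XML ON;    -- return the actual execution plan as XML

    -- Run each query twice and compare the second (warm-cache) timings.
    SELECT     *
    FROM       table1 AS a
    INNER JOIN table2 AS b ON (a.name LIKE '%' + b.name + '%')

    SELECT     *
    FROM       table1 AS a
    INNER JOIN table2 AS b ON (PATINDEX('%' + b.name + '%', a.name) > 0)

    SET STATISTICS XML OFF;
    SET STATISTICS TIME OFF;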

    "Can anyone explain why LIKE is so much slower than PATINDEX?"

    The execution plan for each query will likely explain the difference.

    "Is it simply a matter of how efficiently the two functions have been written?"

    It's not really a matter of how efficiently the functions are written. What really matters is the generated execution plan. What matters is if the predicates are sargable and whether the optimizer chooses to use available indexes.
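
    (As a generic illustration of sargability, not taken from the question: a known prefix lets the optimizer seek an index on name, while a leading wildcard, or any function wrapped around the column, forces a scan regardless of whether LIKE or PATINDEX is used.)

    -- Sargable: the prefix is known, so an index on name can be used for a seek.
    SELECT * FROM table1 WHERE name LIKE 'abc%'

    -- Not sargable: a leading wildcard (or a function around the column)
    -- forces a scan, whichever way the predicate is written.
    SELECT * FROM table1 WHERE name LIKE '%abc%'
    SELECT * FROM table1 WHERE PATINDEX('%abc%', name) > 0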


    [EDIT]

    In the quick test I ran, I see a difference in the execution plans. With the LIKE operator in the join predicate, the plan includes a "Table Spool (Lazy Spool)" operation on table2 after the "Compute Scalar" operation. With the PATINDEX function, I don't see a "Table Spool" operation in the plan. But the plans I'm getting may be significantly different from the plans you get, given differences in the queries, tables, indexes and statistics.

    [EDIT]

    The only difference I see in the execution plan output for the two queries (aside from expression placeholder names) is the calls to the three internal functions (LikeRangeStart, LikeRangeEnd, and LikeRangeInfo) in place of one call to the PATINDEX function. These functions appear to be called for each row in the result set, and the resulting expressions are used for the scan of the inner table in a nested loop.

    So it does look as if the three function calls for the LIKE operator could be more expensive (in elapsed time) than the single call to the PATINDEX function. (The execution plan shows those functions being called for each row in the outer result set of a nested loop join; for a large number of rows, even a slight difference in elapsed time can be multiplied enough times to produce a significant performance difference.)


    After running some test cases on my system, I'm still baffled by the results you are seeing.

    Maybe it is an issue with the performance of the calls to the PATINDEX function vs. the calls to the three internal functions (LikeRangeStart, LikeRangeEnd, LikeRangeInfo).

    It's possible that, performed over a large enough result set, a small difference in elapsed time could be multiplied into a significant difference.

    But I actually find it to be somewhat surprising that a query using the LIKE operator would take significantly longer to execute than an equivalent query using the PATINDEX function.

  • 2020-12-05 15:51

    I'm not at all convinced by the thesis that it is the extra overhead of the LikeRangeStart, LikeRangeEnd, LikeRangeInfo functions that is responsible for the time discrepancy.

    It is simply not reproducible (at least in my test, with the default collation etc.). When I try the following:

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;
    
    DECLARE @T TABLE (name sysname )
    INSERT INTO @T
    SELECT TOP 2500 name + '...' + 
       CAST(ROW_NUMBER() OVER (ORDER BY (SELECT 0)) AS VARCHAR)
    FROM sys.all_columns
    
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;
    PRINT '***'
    SELECT     COUNT(*)
    FROM       @T AS a
    INNER JOIN @T AS b ON (a.name LIKE '%' + b.name + '%')
    
    PRINT '***'
    SELECT     COUNT(*)
    FROM       @T AS a
    INNER JOIN @T AS b ON (PATINDEX('%' + b.name + '%', a.name) > 0)
    

    This gives essentially the same plan for both (and that plan also contains these various internal functions), yet I get the following.

    LIKE

    Table '#5DB5E0CB'. Scan count 2, logical reads 40016
    CPU time = 26953 ms,  elapsed time = 28083 ms.
    

    PATINDEX

    Table '#5DB5E0CB'. Scan count 2, logical reads 40016
    CPU time = 28329 ms,  elapsed time = 29458 ms.
    

    I do notice, however, that if I substitute a #temp table for the table variable, the estimated number of rows going into the stream aggregate is significantly different.

    The LIKE version has an estimated 330,596 and PATINDEX an estimated 1,875,000.

    I notice you also have a hash join in your plan. Possibly, because the PATINDEX version seems to estimate a greater number of rows than the LIKE version, that query gets a larger memory grant and so doesn't have to spill the hash operation to disk. Try tracing Hash Warnings in Profiler to see whether this is the case.
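
    (If you'd rather use Extended Events than Profiler, here is a rough sketch for catching hash spills; the session name is made up:)

    CREATE EVENT SESSION HashSpillCheck ON SERVER
    ADD EVENT sqlserver.hash_warning          -- fired when a hash join/aggregate spills to tempdb
    ADD TARGET package0.ring_buffer;

    ALTER EVENT SESSION HashSpillCheck ON SERVER STATE = START;

    -- ... run the LIKE and PATINDEX queries here ...

    -- Inspect whatever was captured, then clean up.
    SELECT CAST(t.target_data AS XML) AS captured_events
    FROM sys.dm_xe_sessions AS s
    JOIN sys.dm_xe_session_targets AS t ON s.address = t.event_session_address
    WHERE s.name = 'HashSpillCheck';

    ALTER EVENT SESSION HashSpillCheck ON SERVER STATE = STOP;
    DROP EVENT SESSION HashSpillCheck ON SERVER;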

  • 2020-12-05 15:55

    Perhaps this is a question of DB Caching...

    Try clearing the caches before running each query, using these DBCC commands (a minimal usage sketch follows the list):

    • DBCC DROPCLEANBUFFERS
    • DBCC FREEPROCCACHE
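
    -- Minimal usage sketch (do this only on a test server; both commands flush
    -- caches server-wide). CHECKPOINT first so dirty pages are written out
    -- before the clean buffers are dropped.
    CHECKPOINT;
    DBCC DROPCLEANBUFFERS;   -- empty the buffer pool (data cache)
    DBCC FREEPROCCACHE;      -- throw away all cached execution plans

    -- ... now run the LIKE or PATINDEX query and record its cold-cache timing ...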