Levenshtein distance in T-SQL

Backend · open · 6 answers · 661 views

Happy的楠姐 asked on 2020-11-22 06:30

I am interested in an algorithm for calculating the Levenshtein distance in T-SQL.

6 Answers
  • 2020-11-22 07:01

    In T-SQL, the best and fastest way to compare two items is a SELECT statement that joins tables on indexed columns. So that is how I suggest implementing the edit distance if you want to benefit from the advantages of an RDBMS engine. T-SQL loops will work too, but for large volumes of comparisons, Levenshtein distance calculations will be faster in other languages than in T-SQL.

    I have implemented the edit distance in several systems using a series of joins against temporary tables designed for that purpose only. It requires some heavy pre-processing steps - the preparation of the temporary tables - but it works very well with large numbers of comparisons.

    In a few words: the pre-processing consists of creating, populating and indexing temp tables. The first one contains reference ids, a one-letter column and a charindex column. This table is populated by running a series of insert queries that split every word into letters (using SELECT SUBSTRING), creating as many rows as the words in the source list have letters (I know, that's a lot of rows, but SQL Server can handle billions of rows). Then make a second table with a 2-letter column, another table with a 3-letter column, and so on. The end result is a series of tables which contain reference ids and substrings of each of the words, as well as a reference to their position in the word.

    Once this is done, the whole game is about duplicating these tables and joining them against their duplicates in a GROUP BY SELECT query which counts the number of matches. This creates a series of measures for every possible pair of words, which are then re-aggregated into a single Levenshtein distance per pair of words. A sketch of this idea follows below.
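
    A minimal sketch of these two steps, assuming a hypothetical #words(word_id, word) table (all table and column names here are illustrative, and the final re-aggregation into a distance is omitted):

    -- 1-gram temp table: one row per (word, letter, position)
    SELECT w.word_id,
           SUBSTRING(w.word, n.pos, 1) AS letter,
           n.pos
    INTO   #letters
    FROM   #words AS w
    JOIN   (SELECT TOP (100) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS pos
            FROM sys.all_objects) AS n              -- numbers 1..100; adjust to the longest word
           ON n.pos <= LEN(w.word);

    CREATE INDEX ix_letters ON #letters (letter, pos) INCLUDE (word_id);

    -- count matching letters at matching positions for every pair of words
    SELECT a.word_id AS word_id_1,
           b.word_id AS word_id_2,
           COUNT(*)  AS matches
    FROM   #letters AS a
    JOIN   #letters AS b
           ON  b.letter  = a.letter
           AND b.pos     = a.pos
           AND b.word_id > a.word_id
    GROUP BY a.word_id, b.word_id;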

    Technically this is very different from most other implementations of the Levenshtein distance (or its variants), so you need to deeply understand how the Levenshtein distance works and why it was designed the way it is. Investigate the alternatives as well, because with this method you end up with a series of underlying metrics which can help calculate many variants of the edit distance at the same time, opening up interesting possibilities for machine-learning improvements.

    Another point already mentioned by previous answers on this page: try to pre-process as much as possible to eliminate the pairs that do not require a distance measurement, as sketched below. For example, a pair of words that do not share a single letter can be excluded, because their edit distance can be derived directly from the string lengths. Likewise, do not measure the distance between two copies of the same word, since it is 0 by definition, and remove duplicates before doing the measurement: if your list of words comes from a long text, the same words are likely to appear more than once, so measuring the distance only once saves processing time.
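
    A hedged sketch of this pre-filtering, again using the hypothetical #words table: deduplicate the list and keep only one ordering of each pair, which also skips identical words (distance 0 by definition):

    SELECT w1.word AS word1, w2.word AS word2
    INTO   #candidate_pairs
    FROM   (SELECT DISTINCT word FROM #words) AS w1
    JOIN   (SELECT DISTINCT word FROM #words) AS w2
           ON w1.word < w2.word;   -- excludes self-pairs and mirrored duplicates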

  • 2020-11-22 07:02

    IIRC, with SQL Server 2005 and later you can write stored procedures in any .NET language: Using CLR Integration in SQL Server 2005. With that it shouldn't be hard to write a procedure for calculating the Levenshtein distance.

    A simple Hello, World! extracted from the help:

    using System;
    using System.Data;
    using Microsoft.SqlServer.Server;
    using System.Data.SqlTypes;
    
    public class HelloWorldProc
    {
        [Microsoft.SqlServer.Server.SqlProcedure]
        public static void HelloWorld(out string text)
        {
            SqlContext.Pipe.Send("Hello world!" + Environment.NewLine);
            text = "Hello world!";
        }
    }
    

    Then in your SQL Server run the following:

    CREATE ASSEMBLY helloworld from 'c:\helloworld.dll' WITH PERMISSION_SET = SAFE
    
    CREATE PROCEDURE hello
    @i nchar(25) OUTPUT
    AS
    EXTERNAL NAME helloworld.HelloWorldProc.HelloWorld
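
    Note that CLR execution is disabled by default; before the procedure can run you may need to enable it (a minimal sketch, requiring appropriate server permissions):

    EXEC sp_configure 'clr enabled', 1;
    RECONFIGURE;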
    

    And now you can test run it:

    DECLARE @J nchar(25)
    EXEC hello @J out
    PRINT @J
    

    Hope this helps.

  • 2020-11-22 07:06

    You can use the Levenshtein distance algorithm for comparing strings.

    Here is a T-SQL example from http://www.kodyaz.com/articles/fuzzy-string-matching-using-levenshtein-distance-sql-server.aspx:

    CREATE FUNCTION edit_distance(@s1 nvarchar(3999), @s2 nvarchar(3999))
    RETURNS int
    AS
    BEGIN
     DECLARE @s1_len int, @s2_len int
     DECLARE @i int, @j int, @s1_char nchar, @c int, @c_temp int
     -- @cv1 holds the previous row of the distance matrix and @cv0 the current row,
     -- each cell stored as a 2-byte value inside a varbinary "array"
     DECLARE @cv0 varbinary(8000), @cv1 varbinary(8000)

     SELECT
      @s1_len = LEN(@s1),
      @s2_len = LEN(@s2),
      @cv1 = 0x0000,
      @j = 1, @i = 1, @c = 0

     -- initialize the first row to 0, 1, 2, ..., @s2_len
     WHILE @j <= @s2_len
      SELECT @cv1 = @cv1 + CAST(@j AS binary(2)), @j = @j + 1

     WHILE @i <= @s1_len
     BEGIN
      SELECT
       @s1_char = SUBSTRING(@s1, @i, 1),
       @c = @i,
       @cv0 = CAST(@i AS binary(2)),
       @j = 1

      WHILE @j <= @s2_len
      BEGIN
       SET @c = @c + 1                                           -- cell to the left + 1
       SET @c_temp = CAST(SUBSTRING(@cv1, @j+@j-1, 2) AS int) +
        CASE WHEN @s1_char = SUBSTRING(@s2, @j, 1) THEN 0 ELSE 1 END   -- diagonal + substitution cost
       IF @c > @c_temp SET @c = @c_temp
       SET @c_temp = CAST(SUBSTRING(@cv1, @j+@j+1, 2) AS int)+1  -- cell above + 1
       IF @c > @c_temp SET @c = @c_temp
       SELECT @cv0 = @cv0 + CAST(@c AS binary(2)), @j = @j + 1
      END

      SELECT @cv1 = @cv0, @i = @i + 1
     END

     RETURN @c   -- value of the last cell computed, i.e. the distance
    END
    

    (Function developed by Joseph Gama)

    Usage :

    select
     dbo.edit_distance('Fuzzy String Match','fuzzy string match'),
     dbo.edit_distance('fuzzy','fuzy'),
     dbo.edit_distance('Fuzzy String Match','fuzy string match'),
     dbo.edit_distance('levenshtein distance sql','levenshtein sql server'),
     dbo.edit_distance('distance','server')
    

    The function returns the number of single-character edits (insertions, deletions and substitutions) required to change one string into the other.

  • 2020-11-22 07:08

    I was looking for a code example of the Levenshtein algorithm, too, and was happy to find it here. Of course I wanted to understand how the algorithm works, so I played around a little bit with one of the above examples, the one posted by Veve. In order to get a better understanding of the code I created an EXCEL sheet with the matrix.

    (Image: distance matrix for FUZZY compared with FUZY)

    Images say more than 1000 words.

    With this EXCEL sheet I found that there was potential for an additional performance optimization. All values in the upper-right red area do not need to be calculated. The value of each red cell is simply the value of the cell to its left plus 1, because in that area the second string is always longer than the first, which increases the distance by 1 for each additional character.

    You can reflect that by using the statement IF @j <= @i and increasing the value of @i prior to this statement.

    CREATE FUNCTION [dbo].[f_LevenshteinDistance](@s1 nvarchar(3999), @s2 nvarchar(3999))
    RETURNS int
    AS
    BEGIN
       DECLARE @s1_len  int;
       DECLARE @s2_len  int;
       DECLARE @i       int;
       DECLARE @j       int;
       DECLARE @s1_char nchar;
       DECLARE @c       int;
       DECLARE @c_temp  int;
       DECLARE @cv0     varbinary(8000);
       DECLARE @cv1     varbinary(8000);

       SELECT
          @s1_len = LEN(@s1),
          @s2_len = LEN(@s2),
          @cv1    = 0x0000  ,
          @j      = 1       ,
          @i      = 1       ,
          @c      = 0

       -- initialize the first row to 0, 1, 2, ..., @s2_len
       WHILE @j <= @s2_len
          SELECT @cv1 = @cv1 + CAST(@j AS binary(2)), @j = @j + 1;

       WHILE @i <= @s1_len
       BEGIN
          SELECT
             @s1_char = SUBSTRING(@s1, @i, 1),
             @c       = @i                   ,
             @cv0     = CAST(@i AS binary(2)),
             @j       = 1;

          -- @i is incremented up front, as described above, so the IF below
          -- compares @j against the already-incremented row index
          SET @i = @i + 1;

          WHILE @j <= @s2_len
          BEGIN
             SET @c = @c + 1;

             -- cells in the upper-right area skip the full min() computation;
             -- they are simply taken as "left neighbour + 1"
             IF @j <= @i
             BEGIN
                SET @c_temp = CAST(SUBSTRING(@cv1, @j + @j - 1, 2) AS int) + CASE WHEN @s1_char = SUBSTRING(@s2, @j, 1) THEN 0 ELSE 1 END;
                IF @c > @c_temp SET @c = @c_temp;
                SET @c_temp = CAST(SUBSTRING(@cv1, @j + @j + 1, 2) AS int) + 1;
                IF @c > @c_temp SET @c = @c_temp;
             END;
             SELECT @cv0 = @cv0 + CAST(@c AS binary(2)), @j = @j + 1;
          END;
          SET @cv1 = @cv0;
       END;
       RETURN @c;
    END;
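
    Usage, matching the FUZZY/FUZY example above (this should return 1):

    SELECT dbo.f_LevenshteinDistance('FUZZY', 'FUZY');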
    
  • 2020-11-22 07:12

    I implemented the standard Levenshtein edit distance function in T-SQL with several optimizations that improve the speed over the other versions I'm aware of. In cases where the two strings share characters at their start (a common prefix), share characters at their end (a common suffix), or when the strings are large and a max edit distance is provided, the improvement in speed is significant. For example, when the inputs are two very similar 4000-character strings and a max edit distance of 2 is specified, this is almost three orders of magnitude faster than the edit_distance_within function in the accepted answer, returning the answer in 0.073 seconds (73 milliseconds) vs 55 seconds.

    It's also memory efficient, using space equal to the larger of the two input strings plus some constant space. It uses a single nvarchar "array" representing a column, does all computations in place in it, and needs only a few helper int variables.

    Optimizations:

    • skips processing of shared prefix and/or suffix
    • early return if larger string starts or ends with entire smaller string
    • early return if difference in sizes guarantees max distance will be exceeded
    • uses only a single array representing a column in the matrix (implemented as nvarchar)
    • when a max distance is given, time complexity goes from O(len1*len2) to O(min(len1,len2)), i.e. linear
    • when a max distance is given, early return as soon as max distance bound is known not to be achievable

    Here is the code (updated 1/20/2014 to speed it up a bit more):

    -- =============================================
    -- Computes and returns the Levenshtein edit distance between two strings, i.e. the
    -- number of insertion, deletion, and substitution edits required to transform one
    -- string to the other, or NULL if @max is exceeded. Comparisons use the case-
    -- sensitivity configured in SQL Server (case-insensitive by default).
    -- 
    -- Based on Sten Hjelmqvist's "Fast, memory efficient" algorithm, described
    -- at http://www.codeproject.com/Articles/13525/Fast-memory-efficient-Levenshtein-algorithm,
    -- with some additional optimizations.
    -- =============================================
    CREATE FUNCTION [dbo].[Levenshtein](
        @s nvarchar(4000)
      , @t nvarchar(4000)
      , @max int
    )
    RETURNS int
    WITH SCHEMABINDING
    AS
    BEGIN
        DECLARE @distance int = 0 -- return variable
              , @v0 nvarchar(4000)-- running scratchpad for storing computed distances
          , @start int = 1      -- index (1 based) of first non-matching character between the two strings
              , @i int, @j int      -- loop counters: i for s string and j for t string
              , @diag int          -- distance in cell diagonally above and left if we were using an m by n matrix
              , @left int          -- distance in cell to the left if we were using an m by n matrix
              , @sChar nchar      -- character at index i from s string
              , @thisJ int          -- temporary storage of @j to allow SELECT combining
              , @jOffset int      -- offset used to calculate starting value for j loop
              , @jEnd int          -- ending value for j loop (stopping point for processing a column)
              -- get input string lengths including any trailing spaces (which SQL Server would otherwise ignore)
              , @sLen int = datalength(@s) / datalength(left(left(@s, 1) + '.', 1))    -- length of smaller string
              , @tLen int = datalength(@t) / datalength(left(left(@t, 1) + '.', 1))    -- length of larger string
              , @lenDiff int      -- difference in length between the two strings
        -- if strings of different lengths, ensure shorter string is in s. This can result in a little
        -- faster speed by spending more time spinning just the inner loop during the main processing.
        IF (@sLen > @tLen) BEGIN
            SELECT @v0 = @s, @i = @sLen -- temporarily use v0 for swap
            SELECT @s = @t, @sLen = @tLen
            SELECT @t = @v0, @tLen = @i
        END
        SELECT @max = ISNULL(@max, @tLen)
             , @lenDiff = @tLen - @sLen
        IF @lenDiff > @max RETURN NULL
    
        -- suffix common to both strings can be ignored
        WHILE(@sLen > 0 AND SUBSTRING(@s, @sLen, 1) = SUBSTRING(@t, @tLen, 1))
            SELECT @sLen = @sLen - 1, @tLen = @tLen - 1
    
        IF (@sLen = 0) RETURN @tLen
    
        -- prefix common to both strings can be ignored
        WHILE (@start < @sLen AND SUBSTRING(@s, @start, 1) = SUBSTRING(@t, @start, 1)) 
            SELECT @start = @start + 1
        IF (@start > 1) BEGIN
            SELECT @sLen = @sLen - (@start - 1)
                 , @tLen = @tLen - (@start - 1)
    
            -- if all of shorter string matches prefix and/or suffix of longer string, then
            -- edit distance is just the delete of additional characters present in longer string
            IF (@sLen <= 0) RETURN @tLen
    
            SELECT @s = SUBSTRING(@s, @start, @sLen)
                 , @t = SUBSTRING(@t, @start, @tLen)
        END
    
        -- initialize v0 array of distances
        SELECT @v0 = '', @j = 1
        WHILE (@j <= @tLen) BEGIN
            SELECT @v0 = @v0 + NCHAR(CASE WHEN @j > @max THEN @max ELSE @j END)
            SELECT @j = @j + 1
        END
    
        SELECT @jOffset = @max - @lenDiff
             , @i = 1
        WHILE (@i <= @sLen) BEGIN
            SELECT @distance = @i
                 , @diag = @i - 1
                 , @sChar = SUBSTRING(@s, @i, 1)
                 -- no need to look beyond window of upper left diagonal (@i) + @max cells
                 -- and the lower right diagonal (@i - @lenDiff) - @max cells
                 , @j = CASE WHEN @i <= @jOffset THEN 1 ELSE @i - @jOffset END
                 , @jEnd = CASE WHEN @i + @max >= @tLen THEN @tLen ELSE @i + @max END
            WHILE (@j <= @jEnd) BEGIN
                -- at this point, @distance holds the previous value (the cell above if we were using an m by n matrix)
                SELECT @left = UNICODE(SUBSTRING(@v0, @j, 1))
                     , @thisJ = @j
                SELECT @distance = 
                    CASE WHEN (@sChar = SUBSTRING(@t, @j, 1)) THEN @diag                    --match, no change
                         ELSE 1 + CASE WHEN @diag < @left AND @diag < @distance THEN @diag    --substitution
                                       WHEN @left < @distance THEN @left                    -- insertion
                                       ELSE @distance                                        -- deletion
                                    END    END
                SELECT @v0 = STUFF(@v0, @thisJ, 1, NCHAR(@distance))
                     , @diag = @left
                     , @j = case when (@distance > @max) AND (@thisJ = @i + @lenDiff) then @jEnd + 2 else @thisJ + 1 end
            END
            SELECT @i = CASE WHEN @j > @jEnd + 1 THEN @sLen + 1 ELSE @i + 1 END
        END
        RETURN CASE WHEN @distance <= @max THEN @distance ELSE NULL END
    END
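
    Usage sketch (the third argument is the maximum distance; NULL is returned when it is exceeded):

    SELECT dbo.Levenshtein('kitten', 'sitting', NULL);   -- should return 3
    SELECT dbo.Levenshtein('kitten', 'sitting', 2);      -- should return NULL, distance exceeds @max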
    

    As mentioned in the comments of this function, the case sensitivity of the character comparisons follows the collation that is in effect. By default, SQL Server's collation results in case-insensitive comparisons. One way to modify this function to always be case sensitive would be to add a specific collation to the two places where strings are compared. However, I have not thoroughly tested this, especially for side effects when the database uses a non-default collation. This is how the two lines would be changed to force case-sensitive comparisons:

        -- prefix common to both strings can be ignored
        WHILE (@start < @sLen AND SUBSTRING(@s, @start, 1) = SUBSTRING(@t, @start, 1) COLLATE SQL_Latin1_General_Cp1_CS_AS) 
    

    and

                SELECT @distance = 
                    CASE WHEN (@sChar = SUBSTRING(@t, @j, 1) COLLATE SQL_Latin1_General_Cp1_CS_AS) THEN @diag                    --match, no change
    
  • 2020-11-22 07:24

    Arnold Fribble had two proposals on sqlteam.com/forums

    • one from june 2005 and
    • another updated one from may 2006

    This is the more recent one, from 2006:

    SET QUOTED_IDENTIFIER ON 
    GO
    SET ANSI_NULLS ON 
    GO
    
    CREATE FUNCTION edit_distance_within(@s nvarchar(4000), @t nvarchar(4000), @d int)
    RETURNS int
    AS
    BEGIN
      DECLARE @sl int, @tl int, @i int, @j int, @sc nchar, @c int, @c1 int,
        @cv0 nvarchar(4000), @cv1 nvarchar(4000), @cmin int
      SELECT @sl = LEN(@s), @tl = LEN(@t), @cv1 = '', @j = 1, @i = 1, @c = 0
      -- @cv1 holds the previous row of the distance matrix, encoded as character codes;
      -- initialize it to 1, 2, ..., @tl
      WHILE @j <= @tl
        SELECT @cv1 = @cv1 + NCHAR(@j), @j = @j + 1
      WHILE @i <= @sl
      BEGIN
        -- @cmin tracks the minimum of the current row for the early-abandon check against @d
        SELECT @sc = SUBSTRING(@s, @i, 1), @c1 = @i, @c = @i, @cv0 = '', @j = 1, @cmin = 4000
        WHILE @j <= @tl
        BEGIN
          SET @c = @c + 1
          SET @c1 = @c1 - CASE WHEN @sc = SUBSTRING(@t, @j, 1) THEN 1 ELSE 0 END
          IF @c > @c1 SET @c = @c1
          SET @c1 = UNICODE(SUBSTRING(@cv1, @j, 1)) + 1
          IF @c > @c1 SET @c = @c1
          IF @c < @cmin SET @cmin = @c
          SELECT @cv0 = @cv0 + NCHAR(@c), @j = @j + 1
        END
        -- once every cell in a row exceeds @d, the distance cannot drop back below it
        IF @cmin > @d BREAK
        SELECT @cv1 = @cv0, @i = @i + 1
      END
      -- -1 signals that the distance exceeds @d
      RETURN CASE WHEN @cmin <= @d AND @c <= @d THEN @c ELSE -1 END
    END
    GO
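
    Usage sketch (-1 is returned when the distance exceeds the threshold @d):

    SELECT dbo.edit_distance_within('kitten', 'sitting', 10);   -- should return 3
    SELECT dbo.edit_distance_within('kitten', 'sitting', 2);    -- should return -1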
    