Simple spell checking algorithm

星月不相逢 2021-02-01 10:21

I've been tasked with creating a simple spell checker for an assignment, but have been given next to no guidance, so I was wondering if anyone could help me out. I'm not after someone…

4 Answers
  • 2021-02-01 10:53

    For a spell checker, many data structures would be useful, for example a BK-Tree. Check Damn Cool Algorithms, Part 1: BK-Trees. I have done an implementation of the same here.

    My earlier code link may be misleading; this one is correct for a spelling corrector.
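
    For reference, here is a minimal BK-Tree sketch in C++ (not the linked implementation, just an illustration of the idea; names like levenshtein, BKNode, bk_insert and bk_query are illustrative). Words are stored in a tree keyed by edit distance, and the triangle inequality lets a lookup skip whole subtrees:

    #include <algorithm>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    // Plain dynamic-programming Levenshtein distance.
    int levenshtein(const std::string& a, const std::string& b) {
        std::vector<int> prev(b.size() + 1), curr(b.size() + 1);
        for (std::size_t j = 0; j <= b.size(); ++j) prev[j] = static_cast<int>(j);
        for (std::size_t i = 1; i <= a.size(); ++i) {
            curr[0] = static_cast<int>(i);
            for (std::size_t j = 1; j <= b.size(); ++j) {
                int cost = (a[i - 1] == b[j - 1]) ? 0 : 1;
                curr[j] = std::min({prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost});
            }
            std::swap(prev, curr);
        }
        return prev[b.size()];
    }

    // One node per word; children are keyed by their distance to this word.
    struct BKNode {
        std::string word;
        std::map<int, BKNode*> children;  // raw pointers for brevity; a real
                                          // implementation would manage ownership
    };

    void bk_insert(BKNode*& root, const std::string& word) {
        if (!root) { root = new BKNode{word, {}}; return; }
        BKNode* node = root;
        for (;;) {
            int d = levenshtein(word, node->word);
            if (d == 0) return;  // word already present
            auto it = node->children.find(d);
            if (it == node->children.end()) { node->children[d] = new BKNode{word, {}}; return; }
            node = it->second;
        }
    }

    // Collect every stored word within `limit` edits of `query`: by the
    // triangle inequality, only children whose edge distance lies in
    // [d - limit, d + limit] can contain matches.
    void bk_query(const BKNode* node, const std::string& query, int limit,
                  std::vector<std::string>& out) {
        if (!node) return;
        int d = levenshtein(query, node->word);
        if (d <= limit) out.push_back(node->word);
        for (const auto& kv : node->children)
            if (kv.first >= d - limit && kv.first <= d + limit)
                bk_query(kv.second, query, limit, out);
    }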

  • 2021-02-01 10:55

    The simplest way to solve the problem is indeed a precomputed map [bad word] -> [suggestions].

    The problem is that while removing a letter creates only a few "bad words", addition or substitution produces many candidates.

    So I would suggest another solution ;)

    Note: the edit distance you are describing is called the Levenshtein distance.

    The solution is described in incremental steps; the search speed should improve with each idea, and I have tried to organize them with the simpler ideas (in terms of implementation) first. Feel free to stop whenever you're comfortable with the results.


    0. Preliminary

    • Implement the Levenshtein distance algorithm
    • Store the dictionary in a sorted sequence (std::set for example, though a sorted std::deque or std::vector would perform better)

    Key points:

    • The Levenshtein distance computation uses a matrix; at each step the next row is computed solely from the previous row
    • The minimum distance in a row is always greater than (or equal to) the minimum of the previous row

    The latter property allows a short-circuit implementation: if you want to limit yourself to 2 errors (the threshold), then whenever the minimum of the current row exceeds 2 you can abandon the computation. A simple strategy is to return threshold + 1 as the distance.
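
    A minimal sketch of that short-circuited distance in C++ (the function name levenshtein_bounded is illustrative):

    #include <algorithm>
    #include <string>
    #include <vector>

    // Returns the exact distance if it is <= threshold, otherwise threshold + 1.
    int levenshtein_bounded(const std::string& a, const std::string& b, int threshold) {
        std::vector<int> prev(b.size() + 1), curr(b.size() + 1);
        for (std::size_t j = 0; j <= b.size(); ++j) prev[j] = static_cast<int>(j);
        for (std::size_t i = 1; i <= a.size(); ++i) {
            curr[0] = static_cast<int>(i);
            int row_min = curr[0];
            for (std::size_t j = 1; j <= b.size(); ++j) {
                int cost = (a[i - 1] == b[j - 1]) ? 0 : 1;
                curr[j] = std::min({prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost});
                row_min = std::min(row_min, curr[j]);
            }
            // Short-circuit: the row minimum can only grow, so once it
            // exceeds the threshold no later row can come back under it.
            if (row_min > threshold) return threshold + 1;
            std::swap(prev, curr);
        }
        return prev[b.size()];
    }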


    1. First Attempt

    Let's begin simple.

    We'll implement a linear scan: for each word we compute the (short-circuited) distance, and we keep the list of words which achieve the smallest distance so far.

    It works very well on smallish dictionaries.
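
    As a sketch (re-using the levenshtein_bounded function from step 0):

    #include <set>
    #include <string>
    #include <vector>

    int levenshtein_bounded(const std::string& a, const std::string& b, int threshold);  // from step 0

    // Scan the whole dictionary, keeping the words at the smallest
    // distance found so far (within the threshold).
    std::vector<std::string> suggest(const std::set<std::string>& dictionary,
                                     const std::string& word, int threshold) {
        std::vector<std::string> best;
        int best_dist = threshold + 1;
        for (const std::string& candidate : dictionary) {
            int d = levenshtein_bounded(word, candidate, threshold);
            if (d < best_dist) { best_dist = d; best.clear(); }
            if (d == best_dist && d <= threshold) best.push_back(candidate);
        }
        return best;
    }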


    2. Improving the data structure

    The Levenshtein distance is at least equal to the difference in length between the two words.

    By using the pair (length, word) as a key instead of just the word, you can restrict your search to the length range [length - edit, length + edit] and greatly reduce the search space.
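
    For example, with a std::set of (length, word) pairs (a sketch; the helper name is illustrative):

    #include <cstddef>
    #include <set>
    #include <string>
    #include <utility>
    #include <vector>

    using LenDict = std::set<std::pair<std::size_t, std::string>>;

    // Only words whose length is within `threshold` of the query's
    // length can possibly be within `threshold` edits of it.
    std::vector<std::string> candidates_by_length(const LenDict& dict,
                                                  const std::string& word,
                                                  std::size_t threshold) {
        std::size_t lo = word.size() > threshold ? word.size() - threshold : 0;
        std::size_t hi = word.size() + threshold;
        std::vector<std::string> out;
        for (auto it = dict.lower_bound({lo, std::string()});
             it != dict.end() && it->first <= hi; ++it)
            out.push_back(it->second);
        return out;
    }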


    3. Prefixes and pruning

    To improve on this, we can remark that when we build the distance matrix, row by row, one word is entirely scanned (the word we are looking for) but the other (the referent) is not: we only use one letter per row.

    This very important property means that for two referents that share the same initial sequence (prefix), the first rows of the matrix will be identical.

    Remember that I asked you to store the dictionary sorted? It means that words sharing the same prefix are adjacent.

    Suppose that you are checking your word against cartoon and at car you realize it does not work (the distance is already too large); then any word beginning with car won't work either, and you can skip words as long as they begin with car.

    The skip itself can be done either linearly or with a binary search (find the first word whose prefix is greater than car):

    • linear works best if the prefix is long (few words to skip)
    • binary search works best for short prefix (many words to skip)

    How long "long" is depends on your dictionary, and you'll have to measure. I would go with the binary search to begin with.

    Note: the length partitioning works against the prefix partitioning, but it prunes much more of the search space.
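
    A sketch of the binary-search skip on a sorted std::vector (assuming non-empty, lowercase-ASCII prefixes):

    #include <algorithm>
    #include <string>
    #include <vector>

    // Find the first word, at or after `from`, that no longer begins
    // with `prefix`. Incrementing the last character of the prefix
    // yields the smallest string greater than every word sharing it
    // (valid for lowercase ASCII, since 'z' + 1 still compares above 'z').
    std::vector<std::string>::const_iterator
    skip_prefix(const std::vector<std::string>& dict,
                std::vector<std::string>::const_iterator from,
                std::string prefix) {
        ++prefix.back();
        return std::lower_bound(from, dict.end(), prefix);
    }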


    4. Prefixes and re-use

    Now, we'll also try to re-use the computation as much as possible (and not just the "it does not work" result).

    Suppose that you have two words:

    • cartoon
    • carwash

    You first compute the matrix, row by row, for cartoon. Then when reading carwash you need to determine the length of the common prefix (here car), and you can keep the first 4 rows of the matrix (corresponding to the empty string, c, a, r).

    Therefore, when you begin computing carwash, you in fact start iterating at w.

    To do this, simply use a (two-dimensional) array allocated once at the beginning of your search, large enough to accommodate the longest referent (you should know the largest word length in your dictionary).
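
    A sketch of that re-use (rows[i] holds the row after consuming i characters of the previous referent; names are illustrative):

    #include <algorithm>
    #include <string>
    #include <vector>

    // How many leading characters two consecutive dictionary words share.
    std::size_t common_prefix(const std::string& a, const std::string& b) {
        std::size_t n = std::min(a.size(), b.size()), i = 0;
        while (i < n && a[i] == b[i]) ++i;
        return i;
    }

    // `rows` is pre-allocated once (max word length + 1 rows, query
    // length + 1 columns), with rows[0] = {0, 1, 2, ...}. The rows for
    // the shared prefix are still valid, so we resume at row keep + 1.
    void resume_rows(std::vector<std::vector<int>>& rows,
                     const std::string& referent, std::size_t keep,
                     const std::string& word) {
        for (std::size_t i = keep + 1; i <= referent.size(); ++i) {
            rows[i][0] = static_cast<int>(i);
            for (std::size_t j = 1; j <= word.size(); ++j) {
                int cost = (referent[i - 1] == word[j - 1]) ? 0 : 1;
                rows[i][j] = std::min({rows[i - 1][j] + 1, rows[i][j - 1] + 1,
                                       rows[i - 1][j - 1] + cost});
            }
        }
    }

    Moving from cartoon to carwash, keep = common_prefix("cartoon", "carwash") = 3, so the computation resumes at the row for w.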


    5. Using a "better" data structure

    To have an easier time working with prefixes, you could use a Trie or a Patricia tree to store the dictionary. However, it's not an STL data structure, and you would need to augment it to store, in each subtree, the range of word lengths stored there, so you'll have to write your own implementation. It's not as easy as it seems, because there are memory-explosion issues which can kill locality.

    This is a last resort option. It's costly to implement.
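
    For illustration only, here is what such an augmented node might look like (a hedged sketch assuming lowercase ASCII, not a full implementation):

    #include <algorithm>
    #include <array>
    #include <cstdint>
    #include <memory>
    #include <string>

    // Each node records the range of word lengths stored beneath it,
    // so whole subtrees can be pruned by the length criterion of step 2.
    struct TrieNode {
        bool is_word = false;
        std::size_t min_len = SIZE_MAX, max_len = 0;
        std::array<std::unique_ptr<TrieNode>, 26> child;  // 'a'..'z'
    };

    void trie_insert(TrieNode& root, const std::string& word) {
        TrieNode* node = &root;
        for (char c : word) {
            node->min_len = std::min(node->min_len, word.size());
            node->max_len = std::max(node->max_len, word.size());
            auto& slot = node->child[c - 'a'];
            if (!slot) slot = std::make_unique<TrieNode>();
            node = slot.get();
        }
        node->min_len = std::min(node->min_len, word.size());
        node->max_len = std::max(node->max_len, word.size());
        node->is_word = true;
    }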

  • 2021-02-01 11:19

    You should have a look at Peter Norvig's explanation of how to write a spelling corrector.

    How to write a spelling corrector

    Everything is well explained in that article; as an example, the Python code for the spell checker looks like this:

    import re, collections

    def words(text): return re.findall('[a-z]+', text.lower())

    def train(features):
        # Count word frequencies; unseen words default to a count of 1.
        model = collections.defaultdict(lambda: 1)
        for f in features:
            model[f] += 1
        return model

    NWORDS = train(words(open('big.txt').read()))

    alphabet = 'abcdefghijklmnopqrstuvwxyz'

    def edits1(word):
        # Every string one edit away: deletions, transpositions,
        # replacements and insertions.
        splits     = [(word[:i], word[i:]) for i in range(len(word) + 1)]
        deletes    = [a + b[1:] for a, b in splits if b]
        transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
        replaces   = [a + c + b[1:] for a, b in splits for c in alphabet if b]
        inserts    = [a + c + b for a, b in splits for c in alphabet]
        return set(deletes + transposes + replaces + inserts)

    def known_edits2(word):
        # Strings two edits away that actually occur in the dictionary.
        return set(e2 for e1 in edits1(word) for e2 in edits1(e1) if e2 in NWORDS)

    def known(words): return set(w for w in words if w in NWORDS)

    def correct(word):
        # Prefer known words, then known 1-edit candidates, then known
        # 2-edit candidates, ranked by frequency.
        candidates = known([word]) or known(edits1(word)) or known_edits2(word) or [word]
        return max(candidates, key=NWORDS.get)


    Hope you can find what you need on Peter Norvig's website.

  • 2021-02-01 11:20

    Off the top of my head, you could split up your suggestions based on length and build a tree structure where children are longer variations of the shorter parent.

    It should be quite fast, but I'm not sure about the code itself; I'm not very well-versed in C++. A rough sketch of the length-bucketing part is below.
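
    Something like this, perhaps (only the bucketing by length; the parent/child tree of longer variations is left abstract):

    #include <map>
    #include <string>
    #include <vector>

    // Group dictionary words by length so a lookup only touches
    // lengths near that of the misspelled word.
    std::map<std::size_t, std::vector<std::string>> by_length;

    void add_word(const std::string& w) {
        by_length[w.size()].push_back(w);
    }

    // All words whose length is within `slack` of the query's length.
    std::vector<std::string> nearby(const std::string& w, std::size_t slack) {
        std::vector<std::string> out;
        for (std::size_t n = w.size() > slack ? w.size() - slack : 0;
             n <= w.size() + slack; ++n) {
            auto it = by_length.find(n);
            if (it != by_length.end())
                out.insert(out.end(), it->second.begin(), it->second.end());
        }
        return out;
    }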
