Elasticsearch using NEST: How to configure analyzers to find partial words?

长情又很酷 · 2021-01-06 05:40

I am trying to search by partial word, ignoring case and ignoring accents on some letters. Is it possible? I think an nGram filter with the default tokenizer should do the trick.

1 Answer
  • 2021-01-06 06:15

    Short Answer

    I think what you're looking for is a fuzzy query, which uses the Levenshtein distance algorithm to match similar words.
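
    For illustration, here's a minimal fuzzy-query sketch in NEST (assuming NEST 7.x, an existing ElasticClient named client, and a hypothetical Song document with a Title property):

        // Fuzzy query: matches terms within a bounded Levenshtein edit distance.
        var response = client.Search<Song>(s => s
            .Query(q => q
                .Fuzzy(f => f
                    .Field(p => p.Title)
                    .Value("musiic")               // the misspelled search term
                    .Fuzziness(Fuzziness.Auto)))); // edit budget scales with term length

    With Fuzziness.Auto, Elasticsearch allows up to two edits for longer terms, so musiic (one extra 'i') still matches music.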

    Long Answer on nGrams

    The nGram token filter splits text into many smaller tokens based on the configured min_gram/max_gram range.

    For example, with min_gram: 2 and max_gram: 5, your term music generates: 'mu', 'us', 'si', 'ic', 'mus', 'usi', 'sic', 'musi', 'usic', and 'music'

    As you can see, the misspelled musiic does not match any of these nGram tokens, which is why the fuzzy query above is the better fit for typo tolerance.
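
    That said, nGrams are the right tool for the partial-word, case-insensitive, accent-insensitive matching asked about above. A sketch of the analyzer configuration in NEST (again assuming NEST 7.x and the hypothetical Song/Title mapping; the names songs, ngram_filter, and partial_analyzer are placeholders):

        var createIndex = client.Indices.Create("songs", c => c
            .Settings(s => s
                // ES 7 caps max_gram - min_gram at 1 by default; widen it for 2..5
                .Setting("index.max_ngram_diff", 3)
                .Analysis(a => a
                    .TokenFilters(tf => tf
                        .NGram("ngram_filter", ng => ng
                            .MinGram(2)     // shortest fragment to index
                            .MaxGram(5)))   // longest fragment to index
                    .Analyzers(an => an
                        .Custom("partial_analyzer", ca => ca
                            .Tokenizer("standard")
                            // lowercase handles casing; asciifolding strips accents
                            .Filters("lowercase", "asciifolding", "ngram_filter")))))
            .Map<Song>(m => m
                .Properties(p => p
                    .Text(t => t
                        .Name(n => n.Title)
                        .Analyzer("partial_analyzer")     // emit fragments at index time
                        .SearchAnalyzer("standard")))));  // match whole terms at query time

    Keeping the search analyzer as standard means a query term like mus is matched whole against the pre-generated fragments instead of being split into nGrams itself.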

    Why nGrams

    One benefit of nGrams is that they make wildcard queries significantly faster, because all potential substrings are pre-generated and indexed at insert time (I have seen queries speed up from multiple seconds to 15 milliseconds using nGrams).

    Without nGrams, each stored string must be scanned for a substring match at query time (roughly O(n · m) across n strings of average length m) instead of being looked up directly in the index (O(1)). As a sketch in C#:

        var hits = new List<string>();
        foreach (var s in index)        // visit every indexed string at query time
            if (s.Contains(query))      // substring check per string
                hits.Add(s);            // collect the matches

    vs

        var hits = index[query];        // direct lookup of the pre-generated token

    Note that this comes at the expense of slower inserts, more storage, and heavier memory usage.
