Token Chars Mapping to Ngram Filter ElasticSearch NEST
Question: I'm trying to replicate the mappings below using NEST and am facing an issue while mapping the token chars to the tokenizer.

```json
{
  "settings": {
    "analysis": {
      "filter": {
        "nGram_filter": {
          "type": "nGram",
          "min_gram": 2,
          "max_gram": 20,
          "token_chars": [
            "letter",
            "digit",
            "punctuation",
            "symbol"
          ]
        }
      },
      "analyzer": {
        "nGram_analyzer": {
          "type": "custom",
          "tokenizer": "whitespace",
          "filter": [
            "lowercase",
            "asciifolding",
            "nGram_filter"
          ]
        }
      }
    }
  }
}
```

I was able to replicate everything except the token chars.
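For reference, a minimal NEST sketch of the parts that do map cleanly might look like the following. This assumes the NEST 2.x–6.x fluent API; the `client` instance and the index name `"my_index"` are hypothetical placeholders, not from the original question.

```csharp
using Nest;

// Hypothetical client; connection settings omitted for brevity.
var client = new ElasticClient();

var createIndexResponse = client.CreateIndex("my_index", c => c
    .Settings(s => s
        .Analysis(a => a
            .TokenFilters(tf => tf
                // Replicates min_gram/max_gram. Note: the n-gram *token filter*
                // descriptor in NEST does not expose a token_chars setting.
                .NGram("nGram_filter", ng => ng
                    .MinGram(2)
                    .MaxGram(20)))
            .Analyzers(an => an
                .Custom("nGram_analyzer", ca => ca
                    .Tokenizer("whitespace")
                    .Filters("lowercase", "asciifolding", "nGram_filter"))))));
```

One possible explanation for the gap: in Elasticsearch itself, `token_chars` is a parameter of the n-gram *tokenizer*, not the n-gram token filter, so NEST only exposes it on the tokenizer descriptor, e.g. `.Tokenizers(t => t.NGram("nGram_tokenizer", ng => ng.TokenChars(TokenChar.Letter, TokenChar.Digit, TokenChar.Punctuation, TokenChar.Symbol)))`.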