ElasticSearch n-gram token filter not finding partial words
Question: I have been playing around with Elasticsearch for a new project of mine. I have set the default analyzers to use the ngram token filter. This is my elasticsearch.yml file:

    index:
      analysis:
        analyzer:
          default_index:
            tokenizer: standard
            filter: [standard, stop, mynGram]
          default_search:
            tokenizer: standard
            filter: [standard, stop]
        filter:
          mynGram:
            type: nGram
            min_gram: 1
            max_gram: 10

I created a new index and added the following document to it:

    $ curl -XPUT http://localhost:9200/test/newtype/3 -d
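For intuition about what the `mynGram` filter in the config above emits: an nGram token filter takes each token produced by the tokenizer and generates every character substring whose length is between `min_gram` and `max_gram`. A minimal Python sketch of that behavior (this is an illustration of the n-gram concept, not Elasticsearch's actual implementation; the function name and emission order are mine):

```python
def ngrams(token, min_gram, max_gram):
    """Generate all character n-grams of `token` with lengths
    from min_gram to max_gram inclusive, shortest first."""
    grams = []
    for n in range(min_gram, max_gram + 1):
        # slide a window of width n across the token
        for i in range(len(token) - n + 1):
            grams.append(token[i:i + n])
    return grams

# With min_gram=1, max_gram=3 the word "search" yields 15 grams:
print(ngrams("search", 1, 3))
# → ['s', 'e', 'a', 'r', 'c', 'h', 'se', 'ea', 'ar', 'rc', 'ch',
#    'sea', 'ear', 'arc', 'rch']
```

With the config's `min_gram: 1` and `max_gram: 10`, every indexed word is expanded into all of its substrings up to length 10, which is what should make partial-word matches possible at search time.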