Tokenizers vs token filters
Question

I'm trying to implement autocomplete using Elasticsearch, thinking that I understand how to do it... I'm trying to build multi-word (phrase) suggestions by using ES's edge_n_grams while indexing crawled data.

What is the difference between a tokenizer and a token_filter? I've read the docs on these but still need a better understanding of them. For instance, is a token_filter what ES uses to search against user input? Is a tokenizer what ES uses to make tokens? What is a token? Is it possible
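For context, here is a minimal sketch of the kind of edge_ngram setup I mean (Kibana Dev Tools format, assuming a recent Elasticsearch version; the index, analyzer, and filter names are placeholders I made up):

```json
PUT /my_index
{
  "settings": {
    "analysis": {
      "filter": {
        // token filter: expands each token into edge n-grams,
        // e.g. "quick" -> "q", "qu", "qui", "quic", "quick"
        "autocomplete_filter": {
          "type": "edge_ngram",
          "min_gram": 1,
          "max_gram": 20
        }
      },
      "analyzer": {
        // custom analyzer: the tokenizer runs first to split text
        // into tokens, then the token filters run in order on each token
        "autocomplete": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "autocomplete_filter"]
        }
      }
    }
  }
}
```

To inspect what tokens actually come out, the `_analyze` API can be run against that analyzer:

```json
POST /my_index/_analyze
{
  "analyzer": "autocomplete",
  "text": "quick brown"
}
```

which returns the n-grammed tokens (q, qu, qui, quic, quick, b, br, bro, ...) that would be stored in the index.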