I'd like to automatically apply n-gram tokenization on an entire Elasticsearch index.
The docs mention ultimately running an analysis to apply a tokenizer, but the analy
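For reference, the kind of per-index n-gram setup the docs describe looks roughly like this (the index name, field name, and gram bounds below are my own placeholders, not anything specific to my cluster), sent as the body of `PUT /my-index`:

```json
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "my_ngram_tokenizer": {
          "type": "ngram",
          "min_gram": 2,
          "max_gram": 3,
          "token_chars": ["letter", "digit"]
        }
      },
      "analyzer": {
        "my_ngram_analyzer": {
          "type": "custom",
          "tokenizer": "my_ngram_tokenizer"
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": { "type": "text", "analyzer": "my_ngram_analyzer" }
    }
  }
}
```

As I understand it, analyzer settings like these can only be set at index creation (or on a closed index), which is why I'm unsure how to apply this to an index that already has data.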