Solr has a built-in "Analysis Screen" that helps you debug the interplay between tokenizers and filters for specific field types.
For Elasticsearch, there is a standalone tool called elyzer, made by the folks at OpenSource Connections. It shows the state of your tokens at every step of the analysis process (char filter, tokenizer, token filter) and is very simple to use.
Install it via pip install elyzer
and then run it as a command-line tool, e.g.
$ elyzer --es "http://localhost:9200" --index tmdb --analyzer english_bigrams --text "Mary had a little lamb"
TOKENIZER: standard
{1:Mary} {2:had} {3:a} {4:little} {5:lamb}
TOKEN_FILTER: standard
{1:Mary} {2:had} {3:a} {4:little} {5:lamb}
TOKEN_FILTER: lowercase
{1:mary} {2:had} {3:a} {4:little} {5:lamb}
TOKEN_FILTER: porter_stem
{1:mari} {2:had} {3:a} {4:littl} {5:lamb}
TOKEN_FILTER: bigram_filter
{1:mari had} {2:had a} {3:a littl} {4:littl lamb}
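The english_bigrams analyzer above is a custom one. A definition that would produce that filter chain might look like the following sketch; the chain itself is read directly from the elyzer output, but the shingle settings are assumptions:

```json
{
  "settings": {
    "analysis": {
      "analyzer": {
        "english_bigrams": {
          "tokenizer": "standard",
          "filter": ["standard", "lowercase", "porter_stem", "bigram_filter"]
        }
      },
      "filter": {
        "bigram_filter": {
          "type": "shingle",
          "min_shingle_size": 2,
          "max_shingle_size": 2,
          "output_unigrams": false
        }
      }
    }
  }
}
```

Note output_unigrams is set to false because the final step of the elyzer output contains only two-word shingles, no single terms.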
You can also use Elasticsearch kopf, a web-based administration tool for Elasticsearch. Install it by typing this command at your command prompt:
bin/plugin --install lmenezes/elasticsearch-kopf/1.1
I've used Inquisitor in the past to test out tokenizers and filters. It sits on top of the Elasticsearch analyze API and can be used from a web front end.
You should also try another plugin called elasticsearch-extended-analyze, which returns the same token-level information as Solr's analysis page (though without the web front end).
The Analyze API can also be used to test analyzers. It is not as pretty, but it does the job.
Example
GET localhost:9200/_analyze
{
"tokenizer" : "keyword",
"token_filters" : ["lowercase"],
"char_filters" : ["html_strip"],
"text" : "this is a <b>test</b>"
}
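With the keyword tokenizer the whole input comes back as a single token, HTML-stripped and lowercased, so the response should look roughly like this (offsets refer to positions in the original, unstripped string):

```json
{
  "tokens" : [
    {
      "token" : "this is a test",
      "start_offset" : 0,
      "end_offset" : 21,
      "type" : "word",
      "position" : 0
    }
  ]
}
```

Swapping in a different tokenizer or filter chain and re-running the request is a quick way to see exactly which step changes the tokens.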