Question
I am looking for a class or method that takes a long string of many hundreds of words, tokenizes it, removes the stop words, and stems the rest for use in an IR system.
For example:
"The big fat cat, said 'your funniest guy i know' to the kangaroo..."
the tokenizer would remove the punctuation and return an ArrayList of words
the stop word remover would remove words like "the", "to", etc.
the stemmer would reduce each word to its 'root'; for example, 'funniest' would become 'funny'
Many thanks in advance.
Answer 1:
AFAIK Lucene can do what you want. With StandardAnalyzer and StopAnalyzer you can do the stop word removal. In combination with the Lucene contrib-snowball project (which includes work from Snowball) you can do the stemming too; see the sketch at the end of this answer.
But for stemming, also consider this answer to the question "Stemming algorithm that produces real words".
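For concreteness, here is a minimal sketch of that approach. It uses EnglishAnalyzer, which chains a standard tokenizer, an English stop-word filter, and a Porter stemmer in one analyzer (the StandardAnalyzer/StopAnalyzer/Snowball combination mentioned above can be wired up the same way), and it assumes a recent Lucene release whose analyzer constructors no longer take a Version argument:

import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.en.EnglishAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class LuceneTermExtractor {
    // tokenize, drop English stop words, and stem in a single analysis pass
    public static List<String> extractTerms(String text) throws Exception {
        List<String> terms = new ArrayList<>();
        try (Analyzer analyzer = new EnglishAnalyzer();
             TokenStream stream = analyzer.tokenStream("body", new StringReader(text))) {
            CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
            stream.reset();
            while (stream.incrementToken()) {
                terms.add(term.toString());
            }
            stream.end();
        }
        return terms;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(extractTerms(
                "The big fat cat, said 'your funniest guy i know' to the kangaroo..."));
    }
}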
Answer 2:
These are standard requirements in Natural Language Processing, so I would look in such toolkits. Since you require Java, I'd start with OpenNLP: http://opennlp.sourceforge.net/
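For the tokenization step, a minimal OpenNLP sketch might look like the following; it uses the rule-based SimpleTokenizer, and stop-word removal and stemming would still be handled by separate components:

import java.util.Arrays;
import opennlp.tools.tokenize.SimpleTokenizer;

public class OpenNlpTokenizeExample {
    public static void main(String[] args) {
        // SimpleTokenizer splits on character classes, so punctuation comes out as separate tokens
        String[] tokens = SimpleTokenizer.INSTANCE.tokenize(
                "The big fat cat, said 'your funniest guy i know' to the kangaroo...");
        System.out.println(Arrays.asList(tokens));
    }
}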
If you can look at other languages, there is also NLTK (Python).
Note that "your funniest guy i know" is not standard syntax and this makes it harder to process than "You're the funniest guy I know". Not impossible, but much harder. I don't know of any system that would equate "your" to "you are".
Answer 3:
I have dealt with this issue on a number of tasks I have worked on, so let me give a tokenizer suggestion. As I do not see it given directly in another answer, I often use edu.northwestern.at.utils.corpuslinguistics.tokenizer.* as my family of tokenizers. In a number of cases I have used the PennTreebankTokenizer class. Here is how you use it:
// classes from edu.northwestern.at.utils.corpuslinguistics.tokenizer
WordTokenizer wordTokenizer = new PennTreebankTokenizer();
List<String> words = wordTokenizer.extractWords(text);
The link to this work is here. Just a disclaimer, I have no affiliation with Northwestern, the group, or the work they do. I am just someone who uses the code occasionally.
Answer 4:
Here is a comprehensive list of NLP tools. Sometimes it makes sense to build these yourself, as they will be lighter and you will have more control over the inner workings: use a simple regular expression for tokenization (see the sketch at the end of this answer). For stop words, just load the list below, or some other list, into a HashSet:
common-english-words.txt
Here is one of many Java implementations of the Porter stemmer.
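Putting those pieces together, the do-it-yourself route might look like the sketch below. The tiny inline stop list is only a placeholder for common-english-words.txt, and the stemmer mentioned in the comment is hypothetical, standing in for whichever Porter implementation you pick:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SimplePreprocessor {
    // placeholder stop list; in practice load common-english-words.txt into the set
    private static final Set<String> STOP_WORDS =
            new HashSet<>(Arrays.asList("the", "to", "a", "i", "your"));

    private static final Pattern WORD = Pattern.compile("[a-zA-Z]+");

    public static List<String> preprocess(String text) {
        List<String> terms = new ArrayList<>();
        // tokenize: keep runs of letters, dropping punctuation, and lower-case everything
        Matcher m = WORD.matcher(text);
        while (m.find()) {
            String token = m.group().toLowerCase();
            if (!STOP_WORDS.contains(token)) {
                // a Porter stemmer call would go here, e.g. terms.add(stemmer.stem(token))
                terms.add(token);
            }
        }
        return terms;
    }

    public static void main(String[] args) {
        System.out.println(preprocess(
                "The big fat cat, said 'your funniest guy i know' to the kangaroo..."));
    }
}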
Source: https://stackoverflow.com/questions/1664489/tokenizer-stop-word-removal-stemming-in-java