How to use a Lucene Analyzer to tokenize a String?


As far as I know, you have to write the loop yourself. Something like this (taken straight from my source tree):

import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;

public final class LuceneUtils {

    public static List<String> parseKeywords(Analyzer analyzer, String field, String keywords) {

        List<String> result = new ArrayList<String>();
        TokenStream stream  = analyzer.tokenStream(field, new StringReader(keywords));

        try {
            while(stream.incrementToken()) {
                result.add(stream.getAttribute(TermAttribute.class).term());
            }
        }
        catch(IOException e) {
            // not thrown b/c we're using a string reader...
        }

        return result;
    }  
}
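For completeness, a call site might look like the following. This is only a sketch assuming Lucene 3.x (TermAttribute was removed in 4.0) and a StandardAnalyzer; the field name "body" and the sample text are arbitrary:

Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_36);  // any Analyzer works here
List<String> terms = LuceneUtils.parseKeywords(analyzer, "body", "The Quick Brown Fox");
// With the default English stop set, "The" is dropped and the terms come back
// lowercased: [quick, brown, fox]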
Ben McCann

Based on the answer above, here is a version slightly modified to work with Lucene 4.0.

import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public final class LuceneUtil {

  private LuceneUtil() {}

  public static List<String> tokenizeString(Analyzer analyzer, String string) {
    List<String> result = new ArrayList<String>();
    try {
      TokenStream stream  = analyzer.tokenStream(null, new StringReader(string));
      stream.reset();
      while (stream.incrementToken()) {
        result.add(stream.getAttribute(CharTermAttribute.class).toString());
      }
    } catch (IOException e) {
      // not thrown b/c we're using a string reader...
      throw new RuntimeException(e);
    }
    return result;
  }

}

Even better, use try-with-resources! That way you don't have to explicitly call .close(), which is required in more recent versions of the library.

public static List<String> tokenizeString(Analyzer analyzer, String string) {
  List<String> tokens = new ArrayList<>();
  try (TokenStream tokenStream  = analyzer.tokenStream(null, new StringReader(string))) {
    tokenStream.reset();  // required
    while (tokenStream.incrementToken()) {
      tokens.add(tokenStream.getAttribute(CharTermAttribute.class).toString());
    }
  } catch (IOException e) {
    throw new RuntimeException(e);  // shouldn't happen: we're reading from a StringReader
  }
  return tokens;
}
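A quick sanity check for the method above could look like this. This is a sketch assuming a recent Lucene (4.10 or later), where StandardAnalyzer has a no-argument constructor; Analyzer is Closeable, so it can go in a try-with-resources block as well:

try (Analyzer analyzer = new StandardAnalyzer()) {
  List<String> tokens = tokenizeString(analyzer, "The quick brown fox jumped over the lazy dogs");
  System.out.println(tokens);  // prints the lowercased terms; stop-word handling depends on the Lucene version
}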

And the Tokenizer version:

  List<String> tokens = new ArrayList<>();
  try (Tokenizer tokenizer = new HMMChineseTokenizer()) {
    tokenizer.setReader(new StringReader("我说汉语说得很好"));
    tokenizer.reset();  // required before the first incrementToken()
    while (tokenizer.incrementToken()) {
      tokens.add(tokenizer.getAttribute(CharTermAttribute.class).toString());
    }
  } catch (IOException e) {
    throw new RuntimeException(e);  // shouldn't happen: we're reading from a StringReader
  }