How to use a Lucene Analyzer to tokenize a String?


Question


Is there a simple way I could use any subclass of Lucene's Analyzer to parse/tokenize a String?

Something like:

String to_be_parsed = "car window seven";
Analyzer analyzer = new StandardAnalyzer(...);
List<String> tokenized_string = analyzer.analyze(to_be_parsed);

Answer 1:


As far as I know, you have to write the loop yourself. Something like this (taken straight from my source tree):

import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;

public final class LuceneUtils {

    public static List<String> parseKeywords(Analyzer analyzer, String field, String keywords) {

        List<String> result = new ArrayList<String>();
        TokenStream stream  = analyzer.tokenStream(field, new StringReader(keywords));

        try {
            while (stream.incrementToken()) {
                // TermAttribute was replaced by CharTermAttribute in Lucene 4.0
                result.add(stream.getAttribute(TermAttribute.class).term());
            }
        }
        catch (IOException e) {
            // not thrown b/c we're using a string reader...
        }

        return result;
    }
}



Answer 2:


Based on the answer above, this is slightly modified to work with Lucene 4.0.

import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public final class LuceneUtil {

  private LuceneUtil() {}

  public static List<String> tokenizeString(Analyzer analyzer, String string) {
    List<String> result = new ArrayList<String>();
    try {
      TokenStream stream = analyzer.tokenStream(null, new StringReader(string));
      stream.reset();
      while (stream.incrementToken()) {
        result.add(stream.getAttribute(CharTermAttribute.class).toString());
      }
      stream.end();
      stream.close();  // required in Lucene 4.0, since analyzers reuse token streams
    } catch (IOException e) {
      // not expected with a StringReader, but rethrow just in case
      throw new RuntimeException(e);
    }
    return result;
  }

}
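
For reference, a quick usage sketch of the helper above (assuming Lucene is on the classpath and a version where `StandardAnalyzer` has a no-arg constructor; with its default configuration it lowercases tokens and splits on word boundaries):

```java
import java.util.List;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

public class LuceneUtilDemo {
  public static void main(String[] args) {
    // StandardAnalyzer lowercases and splits on word boundaries
    Analyzer analyzer = new StandardAnalyzer();
    List<String> tokens = LuceneUtil.tokenizeString(analyzer, "Car Window seven");
    System.out.println(tokens);  // [car, window, seven]
    analyzer.close();
  }
}
```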



Answer 3:


Even better, use try-with-resources! This way you don't have to explicitly call .close(), which is required in more recent versions of the library.

public static List<String> tokenizeString(Analyzer analyzer, String string) {
  List<String> tokens = new ArrayList<>();
  try (TokenStream tokenStream = analyzer.tokenStream(null, new StringReader(string))) {
    tokenStream.reset();  // required
    while (tokenStream.incrementToken()) {
      tokens.add(tokenStream.getAttribute(CharTermAttribute.class).toString());
    }
  } catch (IOException e) {
    throw new RuntimeException(e);  // shouldn't happen with a StringReader
  }
  return tokens;
}

And the Tokenizer version:

  List<String> tokens = new ArrayList<>();
  try (Tokenizer tokenizer = new HMMChineseTokenizer()) {
    tokenizer.setReader(new StringReader("我说汉语说得很好"));
    tokenizer.reset();
    while (tokenizer.incrementToken()) {
      tokens.add(tokenizer.getAttribute(CharTermAttribute.class).toString());
    }
  } catch (IOException e) {
    throw new RuntimeException(e);  // shouldn't happen with a StringReader
  }


Source: https://stackoverflow.com/questions/6334692/how-to-use-a-lucene-analyzer-to-tokenize-a-string
