Querying part-of-speech tags with Lucene 7 OpenNLP

Posted by 故事扮演 on 2021-02-20 03:50:40

Question


For fun and learning I am trying to build a part-of-speech (POS) tagger with OpenNLP and Lucene 7.4. The goal is that, once indexed, I can search for a sequence of POS tags and find all sentences that match that sequence. I already have the indexing part working, but I am stuck on the query part. I am aware that Solr may have some functionality for this, and I already checked its code (which was not so self-explanatory after all). But my goal is to understand and implement this in Lucene 7, not in Solr, as I want to be independent of any search engine on top.

Idea

Input sentence 1: The quick brown fox jumped over the lazy dogs.
Applying the Lucene OpenNLP tokenizer results in: [The][quick][brown][fox][jumped][over][the][lazy][dogs][.]
Next, applying Lucene OpenNLP POS tagging results in: [DT][JJ][JJ][NN][VBD][IN][DT][JJ][NNS][.]

Input sentence 2: Give it to me, baby!
Applying the Lucene OpenNLP tokenizer results in: [Give][it][to][me][,][baby][!]
Next, applying Lucene OpenNLP POS tagging results in: [VB][PRP][TO][PRP][,][UH][.]

Query: JJ NN VBD matches part of sentence 1, so sentence 1 should be returned. (At this point I am only interested in exact matches; let's leave aside partial matches, wildcards, etc.)

Indexing

First, I created my own class com.example.OpenNLPAnalyzer:

public class OpenNLPAnalyzer extends Analyzer {
  @Override
  protected TokenStreamComponents createComponents(String fieldName) {
    try {
      ResourceLoader resourceLoader = new ClasspathResourceLoader(ClassLoader.getSystemClassLoader());

      TokenizerModel tokenizerModel = OpenNLPOpsFactory.getTokenizerModel("en-token.bin", resourceLoader);
      NLPTokenizerOp tokenizerOp = new NLPTokenizerOp(tokenizerModel);

      SentenceModel sentenceModel = OpenNLPOpsFactory.getSentenceModel("en-sent.bin", resourceLoader);
      NLPSentenceDetectorOp sentenceDetectorOp = new NLPSentenceDetectorOp(sentenceModel);

      Tokenizer source = new OpenNLPTokenizer(
              AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY, sentenceDetectorOp, tokenizerOp);

      POSModel posModel = OpenNLPOpsFactory.getPOSTaggerModel("en-pos-maxent.bin", resourceLoader);
      NLPPOSTaggerOp posTaggerOp = new NLPPOSTaggerOp(posModel);

      // Perhaps we should also use a lower-case filter here?

      TokenFilter posFilter = new OpenNLPPOSFilter(source, posTaggerOp);

      // Very important: the POS tags are not indexed as terms, so we need to
      // store them as payloads; otherwise we cannot search on them.
      TypeAsPayloadTokenFilter payloadFilter = new TypeAsPayloadTokenFilter(posFilter);

      return new TokenStreamComponents(source, payloadFilter);
    }
    catch (IOException e) {
      throw new RuntimeException(e);
    }
  }
}

Note that we are wrapping a TypeAsPayloadTokenFilter around the OpenNLPPOSFilter. This means our POS tags will be indexed as payloads, and our query, whatever it ends up looking like, will have to search on payloads as well.
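To make the consequence of that choice concrete, here is a toy, Lucene-free model of the resulting index (plain Java; class and method names are my own, not Lucene's). The key point it illustrates: the indexed terms are still the words, and the POS tag only rides along as a payload attached to each posting, so the tag itself is not directly addressable as a term.

```java
import java.util.*;

public class PayloadModel {
    // Toy inverted index: term -> postings of {position, payload}.
    // This mirrors what TypeAsPayloadTokenFilter produces: the indexed
    // term is still the WORD; the POS tag only rides along as a payload.
    static Map<String, List<String[]>> buildIndex(String[] words, String[] tags) {
        Map<String, List<String[]>> index = new HashMap<>();
        for (int pos = 0; pos < words.length; pos++) {
            index.computeIfAbsent(words[pos], k -> new ArrayList<>())
                 .add(new String[]{String.valueOf(pos), tags[pos]});
        }
        return index;
    }

    public static void main(String[] args) {
        String[] words = {"The", "quick", "brown", "fox"};
        String[] tags  = {"DT",  "JJ",    "JJ",    "NN"};
        Map<String, List<String[]>> index = buildIndex(words, tags);

        // A term query for the tag itself finds nothing: "JJ" is not a term.
        System.out.println(index.containsKey("JJ"));      // false
        // The tag is only reachable through the word's postings.
        System.out.println(index.get("quick").get(0)[1]); // JJ
    }
}
```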

Querying

This is where I am stuck. I have no clue how to query on payloads, and whatever I try does not work. Note that I am using Lucene 7; it seems the payload query API has changed several times in older versions, and documentation is extremely scarce. It is not even clear what the proper field name to query is now: "word", "type", or something else? For example, I tried this code, which does not return any search results:

    // Step 1: Indexing
    final String body = "The quick brown fox jumped over the lazy dogs.";
    Directory index = new RAMDirectory();
    OpenNLPAnalyzer analyzer = new OpenNLPAnalyzer();
    IndexWriterConfig indexWriterConfig = new IndexWriterConfig(analyzer);
    IndexWriter writer = new IndexWriter(index, indexWriterConfig);
    Document document = new Document();
    document.add(new TextField("body", body, Field.Store.YES));
    writer.addDocument(document);
    writer.close();


    // Step 2: Querying
    final int topN = 10;
    DirectoryReader reader = DirectoryReader.open(index);
    IndexSearcher searcher = new IndexSearcher(reader);

    final String fieldName = "body"; // What is the correct field name here? "body", or "type", or "word" or anything else?
    final String queryText = "JJ";
    Term term = new Term(fieldName, queryText);
    SpanQuery match = new SpanTermQuery(term);
    BytesRef pay = new BytesRef("type"); // Don't understand what to put here as an argument
    SpanPayloadCheckQuery query = new SpanPayloadCheckQuery(match, Collections.singletonList(pay));

    System.out.println(query.toString());

    TopDocs topDocs = searcher.search(query, topN);

Any help is very much appreciated here.


Answer 1:


Why don't you use TypeAsSynonymFilter instead of TypeAsPayloadTokenFilter and just run a normal query? So in your Analyzer:

// ... inside createComponents(), after building posTaggerOp:
TokenFilter posFilter = new OpenNLPPOSFilter(source, posTaggerOp);
TypeAsSynonymFilter typeAsSynonymFilter = new TypeAsSynonymFilter(posFilter);
return new TokenStreamComponents(source, typeAsSynonymFilter);
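To see what this filter does to the token stream, here is a minimal, Lucene-free sketch (plain Java; the class and method names are mine, and the stream is simplified to "position term" strings). The essential behavior it models: each token's type is injected as an extra token at the same position, i.e. with a position increment of 0, so the word and its POS tag become interchangeable in positional queries.

```java
import java.util.*;

public class TypeAsSynonymSketch {
    // Simulates TypeAsSynonymFilter: each token's type is injected as an
    // extra token at the SAME position (position increment 0), so the word
    // and its POS tag sit side by side in the stream.
    static List<String> withTypeSynonyms(String[] words, String[] types) {
        List<String> stream = new ArrayList<>();
        for (int pos = 0; pos < words.length; pos++) {
            stream.add(pos + " " + words[pos]);  // the original word
            stream.add(pos + " " + types[pos]);  // its type, posIncrement = 0
        }
        return stream;
    }

    public static void main(String[] args) {
        String[] words = {"quick", "brown", "fox", "jumped"};
        String[] tags  = {"JJ",    "JJ",    "NN",  "VBD"};
        withTypeSynonyms(words, tags).forEach(System.out::println);
        // 0 quick / 0 JJ / 1 brown / 1 JJ / 2 fox / 2 NN / 3 jumped / 3 VBD
    }
}
```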

And on the indexing side:

static Directory index() throws Exception {
  Directory index = new RAMDirectory();
  OpenNLPAnalyzer analyzer = new OpenNLPAnalyzer();
  IndexWriterConfig indexWriterConfig = new IndexWriterConfig(analyzer);
  IndexWriter writer = new IndexWriter(index, indexWriterConfig);
  writer.addDocument(doc("The quick brown fox jumped over the lazy dogs."));
  writer.addDocument(doc("Give it to me, baby!"));
  writer.close();

  return index;
}

static Document doc(String body){
  Document document = new Document();
  document.add(new TextField(FIELD, body, Field.Store.YES));
  return document;
}

And on the searching side:

static void search(Directory index, String searchPhrase) throws Exception {
  final int topN = 10;
  DirectoryReader reader = DirectoryReader.open(index);
  IndexSearcher searcher = new IndexSearcher(reader);

  QueryParser parser = new QueryParser(FIELD, new WhitespaceAnalyzer());
  Query query = parser.parse(searchPhrase);
  System.out.println(query);

  TopDocs topDocs = searcher.search(query, topN);
  System.out.printf("%s => %d hits\n", searchPhrase, topDocs.totalHits);
  for(ScoreDoc scoreDoc: topDocs.scoreDocs){
    Document doc = searcher.doc(scoreDoc.doc);
    System.out.printf("\t%s\n", doc.get(FIELD));
  }
}

And then use them like this:

public static void main(String[] args) throws Exception {
  Directory index = index();
  search(index, "\"JJ NN VBD\"");    // search the sequence of POS tags
  search(index, "\"brown fox\"");    // search a phrase
  search(index, "\"fox brown\"");    // search a phrase (no hits)
  search(index, "baby");             // search a word
  search(index, "\"TO PRP\"");       // search the sequence of POS tags
}

The result looks like this:

body:"JJ NN VBD"
"JJ NN VBD" => 1 hits
    The quick brown fox jumped over the lazy dogs.
body:"brown fox"
"brown fox" => 1 hits
    The quick brown fox jumped over the lazy dogs.
body:"fox brown"
"fox brown" => 0 hits
body:baby
baby => 1 hits
    Give it to me, baby!
body:"TO PRP"
"TO PRP" => 1 hits
    Give it to me, baby!
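The reason the phrase query over tags works is the position stacking shown above: each tag occupies the same position as its word, so a standard phrase query only needs its terms to occur at consecutive positions. A toy, Lucene-free matcher sketching this (plain Java; hypothetical names, not Lucene's actual phrase-scoring machinery):

```java
import java.util.*;

public class PhraseOverPositions {
    // Terms at each position: the word plus its POS tag, as produced by
    // stacking the tag at the word's position (posIncrement = 0).
    static List<Set<String>> positions(String[] words, String[] tags) {
        List<Set<String>> out = new ArrayList<>();
        for (int i = 0; i < words.length; i++)
            out.add(new HashSet<>(Arrays.asList(words[i], tags[i])));
        return out;
    }

    // Toy exact phrase match: each phrase term must occur at consecutive positions.
    static boolean phraseMatches(List<Set<String>> pos, String... phrase) {
        for (int start = 0; start + phrase.length <= pos.size(); start++) {
            boolean ok = true;
            for (int j = 0; j < phrase.length; j++)
                if (!pos.get(start + j).contains(phrase[j])) { ok = false; break; }
            if (ok) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        String[] words = {"The", "quick", "brown", "fox", "jumped"};
        String[] tags  = {"DT",  "JJ",    "JJ",    "NN",  "VBD"};
        List<Set<String>> pos = positions(words, tags);
        System.out.println(phraseMatches(pos, "JJ", "NN", "VBD")); // true
        System.out.println(phraseMatches(pos, "fox", "brown"));    // false
        System.out.println(phraseMatches(pos, "brown", "fox"));    // true
    }
}
```

One side effect worth noting: because words and tags share positions, mixed phrases such as "JJ fox" would also match under this model, which may or may not be what you want.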


Source: https://stackoverflow.com/questions/52353452/querying-part-of-speech-tags-with-lucene-7-opennlp
