Stanford Dependency Parser Setup and NLTK

野的像风 2020-12-16 06:25

So I got the \"standard\" Stanford Parser to work thanks to danger89\'s answers to this previous post, Stanford Parser and NLTK.

However, I am now trying to get the StanfordDependencyParser to work as well.

2 Answers
  • 2020-12-16 06:48

    The StanfordDependencyParser API is a new class introduced in NLTK version 3.1.

    Ensure that you have the latest NLTK available either through pip

    pip install -U nltk
    

    or through your Linux package manager, e.g.:

    sudo apt-get install python-nltk
    

    or on Windows, download and install NLTK from https://pypi.python.org/pypi/nltk; it should overwrite your previous NLTK version.
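
    You can confirm that the upgrade took effect by checking the installed version (a minimal sketch; nltk.__version__ is the standard version attribute):

    import nltk
    print(nltk.__version__)  # should report 3.1 or newer for StanfordDependencyParser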

    Then you can use the API as shown in the documentation:

    from nltk.parse.stanford import StanfordDependencyParser
    dep_parser = StanfordDependencyParser(model_path="edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz")
    print([parse.tree() for parse in dep_parser.raw_parse("The quick brown fox jumps over the lazy dog.")])
    

    [out]:

    [Tree('jumps', [Tree('fox', ['The', 'quick', 'brown']), Tree('dog', ['over', 'the', 'lazy'])])]
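
    Each parse returned by raw_parse() is a DependencyGraph, so besides .tree() you can also list the labelled dependency triples (a small usage sketch reusing the same parser and example sentence):

    result = dep_parser.raw_parse("The quick brown fox jumps over the lazy dog.")
    for parse in result:
        # each triple is ((governor, tag), relation, (dependent, tag))
        print(list(parse.triples()))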
    

    (Note: Make sure the paths to the jars and your os.environ settings are correct; on Windows paths look like something\\something\\some\\path, on Unix like something/something/some/path.)
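
    For example, you might point the relevant environment variables at the directory containing the parser jars before constructing the parser (a sketch with placeholder paths; STANFORD_PARSER and STANFORD_MODELS are the variables the NLTK Stanford wrappers look up, but adjust the paths to your own install):

    import os
    # placeholder paths -- substitute the directory where you unpacked the Stanford parser
    os.environ['STANFORD_PARSER'] = '/path/to/stanford-parser-full'
    os.environ['STANFORD_MODELS'] = '/path/to/stanford-parser-full'

    from nltk.parse.stanford import StanfordDependencyParser
    dep_parser = StanfordDependencyParser(model_path="edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz")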

    See also https://github.com/nltk/nltk/wiki/Installing-Third-Party-Software#stanford-tagger-ner-tokenizer-and-parser, and if you need a TL;DR solution, see https://github.com/alvations/nltk_cli

  • 2020-12-16 06:48

    If the only thing you changed was 'StanfordDependencyParser' and the error states "'module' object has no attribute 'StanfordDependencyParser'", I would assume that StanfordDependencyParser was the wrong thing to change it to. Have you copied all of the code that you linked to verbatim?
