python nltk keyword extraction from sentence

情话喂你 2021-02-10 03:09

\"First thing we do, let\'s kill all the lawyers.\" - William Shakespeare

Given the quote above, I would like to pull out "kill" and "lawyers" as the keywords.

3 Answers
  • 2021-02-10 03:42

    One simple approach would be to keep stop word lists for NN, VB, etc. These would be high-frequency words that usually don't add much semantic content to a sentence.

    The snippet below shows distinct lists for each type of word token, but you could just as well employ a single stop word list for both verbs and nouns (such as this one).

    # Stop words keyed by POS tag; extend these lists as needed.
    stop_words = dict(
        NNP=['first', 'second'],
        NN=['thing'],
        VBP=['do', 'done'],
        VB=[],
        NNS=['lets', 'things'],
    )


    def filter_stop_words(pos_list):
        # Keep (token, tag) pairs whose lowercased token is not a stop word
        # for that tag; .get(..., []) avoids a KeyError for unlisted tags.
        return [[token, token_type]
                for token, token_type in pos_list
                if token.lower() not in stop_words.get(token_type, [])]
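
    For illustration, here is a minimal usage sketch, assuming NLTK is installed along with its punkt and averaged_perceptron_tagger data:

    import nltk

    sentence = "First thing we do, let's kill all the lawyers."
    # pos_list is a list of (token, POS tag) pairs, e.g. ('thing', 'NN')
    pos_list = nltk.pos_tag(nltk.word_tokenize(sentence))
    # Tokens listed under their tag, such as 'thing' (NN), are filtered out.
    print(filter_stop_words(pos_list))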
    
  • 2021-02-10 03:45

    I don't think there's any perfect answer to this question, because there is no gold set of input/output mappings that everybody will agree upon. You might think the most important words for that sentence are ('kill', 'lawyers'), while someone else might argue the correct answer should be ('first', 'kill', 'lawyers'). If you are able to describe very precisely and completely unambiguously exactly what you want your system to do, your problem will be more than half solved.

    Until then, I can suggest some additional heuristics to help you get what you want.

    Build an idf dictionary from your data, i.e. a mapping from every word to a number that correlates with how rare that word is. Bonus points for doing it for larger n-grams as well; a sketch follows.
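
    A minimal sketch of that idea, assuming a hypothetical corpus given as a list of tokenized documents and the standard idf formula log(N / df):

    import math
    from collections import Counter

    def build_idf(documents):
        # documents: a list of token lists, one per document
        n_docs = len(documents)
        df = Counter()  # document frequency per word
        for doc in documents:
            df.update(set(tok.lower() for tok in doc))
        # Rare words get high idf scores; frequent words get low ones.
        return {word: math.log(n_docs / count) for word, count in df.items()}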

    By combining the idf value of each word in your input sentence with its POS tag, you can answer questions of the form 'What is the rarest verb in this sentence?', 'What is the rarest noun in this sentence?', etc. In any reasonable corpus, 'kill' should be rarer than 'do', and 'lawyers' rarer than 'thing', so maybe finding the rarest noun and rarest verb in a sentence and returning just those two will do the trick for most of your intended use cases. If not, you can always make your algorithm a little more complicated and see if that seems to do the job better.
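
    A sketch of that heuristic, assuming the hypothetical build_idf dictionary from above and NLTK's tag conventions (noun tags start with NN, verb tags with VB):

    import nltk

    def rarest_keywords(sentence, idf):
        pos_list = nltk.pos_tag(nltk.word_tokenize(sentence))

        def rarest(tag_prefix):
            # Pick the matching token with the highest idf (i.e. the rarest);
            # words unseen in the corpus default to 0.0 here for simplicity.
            candidates = [(idf.get(tok.lower(), 0.0), tok)
                          for tok, tag in pos_list if tag.startswith(tag_prefix)]
            return max(candidates)[1] if candidates else None

        return rarest('NN'), rarest('VB')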

    Ways to expand this include trying to identify larger phrases using n-gram idfs, building a full parse tree of the sentence (using, say, the Stanford parser), and identifying patterns within those trees that indicate where the important words tend to sit, etc.
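
    As a lighter-weight alternative to full parsing, here is a sketch using NLTK's regexp chunker with a hypothetical noun-phrase grammar to pull out candidate phrases:

    import nltk

    # NP = optional determiner, any adjectives, then one or more nouns
    grammar = "NP: {<DT>?<JJ>*<NN.*>+}"
    chunker = nltk.RegexpParser(grammar)

    pos_list = nltk.pos_tag(nltk.word_tokenize("let's kill all the lawyers"))
    tree = chunker.parse(pos_list)
    # Subtrees labelled 'NP' are candidate keyword phrases.
    phrases = [' '.join(tok for tok, tag in sub.leaves())
               for sub in tree.subtrees() if sub.label() == 'NP']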

  • 2021-02-10 03:54

    In your case, you can simply use the RAKE package for Python (thanks to Fabian) to get what you need:

    >>> import RAKE
    >>> path = 'path/to/stoplist.txt'  # your stop word list file
    >>> r = RAKE.Rake(path)
    >>> r.run("First thing we do, let's kill all the lawyers")
    [('lawyers', 1.0), ('kill', 1.0), ('thing', 1.0)]
    

    The path can point, for example, to a standard stop word list such as RAKE's SmartStoplist.txt.

    But in general, you are better off using the NLTK package for NLP tasks like this.
