Stemming some plurals with wordnet lemmatizer doesn't work

Submitted by 萝らか妹 on 2019-11-28 10:29:42

Question


Hi, I have a problem with NLTK (2.0.4): I'm trying to lemmatize the words 'men' and 'teeth', but it doesn't seem to work. Here's my code:

import nltk
from nltk.corpus import wordnet as wn
from nltk.stem.wordnet import WordNetLemmatizer

lmtzr = WordNetLemmatizer()
words_raw = "men teeth"
words = nltk.word_tokenize(words_raw)
for word in words:
    print 'WordNet Lemmatizer NOUN: ' + lmtzr.lemmatize(word, wn.NOUN)

This should print 'man' and 'tooth' but instead it prints 'men' and 'teeth'.

Any solutions?


Answer 1:


I found the solution! I checked the file wordnet.py in the folder /usr/local/lib/python2.6/dist-packages/nltk/corpus/reader and noticed that the function _morphy(self, form, pos) returns a list of stemmed words. So I tried to test _morphy:

import nltk
from nltk.corpus import wordnet as wn

words_raw = "men teeth books"
words = nltk.word_tokenize(words_raw)
for word in words:
    print wn._morphy(word, wn.NOUN)

This program prints ['men', 'man'], ['teeth', 'tooth'] and ['book']!

The explanation of why lmtzr.lemmatize() prints only one element of the list can be found in the function lemmatize, contained in the file wordnet.py in the folder /usr/local/lib/python2.6/dist-packages/nltk/stem:

def lemmatize(self, word, pos=NOUN):
    lemmas = wordnet._morphy(word, pos)
    return min(lemmas, key=len) if lemmas else word

I assume that it returns only the shortest word in the list, and if the candidates are of equal length it returns the first one; for example 'men' rather than 'man', and 'teeth' rather than 'tooth'.
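That tie-breaking behaviour can be reproduced with plain Python's min(), using the candidate lists _morphy printed above (a standalone sketch, no NLTK required):

```python
# min() with key=len returns the first element among equal-length ties,
# which is exactly why lemmatize() hands back the inflected form here.
lemmas_men = ['men', 'man']        # _morphy's candidates for 'men'
lemmas_teeth = ['teeth', 'tooth']  # _morphy's candidates for 'teeth'
lemmas_books = ['book']            # _morphy's candidates for 'books'

print(min(lemmas_men, key=len))    # men   (3 vs 3 letters: first wins)
print(min(lemmas_teeth, key=len))  # teeth (5 vs 5 letters: first wins)
print(min(lemmas_books, key=len))  # book
```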




Answer 2:


There is nothing wrong with the WordNetLemmatizer per se; it just can't handle irregular forms well enough. You could try this 'hack' and look for the closest of the lemma_names across the word's synsets:

>>> from itertools import chain
>>> from difflib import get_close_matches as gcm
>>> from nltk.corpus import wordnet
>>> from nltk.stem import WordNetLemmatizer
>>> wnl = WordNetLemmatizer()
>>> word = "teeth"
>>> wnl.lemmatize(word)
'teeth'
>>> wnlemmas = list(set(chain(*[i.lemma_names() for i in wordnet.synsets(word)])))
>>> [i for i in gcm(word, wnlemmas) if i != word]
[u'tooth']

>>> word = 'men'
>>> wnlemmas = list(set(chain(*[i.lemma_names() for i in wordnet.synsets(word)])))
>>> gcm(word, wnlemmas)
[u'men', u'man']
>>> [i for i in gcm(word, wnlemmas) if i != word]
[u'man']
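The hack can be wrapped in a small helper. closest_lemma below is a hypothetical name, and the lemma-name lists are hard-coded stand-ins for what wordnet.synsets(...) would return, so the sketch runs without NLTK installed:

```python
from difflib import get_close_matches

def closest_lemma(word, lemma_names):
    """Pick the closest lemma name that differs from the input word;
    fall back to the word itself when nothing is close enough."""
    candidates = [m for m in get_close_matches(word, lemma_names) if m != word]
    return candidates[0] if candidates else word

# Hard-coded lemma names standing in for wordnet.synsets(word) output:
print(closest_lemma('teeth', ['teeth', 'tooth', 'dentition']))      # tooth
print(closest_lemma('men', ['men', 'man', 'world', 'adult_male']))  # man
print(closest_lemma('zzz', ['teeth', 'tooth']))                     # zzz (no match)
```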

However, the fact that wordnet.synsets('men') can fetch the right synsets while WordNetLemmatizer().lemmatize('men') can't suggests that something is also missing from the WordNetLemmatizer code.


To extend the exception list, see also: Python NLTK Lemmatization of the word 'further' with wordnet
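One low-tech way to get the effect of an extended exception list, without touching NLTK's data files, is to consult your own irregular-form table before falling back to the lemmatizer. This is a minimal sketch: IRREGULAR_NOUNS and lemmatize_with_exceptions are made-up names, not NLTK API.

```python
# A hand-maintained irregular-plural table, checked before any rule-based stemming.
IRREGULAR_NOUNS = {'men': 'man', 'teeth': 'tooth', 'feet': 'foot', 'mice': 'mouse'}

def lemmatize_with_exceptions(word, fallback=lambda w: w):
    # Exceptions win; everything else goes to the fallback lemmatizer
    # (the identity function here; in practice wnl.lemmatize would slot in).
    return IRREGULAR_NOUNS.get(word, fallback(word))

print(lemmatize_with_exceptions('men'))    # man
print(lemmatize_with_exceptions('mice'))   # mouse
print(lemmatize_with_exceptions('books'))  # books (identity fallback)
```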



Source: https://stackoverflow.com/questions/22333392/stemming-some-plurals-with-wordnet-lemmatizer-doesnt-work
