Question
I have the following code for taking a word from an input text file and printing the synonyms, definitions and example sentences for the word using WordNet. It separates the synonyms from the synset based on part of speech, i.e., the synonyms that are verbs and the synonyms that are adjectives are printed separately.
For example, for the word flabbergasted the synonyms are (1) flabbergast, boggle, bowl over, which are verbs, and (2) dumbfounded, dumfounded, flabbergasted, stupefied, thunderstruck, dumbstruck, dumbstricken, which are adjectives.
How do I print the part of speech along with the synonyms? The code I have so far is below:
import nltk
from nltk.corpus import wordnet as wn

tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
fp = open('sample.txt', 'r')
data = fp.read()
tokens = nltk.wordpunct_tokenize(data)
text = nltk.Text(tokens)
words = [w.lower() for w in text]
for a in words:
    print a
syns = wn.synsets(a)
for s in syns:
    print
    print "definition:", s.definition
    print "synonyms:"
    for l in s.lemmas:
        print l.name
    print "examples:"
    for b in s.examples:
        print b
    print
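The separation the question describes can be sketched without WordNet at all, using a hypothetical list of (synonym, pos) pairs like the ones in the flabbergasted example; this sketch uses Python 3 syntax:

```python
from collections import defaultdict

# Hypothetical (synonym, pos) pairs, as in the flabbergasted example above
pairs = [
    ("flabbergast", "v"), ("boggle", "v"), ("bowl_over", "v"),
    ("dumbfounded", "a"), ("flabbergasted", "a"), ("stupefied", "a"),
]

# Group the synonyms by their part-of-speech tag
by_pos = defaultdict(list)
for synonym, pos in pairs:
    by_pos[pos].append(synonym)

for pos, synonyms in sorted(by_pos.items()):
    print(pos, ":", ", ".join(synonyms))
```

Printed this way, each part of speech heads its own line of synonyms, which is the output shape the question is after.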
Answer 1:
Looks like you messed up your indentation:
for a in words:
    print a
syns = wn.synsets(a)
It seems like syns = wn.synsets(a) should be inside the for loop over words so that it runs for every word:
for w in words:
    print w
    syns = wn.synsets(w)
    for s in syns:
        print
        print "definition:", s.definition
        print "synonyms:"
        for l in s.lemmas:
            print l.name
        print "examples:"
        for b in s.examples:
            print b
        print
Answer 2:
A lemma has a synset attribute, which has its own part of speech in its pos attribute. So, if we have a lemma l, we can access its part of speech like this:
>>> l = wn.lemma('gladden.v.01.joy')
>>> l.synset.pos
'v'
More generally, we can extend this into a loop that reads through your file. I'm using the with statement because it closes the file cleanly once the loop is completed.
>>> with open('sample.txt') as f:
...     raw = f.read()
...     for sentence in nltk.sent_tokenize(raw):
...         sentence = nltk.wordpunct_tokenize(sentence)
...         for word in sentence:
...             for synset in wn.synsets(word):
...                 for lemma in synset.lemmas:
...                     print lemma.name, lemma.synset.pos
...
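The traversal in that loop (word → synsets → lemmas → synset POS) can also be exercised without the WordNet corpus by standing in small records for synsets; the LEXICON table below is hypothetical, and the sketch uses Python 3 syntax:

```python
from collections import namedtuple

# Minimal stand-in for a WordNet synset: a POS tag plus its lemma names
Synset = namedtuple("Synset", ["pos", "lemma_names"])

# Hypothetical lookup table playing the role of wn.synsets(word)
LEXICON = {
    "joy": [
        Synset("n", ["joy", "joyousness"]),
        Synset("v", ["gladden", "joy"]),
    ],
}

def lemma_pos_pairs(word):
    # Mirror the nested loop above: every lemma reports its synset's POS
    return [(name, syn.pos)
            for syn in LEXICON.get(word, [])
            for name in syn.lemma_names]

print(lemma_pos_pairs("joy"))
```

Note that the same spelling ("joy") surfaces under two parts of speech, which is exactly why each lemma has to carry its synset's POS rather than a single word-level tag.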
If you want to make sure that you only choose lemmas with the same part of speech as the word you are currently processing, then you will need to identify that word's part of speech too:
>>> import nltk
>>> from nltk.corpus import wordnet as wn
>>> with open('sample.txt') as f:
...     raw = f.read()
...     for sentence in nltk.sent_tokenize(raw):
...         sentence = nltk.pos_tag(nltk.wordpunct_tokenize(sentence))
...         for word, pos in sentence:
...             print word, pos
I'll leave reconciling these two as an exercise for the reader.
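One way to reconcile the two, sketched in Python 3 syntax: nltk.pos_tag emits Penn Treebank tags such as NNS or VBD, while WordNet uses single letters, so a small mapping function (a helper written for this example, not part of NLTK's API) can translate between them:

```python
# Map a Penn Treebank tag (as produced by nltk.pos_tag) to a WordNet
# POS letter; tags outside the four open word classes map to None.
def treebank_to_wordnet(tag):
    if tag.startswith("J"):
        return "a"   # adjectives: JJ, JJR, JJS
    if tag.startswith("V"):
        return "v"   # verbs: VB, VBD, VBG, VBN, VBP, VBZ
    if tag.startswith("N"):
        return "n"   # nouns: NN, NNS, NNP, NNPS
    if tag.startswith("RB"):
        return "r"   # adverbs: RB, RBR, RBS
    return None

print(treebank_to_wordnet("NNS"))  # n
print(treebank_to_wordnet("VBD"))  # v
```

The returned letter can then restrict the WordNet lookup, e.g. wn.synsets(word, pos='v'), or be compared against each lemma's synset POS while printing.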
Answer 3:
Simply call pos() on a synset. To list all the POS tags across a word's synsets:
>>> from nltk.corpus import wordnet as wn
>>> syns = wn.synsets('dog')
>>> set([x.pos() for x in syns])
{'n', 'v'}
Unfortunately this doesn't seem to be documented anywhere except the source code, which also shows the other methods that can be called on a synset.
Synset attributes, accessible via methods with the same name:

name: The canonical name of this synset, formed using the first lemma of this synset. Note that this may be different from the name passed to the constructor if that string used a different lemma to identify the synset.
pos: The synset's part of speech, matching one of the module-level attributes ADJ, ADJ_SAT, ADV, NOUN or VERB.
lemmas: A list of the Lemma objects for this synset.
definition: The definition for this synset.
examples: A list of example strings for this synset.
offset: The offset in the WordNet dict file of this synset.
lexname: The name of the lexicographer file containing this synset.
Source: https://stackoverflow.com/questions/5966773/printing-the-part-of-speech-along-with-the-synonyms-of-the-word