I need to classify words into their parts of speech: verb, noun, adverb, etc. I used nltk.word_tokenize() to split a sentence into words and nltk.pos_tag() to tag each word with its part of speech.
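For example, this is roughly the tagging step I mean (a minimal sketch; the sentence is just an illustration, and I'm assuming the default NLTK tokenizer and tagger):

    from nltk import word_tokenize, pos_tag

    # run nltk.download('punkt') and nltk.download('averaged_perceptron_tagger') once if the data is missing
    sentence = "The quick brown fox jumps over the lazy dog."
    tokens = word_tokenize(sentence)   # split the sentence into word tokens
    postags = pos_tag(tokens)          # tag each token, e.g. [('The', 'DT'), ('quick', 'JJ'), ...]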
Below is my code:
    from nltk import ne_chunk

    myNE = []
    chunks = ne_chunk(postags, binary=True)  # chunk the POS-tagged tokens; binary=True labels every entity simply "NE"
    for c in chunks:
        if hasattr(c, 'label'):  # NE subtrees have a label; plain (word, tag) tuples do not (NLTK 3 uses .label, older versions used .node)
            myNE.append(' '.join(i[0] for i in c.leaves()))