Question
I have some text in French that I need to process. To do that, I need to:
- First, tokenize the text into words
- Then lemmatize those words to avoid processing the same root more than once
As far as I can see, the WordNet lemmatizer in NLTK only works with English. I want something that returns "vouloir" when I give it "voudrais", and so on. I also cannot tokenize properly because of the apostrophes. Any pointers would be greatly appreciated. :)
Answer 1:
Here's an old but relevant comment by an NLTK dev. It looks like the most advanced stemmers in NLTK are all English-specific:
The nltk.stem module currently contains 3 stemmers: the Porter stemmer, the Lancaster stemmer, and a Regular-Expression based stemmer. The Porter stemmer and Lancaster stemmer are both English-specific. The regular-expression based stemmer can be customized to use any regular expression you wish. So you should be able to write a simple stemmer for non-English languages using the regexp stemmer. For example, for French:
from nltk import stem
stemmer = stem.Regexp('s$|es$|era$|erez$|ions$| <etc> ')
But you'd need to come up with the language-specific regular expression yourself. For a more advanced stemmer, it would probably be necessary to add a new module. (This might be a good student project.)
For more information on the regexp stemmer:
http://nltk.org/doc/api/nltk.stem.regexp.Regexp-class.html
-Edward
Note: the link he gives is dead; see the current NLTK API documentation for RegexpStemmer instead.
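(Aside: in current NLTK the class is called RegexpStemmer. A minimal sketch of the same idea, using an illustrative and very incomplete suffix list of my own:)
from nltk.stem import RegexpStemmer

# Strip a few common French endings; min=4 leaves very short words untouched.
# The suffix list is only an illustration, not a usable French stemmer.
stemmer = RegexpStemmer('ions$|erez$|era$|es$|s$', min=4)
print(stemmer.stem('voulions'))  # 'voul'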
The more recently added snowball stemmer appears to be able to stem French though. Let's put it to the test:
>>> from nltk.stem.snowball import FrenchStemmer
>>> stemmer = FrenchStemmer()
>>> stemmer.stem('voudrais')
u'voudr'
>>> stemmer.stem('animaux')
u'animal'
>>> stemmer.stem('yeux')
u'yeux'
>>> stemmer.stem('dors')
u'dor'
>>> stemmer.stem('couvre')
u'couvr'
As you can see, some results are a bit dubious.
Not quite what you were hoping for, but I guess it's a start.
Answer 2:
The best solution I found is spaCy; it seems to do the job.
To install:
pip3 install spacy
python3 -m spacy download fr_core_news_md
To use:
import spacy
nlp = spacy.load('fr_core_news_md')
doc = nlp(u"voudrais non animaux yeux dors couvre.")
for token in doc:
    print(token, token.lemma_)
Result:
voudrais vouloir
non non
animaux animal
yeux oeil
dors dor
couvre couvrir
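Since the question also mentions apostrophes: spaCy's French tokenizer splits elided forms such as "j'aime" into "j'" and "aime", so tokenization and lemmatization come out of the same pipeline. A minimal sketch that collects each lemma only once (the sample sentence and the punctuation filter are my own):
import spacy

nlp = spacy.load('fr_core_news_md')
doc = nlp(u"Je voudrais voir les animaux ; j'aime leurs yeux.")

# Keep each lemma once, skipping punctuation tokens.
lemmas = {token.lemma_ for token in doc if not token.is_punct}
print(lemmas)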
Check out the documentation for more details: https://spacy.io/models/fr and https://spacy.io/usage
Answer 3:
Maybe with TreeTagger? I haven't tried it, but this tool can work with French:
http://www.cis.uni-muenchen.de/~schmid/tools/TreeTagger/
http://txm.sourceforge.net/installtreetagger_fr.html
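If you want to call it from Python, the third-party treetaggerwrapper package (my suggestion, not part of this answer; TreeTagger itself must be installed separately) wraps it roughly like this:
import treetaggerwrapper

# Assumes TreeTagger and its French parameter file are already installed.
tagger = treetaggerwrapper.TreeTagger(TAGLANG='fr')
tags = tagger.tag_text(u"Je voudrais voir les animaux.")
for tag in treetaggerwrapper.make_tags(tags):
    # Each tag carries the word, its POS tag, and its lemma.
    print(tag.word, tag.lemma)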
Answer 4:
If you are running machine learning algorithms on your text, you can use character n-grams instead of word tokens. This is not strictly lemmatization, but it detects sequences of n similar letters, and it is surprisingly good at grouping words that share a meaning.
I use sklearn's CountVectorizer(analyzer='char_wb'), and on some texts it is far more effective than a plain bag of words.
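A minimal sketch of that idea (the ngram_range and the sample words are my own illustrative choices, not from the answer):
from sklearn.feature_extraction.text import CountVectorizer

# Character n-grams taken within word boundaries ('char_wb').
vectorizer = CountVectorizer(analyzer='char_wb', ngram_range=(3, 5))
X = vectorizer.fit_transform([u"voudrais", u"voudrions", u"vouloir"])

# Related forms end up sharing features such as ' vo' and 'voud'.
print(X.shape)
print(vectorizer.get_feature_names_out()[:5])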
Answer 5:
If you are doing a text mining project on French text, I recommend the package cltk.
To install:
pip install cltk
from cltk.lemmatize.french.lemma import LemmaReplacer
More details in the cltk documentation.
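A usage sketch from memory of older cltk releases (treat the exact class and call signature as an assumption; newer versions may have reorganized the API):
from cltk.lemmatize.french.lemma import LemmaReplacer

# Older cltk API: lemmatize() takes a list of tokens.
lemmatizer = LemmaReplacer()
print(lemmatizer.lemmatize(['voudrais', 'animaux']))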
Source: https://stackoverflow.com/questions/13131139/lemmatize-french-text