Tokenizing unicode using nltk


Question


I have text files in UTF-8 encoding that contain characters like 'ö', 'ü', etc. I would like to parse the text from these files, but I can't get the tokenizer to work properly. If I use the standard nltk tokenizer:

import nltk

f = open(r'C:\Python26\text.txt', 'r')  # text = 'müsli pöök rääk'
text = f.read()
f.close()
items = text.decode('utf8')
a = nltk.word_tokenize(items)

Output: [u'\ufeff', u'm', u'\xfc', u'sli', u'p', u'\xf6', u'\xf6', u'k', u'r', u'\xe4', u'\xe4', u'k']

The Punkt tokenizer seems to do better:

from nltk.tokenize.punkt import PunktWordTokenizer

f = open(r'C:\Python26\text.txt', 'r')  # text = 'müsli pöök rääk'
text = f.read()
f.close()
items = text.decode('utf8')
a = PunktWordTokenizer().tokenize(items)

Output: [u'\ufeffm\xfcsli', u'p\xf6\xf6k', u'r\xe4\xe4k']

There is still a '\ufeff' before the first token that I can't figure out (not that I can't remove it). What am I doing wrong? Help greatly appreciated.


Answer 1:


It's more likely that the \uFEFF char is part of the content read from the file; I doubt it was inserted by the tokeniser. \uFEFF at the beginning of a file is a deprecated form of Byte Order Mark (BOM). If it appears anywhere else, it is treated as a zero width non-break space.

Was the file written by Microsoft Notepad? From the codecs module docs:

To increase the reliability with which a UTF-8 encoding can be detected, Microsoft invented a variant of UTF-8 (that Python 2.5 calls "utf-8-sig") for its Notepad program: Before any of the Unicode characters is written to the file, a UTF-8 encoded BOM (which looks like this as a byte sequence: 0xef, 0xbb, 0xbf) is written.
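A quick way to see those bytes is to encode a sample string with both codecs yourself (a minimal check, Python 2 shown to match the question):

>>> u'm\xfcsli'.encode('utf-8-sig')
'\xef\xbb\xbfm\xc3\xbcsli'
>>> u'm\xfcsli'.encode('utf-8')
'm\xc3\xbcsli'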

Try reading your file using codecs.open() instead. Note the "utf-8-sig" encoding, which consumes the BOM.

import codecs
import nltk

f = codecs.open(r'C:\Python26\text.txt', 'r', 'utf-8-sig')
text = f.read()
a = nltk.word_tokenize(text)

Experiment:

>>> open("x.txt", "r").read().decode("utf-8")
u'\ufeffm\xfcsli'
>>> import codecs
>>> codecs.open("x.txt", "r", "utf-8-sig").read()
u'm\xfcsli'
>>> 
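If you would rather keep the plain open()/decode('utf8') approach from the question, a minimal alternative sketch (not part of the original answer) is to strip a leading BOM from the decoded text yourself:

import nltk

text = open(r'C:\Python26\text.txt', 'r').read().decode('utf8')
text = text.lstrip(u'\ufeff')  # drop a leading BOM, if one is present
a = nltk.word_tokenize(text)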



Answer 2:


You should make sure that you're passing unicode strings to nltk tokenizers. I get the following identical tokenizations of your string with both tokenizers on my end:

import nltk
nltk.wordpunct_tokenize('müsli pöök rääk'.decode('utf8'))
# output : [u'm\xfcsli', u'p\xf6\xf6k', u'r\xe4\xe4k']

nltk.word_tokenize('müsli pöök rääk'.decode('utf8'))
# output: [u'm\xfcsli', u'p\xf6\xf6k', u'r\xe4\xe4k']
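If your input might arrive as either bytes or unicode, a small guard like the following keeps the tokenizer from ever seeing undecoded byte strings (just a sketch; the helper name tokenize_utf8 is made up):

import nltk

def tokenize_utf8(raw):
    # In Python 2, decode byte strings first; the tokenizers expect unicode.
    if isinstance(raw, str):
        raw = raw.decode('utf-8')
    return nltk.word_tokenize(raw)

print tokenize_utf8('müsli pöök rääk')
# expected output: [u'm\xfcsli', u'p\xf6\xf6k', u'r\xe4\xe4k']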



Answer 3:


The U+FEFF code point is the "ZERO WIDTH NO-BREAK SPACE" character, and it is not considered a space by the re module, so PunktWordTokenizer(), which uses the regex r'\w+|[^\w\s]+' with the UNICODE and DOTALL flags, treats this character as part of a word. If you don't want to remove the character manually, you could use the following tokenizer:

nltk.RegexpTokenizer(u'\w+|[^\w\s\ufeff]+')
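Used on the question's string, that would look something like this (a sketch; the expected output assumes the pattern simply skips the BOM rather than matching it):

import nltk

tokenizer = nltk.RegexpTokenizer(u'\w+|[^\w\s\ufeff]+')
print tokenizer.tokenize(u'\ufeffm\xfcsli p\xf6\xf6k r\xe4\xe4k')
# something like: [u'm\xfcsli', u'p\xf6\xf6k', u'r\xe4\xe4k']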


Source: https://stackoverflow.com/questions/9228202/tokenizing-unicode-using-nltk
