How to avoid NLTK's sentence tokenizer splitting on abbreviations?


I think adding the lower-case form u.s.a to the abbreviations list will work fine for you. Try this:

from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktParameters

punkt_param = PunktParameters()
# abbreviations must be lower case and given without the trailing period
abbreviation = ['u.s.a', 'fig']
punkt_param.abbrev_types = set(abbreviation)
tokenizer = PunktSentenceTokenizer(punkt_param)
tokenizer.tokenize('Fig. 2 shows a U.S.A. map.')

It returns this to me:

['Fig. 2 shows a U.S.A. map.']
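
If you would rather keep the statistics of the pretrained English Punkt model and just extend its abbreviation list, here is a minimal sketch of the same idea (it assumes the punkt data has already been fetched with nltk.download('punkt'); _params is Punkt's internal parameter object, so this relies on an implementation detail):

import nltk

# load the pretrained English Punkt sentence tokenizer
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
# extend its abbreviation set: lower case, no trailing period
tokenizer._params.abbrev_types.update(['u.s.a', 'fig'])
print(tokenizer.tokenize('Fig. 2 shows a U.S.A. map.'))
# ['Fig. 2 shows a U.S.A. map.']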