RegEx Tokenizer to split a text into words, digits and punctuation marks

Submitted by 早过忘川 on 2019-12-02 03:27:19

Question


What I want to do is to split a text into its ultimate elements.

For example:

from nltk.tokenize import *
txt = "A sample sentences with digits like 2.119,99 or 2,99 are awesome."
regexp_tokenize(txt, pattern=r'(?:(?!\d)\w)+|\S+')
['A', 'sample', 'sentences', 'with', 'digits', 'like', '2.119,99', 'or', '2,99', 'are', 'awesome', '.']

You can see that it works fine. My problem is: what happens if a digit is at the end of the text?

txt = "Today it's 07.May 2011. Or 2.999."
regexp_tokenize(txt, pattern=r'(?:(?!\d)\w)+|\S+')
['Today', 'it', "'s", '07.May', '2011.', 'Or', '2.999.']

The result should be: ['Today', 'it', "'s", '07.May', '2011', '.', 'Or', '2.999', '.']

What do I have to do to get the result above?


Answer 1:


I created a pattern that tries to keep periods and commas that occur inside words and numbers, while still splitting them off at token boundaries. Hope this helps:

txt = "Today it's 07.May 2011. Or 2.999."
regexp_tokenize(txt, pattern=r'\w+(?:[.,]\w+)*|\S+')
['Today', 'it', "'s", '07.May', '2011', '.', 'Or', '2.999', '.']
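For reference, the same tokenization can be reproduced with Python's built-in re module alone (no NLTK required). Note the non-capturing group (?:...), since re.findall would otherwise return only the captured group contents instead of the full matches:

```python
import re

# A word optionally extended by interior '.' or ',' segments (e.g. '07.May',
# '2.999'), otherwise any run of non-whitespace (which picks up trailing
# punctuation like the final '.').
pattern = r'\w+(?:[.,]\w+)*|\S+'

txt = "Today it's 07.May 2011. Or 2.999."
print(re.findall(pattern, txt))
# → ['Today', 'it', "'s", '07.May', '2011', '.', 'Or', '2.999', '.']
```

The key point is that [.,]\w+ only matches a period or comma that is followed by another word character, so a sentence-final '.' is left for the \S+ alternative and becomes its own token.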


Source: https://stackoverflow.com/questions/5214177/regex-tokenizer-to-split-a-text-into-words-digits-and-punctuation-marks
