What I want to do is split a text into its smallest elements.
For example:
from nltk.tokenize import *
txt = "A sample sentences with digits like 2.119,99 or 2,99 are awesome."
regexp_tokenize(txt, pattern=r'(?:(?!\d)\w)+|\S+')
['A', 'sample', 'sentences', 'with', 'digits', 'like', '2.119,99', 'or', '2,99', 'are', 'awesome', '.']
You can see it works fine. My problem is: what happens if a digit is at the end of the text?
txt = "Today it's 07.May 2011. Or 2.999."
regexp_tokenize(txt, pattern=r'(?:(?!\d)\w)+|\S+')
['Today', 'it', "'s", '07.May', '2011.', 'Or', '2.999.']
The result should be: ['Today', 'it', "'s", '07.May', '2011', '.', 'Or', '2.999', '.']
What do I have to do to get the result above?
I created a pattern that keeps periods and commas only when they occur inside words or numbers. Hope this helps:
txt = "Today it's 07.May 2011. Or 2.999."
regexp_tokenize(txt, pattern=r'\w+(?:[.,]\w+)*|\S+')
['Today', 'it', "'s", '07.May', '2011', '.', 'Or', '2.999', '.']
Note the non-capturing group (?:...): regexp_tokenize uses re.findall under the hood, and a capturing group would make findall return only the group's contents instead of the whole match.
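If you don't want to depend on NLTK at all, the same tokenization can be reproduced with the standard-library re module. This is a minimal sketch under the assumption that plain re.findall behaves like regexp_tokenize here; the tokenize helper name is my own:

```python
import re

# Non-capturing group (?:...) so re.findall returns whole matches,
# not the contents of the last capturing group.
TOKEN_PATTERN = r'\w+(?:[.,]\w+)*|\S+'

def tokenize(text):
    """Split text into words, numbers (keeping internal . and ,), and punctuation."""
    return re.findall(TOKEN_PATTERN, text)

print(tokenize("Today it's 07.May 2011. Or 2.999."))
# → ['Today', 'it', "'s", '07.May', '2011', '.', 'Or', '2.999', '.']
```

A trailing period is split off because [.,]\w+ requires a word character after the punctuation, so a sentence-final "." never attaches to the preceding number.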
Source: https://stackoverflow.com/questions/5214177/regex-tokenizer-to-split-a-text-into-words-digits-and-punctuation-marks