Question
I am trying to tokenize text using RegexpTokenizer.
Code:
from nltk.tokenize import RegexpTokenizer
#from nltk.tokenize import word_tokenize
line = "U.S.A Count U.S.A. Sec.of U.S. Name:Dr.John Doe J.Doe 1.11 1,000 10--20 10-20"
pattern = '[\d|\.|\,]+|[A-Z][\.|A-Z]+\b[\.]*|[\w]+|\S'
tokenizer = RegexpTokenizer(pattern)
print(tokenizer.tokenize(line))
#print(word_tokenize(line))
Output:
['U', '.', 'S', '.', 'A', 'Count', 'U', '.', 'S', '.', 'A', '.', 'Sec', '.', 'of', 'U', '.', 'S', '.', 'Name', ':', 'Dr', '.', 'John', 'Doe', 'J', '.', 'Doe', '1.11', '1,000', '10', '-', '-', '20', '10', '-', '20']
Expected Output:
['U.S.A', 'Count', 'U.S.A.', 'Sec', '.', 'of', 'U.S.', 'Name', ':', 'Dr', '.', 'John', 'Doe', 'J.', 'Doe', '1.11', '1,000', '10', '-', '-', '20', '10', '-', '20']
Why is the tokenizer also splitting my expected tokens "U.S.A" and "U.S."? How can I resolve this issue?
My regex: https://regex101.com/r/dS1jW9/1
Answer 1:
The point is that your \b was parsed as a backspace character because the pattern is not a raw string literal; you need to use a raw string literal. Also, the literal pipes inside your character classes will mess up your output.
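As a quick illustration (my own example, not from the original answer), you can see the difference between the escaped and raw forms in an interactive session:
>>> len('\b'), len(r'\b')  # '\b' is one backspace character; r'\b' keeps the backslash and the b
(1, 2)
>>> import re
>>> re.findall('[A-Z][.A-Z]+\b', 'U.S.A')   # \b here is a literal backspace, so nothing matches
[]
>>> re.findall(r'[A-Z][.A-Z]+\b', 'U.S.A')  # raw string: \b is a word-boundary assertion
['U.S.A']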
This works as expected:
>>> pattern = r'[\d.,]+|[A-Z][.A-Z]+\b\.*|\w+|\S'
>>> tokenizer = RegexpTokenizer(pattern)
>>> print(tokenizer.tokenize(line))
['U.S.A', 'Count', 'U.S.A.', 'Sec', '.', 'of', 'U.S.', 'Name', ':', 'Dr', '.', 'John', 'Doe', 'J.', 'Doe', '1.11', '1,000', '10', '-', '-', '20', '10', '-', '20']
Note that putting a single \w into a character class is pointless. Also, you do not need to escape every non-word character (like a dot) inside a character class, since most characters are treated as literal characters there (only ^, ], - and \ require special attention).
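For illustration (again my own example, not part of the answer), unescaped dots, commas and pipes are literal inside a character class:
>>> import re
>>> re.findall(r'[\d.,]+', '1.11 and 1,000')  # unescaped . and , are literal inside [...]
['1.11', '1,000']
>>> re.findall(r'[\d|.,]+', '1.11|2')         # a pipe inside [...] is a literal '|', not alternation
['1.11|2']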
Answer 2:
If you modify your regex to
pattern = '[USA\.]{4,}|[\w]+|[\S]'
and run
tokenizer = RegexpTokenizer(pattern)
print(tokenizer.tokenize(line))
you get this output:
['U.S.A', 'Count', 'U.S.A.', 'Sec', '.', 'of', 'U.S.', 'Name', ':', 'Dr', '.', 'John', 'Doe', 'J', '.', 'Doe', '1', '.', '11', '1', ',', '000', '10', '-', '-', '20', '10', '-', '20']
Source: https://stackoverflow.com/questions/39144991/nltk-nltk-tokenize-regexptokenizer-regex-not-working-as-expected