nltk sentence tokenizer, consider new lines as sentence boundary


Question


I am using nltk's PunktSentenceTokenizer to tokenize a text into a set of sentences. However, the tokenizer doesn't seem to consider a new paragraph or new lines as a sentence boundary.

>>> from nltk.tokenize.punkt import PunktSentenceTokenizer
>>> tokenizer = PunktSentenceTokenizer()
>>> tokenizer.tokenize('Sentence 1 \n Sentence 2. Sentence 3.')
['Sentence 1 \n Sentence 2.', 'Sentence 3.']
>>> tokenizer.span_tokenize('Sentence 1 \n Sentence 2. Sentence 3.')
[(0, 24), (25, 36)]

I would like it to consider new lines as sentence boundaries as well. Is there any way to do this? (I need to save the offsets too.)


Answer 1:


Well, I had the same problem, and what I did was split the text on '\n'. Something like this:

from nltk.tokenize.punkt import PunktSentenceTokenizer

tokenizer = PunktSentenceTokenizer()
text = 'Sentence 1 \n Sentence 2. Sentence 3.'

# in my case, when the text had '\n', I treated it as a new paragraph,
# i.e. a collection of sentences
paragraphs = [p for p in text.split('\n') if p]

# then run the sentence tokenizer on each paragraph separately
for paragraph in paragraphs:
    sentences = tokenizer.tokenize(paragraph)
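
Since the question also needs the character offsets, here is a minimal sketch (not part of the original answer) that combines the same split-on-'\n' idea with span_tokenize, shifting each paragraph's spans back into coordinates of the full text; the helper name sentence_spans is just for illustration:

from nltk.tokenize.punkt import PunktSentenceTokenizer

def sentence_spans(text):
    # tokenize each newline-delimited paragraph separately, then shift
    # the (start, end) spans by the paragraph's position in the full text
    tokenizer = PunktSentenceTokenizer()
    spans = []
    offset = 0
    for paragraph in text.split('\n'):
        for start, end in tokenizer.span_tokenize(paragraph):
            spans.append((offset + start, offset + end))
        offset += len(paragraph) + 1  # +1 for the '\n' removed by split()
    return spans

print(sentence_spans('Sentence 1 \n Sentence 2. Sentence 3.'))
# each sentence now gets its own span, measured against the original string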

This is a simplified version of what I had in production, but the general idea is the same. And, sorry about the comments and docstrings in Portuguese; this was done for 'educational purposes' for a Brazilian audience.

def paragraphs(self):
    # lazily split the raw text into Paragraph objects and cache them,
    # so subsequent calls reuse self._paragraphs
    if self._paragraphs is not None:
        for p in self._paragraphs:
            yield p
    else:
        raw_paras = self.raw_text.split(self.paragraph_delimiter)
        gen = (Paragraph(self, p) for p in raw_paras if p)
        self._paragraphs = []
        for p in gen:
            self._paragraphs.append(p)
            yield p

Full code: https://gitorious.org/restjor/restjor/source/4d684ea4f18f66b097be1e10cc8814736888dfb4:restjor/decomposition.py#Lundefined



Source: https://stackoverflow.com/questions/29041603/nltk-sentence-tokenizer-consider-new-lines-as-sentence-boundary
