I am using NLTK's PunktSentenceTokenizer
to tokenize a text into a set of sentences. However, the tokenizer doesn't seem to consider a new paragraph or new lines as the start of a new sentence.
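For example, here is a stripped-down version of what I am doing (it assumes the punkt model has already been downloaded with nltk.download('punkt')):

import nltk

tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')

text = "A heading with no final punctuation\nFirst sentence of the paragraph. Second sentence."
print(tokenizer.tokenize(text))
# the heading comes back merged with the sentence that follows it,
# because the newline on its own is not treated as a sentence boundary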
Well, I had the same problem, and what I did was split the text on '\n'. Something like this:
import nltk

# load the standard Punkt sentence tokenizer shipped with NLTK
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')

# in my case, when the text had '\n', I called it a new paragraph,
# like a collection of sentences
paragraphs = [p for p in text.split('\n') if p]
# and here, sentence-tokenize each one of the paragraphs
for paragraph in paragraphs:
    sentences = tokenizer.tokenize(paragraph)
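If you need all the sentences in one flat list rather than one list per paragraph, the same idea collapses into a single comprehension (reusing the paragraphs and tokenizer from above):

all_sentences = [s for p in paragraphs for s in tokenizer.tokenize(p)]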
This is a simplified version of what I had in production, but the general idea is the same. And sorry about the comments and docstrings in Portuguese; this was done for educational purposes, for a Brazilian audience.
def paragraphs(self):
    # paragraphs are built lazily and cached in self._paragraphs,
    # so splitting only happens on the first iteration
    if self._paragraphs is not None:
        for p in self._paragraphs:
            yield p
    else:
        raw_paras = self.raw_text.split(self.paragraph_delimiter)
        gen = (Paragraph(self, p) for p in raw_paras if p)
        self._paragraphs = []
        for p in gen:
            self._paragraphs.append(p)
            yield p
Full code: https://gitorious.org/restjor/restjor/source/4d684ea4f18f66b097be1e10cc8814736888dfb4:restjor/decomposition.py
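For context, here is a minimal, self-contained sketch of how a method like that can sit in a small wrapper class (the Document and Paragraph names and the sent_tokenize call are my own assumptions for the example, not the actual API of decomposition.py):

import nltk  # requires the punkt data: nltk.download('punkt')

class Paragraph:
    def __init__(self, document, raw_text):
        self.document = document
        self.raw_text = raw_text

    def sentences(self):
        # sentence-tokenize just this paragraph
        return nltk.sent_tokenize(self.raw_text)

class Document:
    paragraph_delimiter = '\n'

    def __init__(self, raw_text):
        self.raw_text = raw_text
        self._paragraphs = None  # cache, filled on first iteration

    def paragraphs(self):
        if self._paragraphs is not None:
            for p in self._paragraphs:
                yield p
        else:
            raw_paras = self.raw_text.split(self.paragraph_delimiter)
            gen = (Paragraph(self, p) for p in raw_paras if p)
            self._paragraphs = []
            for p in gen:
                self._paragraphs.append(p)
                yield p

# usage: one list of sentences per paragraph
doc = Document("First paragraph. It has two sentences.\nSecond paragraph, one sentence.")
for para in doc.paragraphs():
    print(para.sentences())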