I have a JSON file containing texts like:

dr. goldberg offers everything. parking is good. he's nice and easy to talk

How can I extract only the sentences that contain a particular word, e.g. 'parking'?
You can use nltk.tokenize:
from nltk.tokenize import sent_tokenize
from nltk.tokenize import word_tokenize

text = open("test_data.json").read()
sentences = sent_tokenize(text)
my_sentences = [sent for sent in sentences if 'parking' in word_tokenize(sent)]  # all sentences that contain your word
Wrapped up as a reusable function:
>>> def sentence_finder(text, word):
...     sentences = sent_tokenize(text)
...     return [sent for sent in sentences if word in word_tokenize(sent)]
>>> s="dr. goldberg offers everything. parking is good. he's nice and easy to talk"
>>> sentence_finder(s,'parking')
['parking is good.']
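Since your data lives in a JSON file, you will usually want to parse it with the json module and run the search over each text field rather than over the raw file contents. Here is a minimal sketch; the "text" key and the list-of-objects layout are assumptions about your file, and the regex split is a rough stand-in for sent_tokenize (which handles abbreviations like "dr." better):

```python
import json
import re

def sentences_with_word(text, word):
    # Naive split on sentence-ending punctuation followed by whitespace;
    # nltk's sent_tokenize is more robust for abbreviations such as "dr.".
    sentences = re.split(r'(?<=[.!?])\s+', text)
    return [s for s in sentences if word in s.lower().split()]

# Hypothetical JSON layout: a list of objects, each with a "text" field.
# with open("test_data.json") as f:
#     for record in json.load(f):
#         print(sentences_with_word(record["text"], "parking"))

print(sentences_with_word(
    "dr. goldberg offers everything. parking is good. he's nice and easy to talk",
    "parking"))
```

Note that the word match here is done on whitespace-split tokens, so a word glued to punctuation (e.g. "parking,") would be missed; word_tokenize from the accepted snippet avoids that.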