There are so many guides on how to tokenize a sentence, but I didn't find any on how to do the opposite.
import nltk
words = nltk.word_tokenize("I've found a medicine for my disease.")

This gives me ['I', "'ve", 'found', 'a', 'medicine', 'for', 'my', 'disease', '.']. How can I revert these tokens back to the original sentence?
For me, it worked after I installed NLTK 3.2.5 (in later NLTK releases the Moses tokenizer was moved out of NLTK into the separate sacremoses package, so the version matters):

pip install -U nltk

then:
import nltk
nltk.download('perluniprops')  # data package required by the Moses (de)tokenizer
from nltk.tokenize.moses import MosesDetokenizer
detokenizer = MosesDetokenizer()
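As a quick sanity check, a round trip might look like this (the sample sentence is just an illustration):

tokens = nltk.word_tokenize("I've found a medicine for my disease.")
detokenizer.detokenize(tokens, return_str=True)
# "I've found a medicine for my disease."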
If you are working inside a pandas DataFrame, then:
df['detoken'] = df['token_column'].apply(lambda x: detokenizer.detokenize(x, return_str=True))
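A minimal self-contained sketch, assuming each row of the hypothetical 'token_column' already holds a list of tokens:

import pandas as pd
from nltk.tokenize.moses import MosesDetokenizer

detokenizer = MosesDetokenizer()

# hypothetical data: each row is a list of tokens
df = pd.DataFrame({'token_column': [['Hello', ',', 'world', '!'],
                                    ['I', "'ve", 'found', 'it', '.']]})
df['detoken'] = df['token_column'].apply(lambda x: detokenizer.detokenize(x, return_str=True))
print(df['detoken'])
# 0    Hello, world!
# 1    I've found it.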