How to use word_tokenize in a DataFrame

南笙 2020-12-23 12:23

I have recently started using the nltk module for text analysis. I am stuck at one point. I want to use word_tokenize on a DataFrame, so as to obtain all the words used in a particular row of the DataFrame.

4 Answers
  •  礼貌的吻别
    2020-12-23 13:27

    You can use the apply method of the DataFrame API:

    import pandas as pd
    import nltk

    # word_tokenize needs the punkt tokenizer models;
    # download them once with: nltk.download('punkt')
    df = pd.DataFrame({'sentences': ['This is a very good site. I will recommend it to others.', 'Can you please give me a call at 9983938428. have issues with the listings.', 'good work! keep it up']})

    # Tokenize row by row; axis=1 passes one row at a time to the lambda
    df['tokenized_sents'] = df.apply(lambda row: nltk.word_tokenize(row['sentences']), axis=1)
    

    Output:

    >>> df
                                               sentences  \
    0  This is a very good site. I will recommend it ...   
    1  Can you please give me a call at 9983938428. h...   
    2                              good work! keep it up   
    
                                         tokenized_sents  
    0  [This, is, a, very, good, site, ., I, will, re...  
    1  [Can, you, please, give, me, a, call, at, 9983...  
    2                      [good, work, !, keep, it, up]
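
    Since word_tokenize only needs that one column, you can also apply it to the Series directly, which skips the row-wise axis=1 call; a minimal equivalent sketch:

    df['tokenized_sents'] = df['sentences'].apply(nltk.word_tokenize)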
    

    To find the number of tokens in each text, use apply with a lambda function again:

    # Count the tokens produced for each row
    df['sents_length'] = df.apply(lambda row: len(row['tokenized_sents']), axis=1)
    
    >>> df
                                               sentences  \
    0  This is a very good site. I will recommend it ...   
    1  Can you please give me a call at 9983938428. h...   
    2                              good work! keep it up   
    
                                         tokenized_sents  sents_length  
    0  [This, is, a, very, good, site, ., I, will, re...            14  
    1  [Can, you, please, give, me, a, call, at, 9983...            15  
    2                      [good, work, !, keep, it, up]             6  
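
    Equivalently, you can take the length of each token list straight from the new column; a minimal sketch of the same computation:

    df['sents_length'] = df['tokenized_sents'].apply(len)

    Series.str.len() also works on a column of lists, so df['tokenized_sents'].str.len() gives the same result.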
    
