how to view tf-idf score against each word

Submitted by 我是研究僧i on 2020-12-13 05:56:40

Question


I was trying to find the tf-idf score of each word in my document. However, the transform only returns values in a matrix, and I want to see a readable representation of the tf-idf score against each word.

I have processed the text and the code works; however, I want to change the way the result is presented:

code:

from sklearn.feature_extraction.text import CountVectorizer 
from sklearn.feature_extraction.text import TfidfTransformer

bow_transformer = CountVectorizer(analyzer=text_process).fit(df["comments"].head())
print(len(bow_transformer.vocabulary_))

message_bow = bow_transformer.transform(df["comments"].head())

tfidf_transformer = TfidfTransformer().fit(message_bow)
message_tfidf = tfidf_transformer.transform(message_bow)

I get results like (39028,01),(1393,1672), which are just raw matrix positions and values rather than words. However, I expect the results to look like this:

features    tfidf
fruit       0.00344
excellent   0.00289

Answer 1:


You can achieve the above result by using the following code:

def extract_topn_from_vector(feature_names, sorted_items, topn=5):
    """
      get the feature names and tf-idf score of top n items in the doc,                 
      in descending order of scores. 
    """

    # use only top n items from vector.
    sorted_items = sorted_items[:topn]

    results = {}
    # map each word index to its tf-idf score
    for idx, score in sorted_items:
        results[feature_names[idx]] = round(score, 3)

    # return (feature, score) tuples sorted in descending order of tf-idf score
    return sorted(results.items(), key=lambda kv: kv[1], reverse=True)

feature_names = count_vect.get_feature_names()
coo_matrix = message_tfidf.tocoo()
tuples = zip(coo_matrix.col, coo_matrix.data)
sorted_items = sorted(tuples, key=lambda x: (x[1], x[0]), reverse=True)

# extract only the top n elements.
# Here, n is 10.
word_tfidf = extract_topn_from_vector(feature_names, sorted_items, 10)

print("{}  {}".format("features", "tfidf"))  
for k in word_tfidf:
    print("{} - {}".format(k[0], k[1])) 

Check out the full code below to get a better idea of the snippet above; it is largely self-explanatory.

Full Code:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from nltk.corpus import stopwords
import string
import re
import nltk
import pandas as pd

data = pd.read_csv('yourfile.csv')

stops = set(stopwords.words("english"))
wl = nltk.WordNetLemmatizer()

def clean_text(text):
    """
      - remove punctuation
      - tokenize
      - remove stopwords
      - lemmatize
    """
    text_nopunct = "".join([char for char in text if char not in string.punctuation])
    tokens = re.split(r"\W+", text_nopunct)
    text = [word for word in tokens if word not in stops]
    text = [wl.lemmatize(word) for word in text]
    return text

def extract_topn_from_vector(feature_names, sorted_items, topn=5):
    """
      get the feature names and tf-idf score of top n items in the doc,                 
      in descending order of scores. 
    """

    # use only top n items from vector.
    sorted_items = sorted_items[:topn]

    results = {}
    # map each word index to its tf-idf score
    for idx, score in sorted_items:
        results[feature_names[idx]] = round(score, 3)

    # return (feature, score) tuples sorted in descending order of tf-idf score
    return sorted(results.items(), key=lambda kv: kv[1], reverse=True)

count_vect = CountVectorizer(analyzer=clean_text, tokenizer=None, preprocessor=None, stop_words=None, max_features=5000)
freq_term_matrix = count_vect.fit_transform(data['text_body'])

tfidf = TfidfTransformer(norm="l2")
tfidf.fit(freq_term_matrix)  

feature_names = count_vect.get_feature_names()

# sample document
doc = 'watched horrid thing TV. Needless say one movies watch see much worse get.'

tf_idf_vector = tfidf.transform(count_vect.transform([doc]))

coo_matrix = tf_idf_vector.tocoo()
tuples = zip(coo_matrix.col, coo_matrix.data)
sorted_items = sorted(tuples, key=lambda x: (x[1], x[0]), reverse=True)

# extract only the top n elements.
# Here, n is 10.
word_tfidf = extract_topn_from_vector(feature_names,sorted_items,10)

print("{}  {}".format("features", "tfidf"))  
for k in word_tfidf:
    print("{} - {}".format(k[0], k[1])) 

Sample output:

features  tfidf
Needless - 0.515
horrid - 0.501
worse - 0.312
watched - 0.275
TV - 0.272
say - 0.202
watch - 0.199
thing - 0.189
much - 0.177
see - 0.164
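
As a simpler alternative using the same fitted objects (a minimal sketch, assuming the tf_idf_vector and feature_names variables from the full code above), you can label the document's tf-idf row with pandas and sort it:

import pandas as pd

# densify the single-document tf-idf row, attach the vocabulary as the index,
# and keep only the non-zero scores in descending order
scores = pd.Series(tf_idf_vector.toarray()[0], index=feature_names)
print(scores[scores > 0].sort_values(ascending=False).head(10))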



Answer 2:


from sklearn.feature_extraction.text import TfidfVectorizer
import pandas as pd
vect = TfidfVectorizer()
tfidf_matrix = vect.fit_transform(documents["comments"])
df = pd.DataFrame(tfidf_matrix.toarray(),columns=vect.get_feature_names())
print(df)
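
To show only the highest-scoring words of a single document instead of the full matrix, a small follow-up on the same DataFrame (here row 0, i.e. the first comment):

# top 5 tf-idf terms for the first document
print(df.iloc[0].sort_values(ascending=False).head(5))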

See also: sklearn : TFIDF Transformer : How to get tf-idf values of given words in document



Source: https://stackoverflow.com/questions/56914017/how-to-view-tf-idf-score-against-each-word
