How does TfidfVectorizer compute scores on test data

Submitted by 自闭症网瘾萝莉.ら on 2020-05-13 05:36:06

Question


In scikit-learn, TfidfVectorizer allows us to fit on training data and later use the same vectorizer to transform our test data. The output of transforming the training data is a matrix holding a tf-idf score for each word in each document.

However, how does the fitted vectorizer compute the score for new inputs? I have guessed that either:

  1. The score of a word in a new document is computed by some aggregation of that word's scores over the documents in the training set.
  2. The new document is 'added' to the existing corpus and the scores are recalculated.

I have tried deducing the operation from scikit-learn's source code but could not quite figure it out. Is it one of the options mentioned above, or something else entirely? Please assist.


Answer 1:


It is definitely the former: each word's idf (inverse document frequency) is calculated from the training documents only. This makes sense, because these values are precisely the ones computed when you call fit on your vectorizer. If the second option you describe were true, we would essentially refit the vectorizer for every new input, and we would also cause information leakage, since idfs from the test set would be used during model evaluation.

Beyond these purely conceptual explanations, you can also run the following code to convince yourself:

from sklearn.feature_extraction.text import TfidfVectorizer
vect = TfidfVectorizer()
x_train = ["We love apples", "We really love bananas"]
vect.fit(x_train)
# get_feature_names_out() replaces the deprecated get_feature_names()
print(vect.get_feature_names_out())
>>> ['apples' 'bananas' 'love' 'really' 'we']

x_test = ["We really love pears"]

vectorized = vect.transform(x_test)
print(vectorized.toarray())
>>> array([[0.        , 0.        , 0.50154891, 0.70490949, 0.50154891]])
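
As a quick sanity check, the idfs learned at fit time are stored on the fitted vectorizer, exposed through its idf_ attribute, and transform reuses them as-is (using the same vect fitted above):

print(vect.idf_)
>>> [1.40546511 1.40546511 1.         1.40546511 1.        ]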

Following the reasoning of how fit works, you can recalculate these tf-idf values yourself:

"apples" and "bananas" obviously have a tfidf score of 0 because they do not appear in x_test. "pears", on the other hand, does not exist in x_train and so will not even appear in the vectorization. Hence, only "love", "really" and "we" will have a tfidf score.

Scikit-learn implements tf-idf as (log((1+n)/(1+df)) + 1) * tf (with the default smooth_idf=True and a natural log), where n is the number of documents in the training set (2 in our case), df is the number of training documents in which the word appears, and tf is the count of the word in the test document. Hence:

import numpy as np

# idf = log((1+n)/(1+df)) + 1, multiplied by the word's count in the test document
tfidf_love   = (np.log((1 + 2) / (1 + 2)) + 1) * 1  # df = 2
tfidf_really = (np.log((1 + 2) / (1 + 1)) + 1) * 1  # df = 1
tfidf_we     = (np.log((1 + 2) / (1 + 2)) + 1) * 1  # df = 2

You then need to scale these tf-idf scores by the L2 norm of the document vector:

tfidf_non_scaled = np.array([tfidf_love, tfidf_really, tfidf_we])
tfidf_list = tfidf_non_scaled / np.linalg.norm(tfidf_non_scaled)  # divide by the L2 norm

print(tfidf_list)
>>> [0.50154891 0.70490949 0.50154891]

You can see that we indeed get the same values, which confirms how scikit-learn implements this computation.
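
To tie everything together, here is a compact, self-contained version of the whole check on the same toy corpus, comparing the manual computation against scikit-learn's output:

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

vect = TfidfVectorizer()
vect.fit(["We love apples", "We really love bananas"])
sklearn_row = vect.transform(["We really love pears"]).toarray()[0]

# Columns follow the sorted vocabulary: apples, bananas, love, really, we
n = 2  # number of training documents
manual = np.array([
    0.0,                             # apples: tf = 0 in the test document
    0.0,                             # bananas: tf = 0
    np.log((1 + n) / (1 + 2)) + 1,   # love: df = 2, tf = 1
    np.log((1 + n) / (1 + 1)) + 1,   # really: df = 1, tf = 1
    np.log((1 + n) / (1 + 2)) + 1,   # we: df = 2, tf = 1
])
manual = manual / np.linalg.norm(manual)  # L2-normalize the document vector

print(np.allclose(sklearn_row, manual))
>>> True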



Source: https://stackoverflow.com/questions/55707577/how-does-tfidfvectorizer-compute-scores-on-test-data
