Scikit-learn is returning coefficient of determination (R^2) values less than -1


Question:

I'm doing a simple linear model. I have

fire = load_data()
regr = linear_model.LinearRegression()
scores = cross_validation.cross_val_score(regr, fire.data, fire.target, cv=10, scoring='r2')
print scores

which yields

[  0.00000000e+00   0.00000000e+00  -8.27299054e+02  -5.80431382e+00
  -1.04444147e-01  -1.19367785e+00  -1.24843536e+00  -3.39950443e-01
   1.95018287e-02  -9.73940970e-02]

How is this possible? When I do the same thing with the built-in diabetes data, it works perfectly fine, but for my data it returns these seemingly absurd results. Have I done something wrong?

Answer 1:

There is no reason r^2 shouldn't be negative (despite the ^2 in its name). This is also stated in the docs. You can see r^2 as a comparison of your model fit (in the context of linear regression, e.g. a model of order 1, i.e. affine) against a model of order 0 (just fitting a constant), both fit by minimizing a squared loss. The constant that minimizes the squared error is the mean. Since you are doing cross-validation with held-out data, it can happen that the mean of your test set is wildly different from the mean of your training set. This alone can make your model's squared error on the test set much larger than that of simply predicting the test data's mean, which results in a negative r^2 score.
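Here is a minimal, self-contained sketch of that effect (the arrays are made up for illustration): if the mean of the test fold is far from the mean of the training fold, even the best constant prediction learned on the training data scores far below zero.

import numpy as np
from sklearn.metrics import r2_score

# Hypothetical split: the training fold happens to contain only small values,
# the test fold only large ones, so their means are far apart.
y_train = np.array([1.0, 2.0, 3.0, 4.0])
y_test = np.array([10.0, 11.0, 12.0, 13.0])

# Order-0 model: always predict the training mean.
pred = np.full_like(y_test, y_train.mean())

# r2_score compares this against predicting the mean of y_test itself,
# so the score comes out strongly negative here.
print(r2_score(y_test, pred))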

In the worst case, if your features do not explain your target at all, these scores can become very strongly negative. Try

import numpy as np
rng = np.random.RandomState(42)
X = rng.randn(100, 80)
y = rng.randn(100)  # y has nothing to do with X whatsoever

from sklearn.linear_model import LinearRegression
from sklearn.cross_validation import cross_val_score

scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring='r2')

This should result in negative r^2 values.

In [23]: scores
Out[23]:
array([-240.17927358,   -5.51819556,  -14.06815196,  -67.87003867,
        -64.14367035])

The important question now is whether this is due to the fact that linear models just do not find anything in your data, or to something else that may be fixed in the preprocessing of your data. Have you tried scaling your columns to have mean 0 and variance 1? You can do this using sklearn.preprocessing.StandardScaler. As a matter of fact, you should create a new estimator by concatenating a StandardScaler and the LinearRegression into a pipeline using sklearn.pipeline.Pipeline. Next you may want to try Ridge regression.
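If you want to try that, here is a minimal sketch of such a pipeline. Note that it uses today's module layout (sklearn.model_selection and make_pipeline) rather than the old sklearn.cross_validation, and load_diabetes is only a stand-in for your own fire.data / fire.target:

from sklearn.datasets import load_diabetes
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Placeholder data; substitute your own fire.data / fire.target here.
X, y = load_diabetes(return_X_y=True)

# Scale every column to mean 0 / variance 1, then fit a regularized linear model.
model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))

scores = cross_val_score(model, X, y, cv=10, scoring='r2')
print(scores)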



Answer 2:

A constant, badly wrong prediction shows how far below zero the score 1 - rss/tss can go:

>>> x = np.arange(50, dtype=float)
>>> y = x
>>> def f(x): return -100
...
>>> rss = np.sum((y - f(x)) ** 2)
>>> tss = np.sum((y - y.mean()) ** 2)
>>> 1 - rss / tss
-74.430972388955581


Answer 3:

Just because R^2 can be negative does not mean it should be.

Possibility 1: a bug in your code.

A common bug that you should double-check is that you are passing the arguments in the right order:

r2_score(y_true, y_pred)  # Correct!
r2_score(y_pred, y_true)  # Incorrect!!!!
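To see why the order matters, here is a small made-up example: the baseline (predict-the-mean) model inside r2_score is built from the first argument, so swapping the arguments computes a different quantity, and with nearly constant predictions the swapped call can return an absurdly negative number.

from sklearn.metrics import r2_score

y_true = [1.0, 2.0, 3.0, 4.0, 5.0]
y_pred = [3.0, 3.0, 3.0, 3.0, 2.9]  # nearly constant predictions

print(r2_score(y_true, y_pred))  # correct order: slightly negative
print(r2_score(y_pred, y_true))  # swapped order: hugely negative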

Possibility 2: small datasets

If you are getting a negative R^2, you could also check for overfitting. Keep in mind that cross_validation.cross_val_score() does not randomly shuffle your inputs, so if your samples are inadvertently sorted (by date, for example) then you might build models on each fold that are not predictive for the other folds.
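One way to guard against that (a sketch assuming the current sklearn.model_selection API; load_diabetes is just a stand-in for your own data) is to pass an explicitly shuffled KFold splitter instead of a plain integer for cv:

from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_diabetes(return_X_y=True)  # stand-in for your own data

# Shuffle before splitting so that a date-sorted dataset does not end up
# with all early rows in one fold and all late rows in another.
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(LinearRegression(), X, y, cv=cv, scoring='r2')
print(scores)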

Try reducing the number of features, increasing the number of samples, and decreasing the number of folds (if you are using cross_validation). While there is no official rule here, your m x n dataset (where m is the number of samples and n is the number of features) should have a shape where

m > n^2 

and when using cross-validation with f folds, you should aim for

m/f > n^2 
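As a quick sanity check of that (informal) heuristic with made-up numbers, say m = 500 samples, n = 10 features, and f = 10 folds:

m, n, f = 500, 10, 10     # samples, features, folds (made-up numbers)
print(m > n ** 2)         # 500 > 100 -> True, fine overall
print(m / f > n ** 2)     # 50 > 100  -> False, so use fewer folds or features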

