Comparing Results from StandardScaler vs Normalizer in Linear Regression

无人及你 · 2021-02-18 21:40

I'm working through some examples of Linear Regression under different scenarios, comparing the results from using Normalizer and StandardScaler, and …

3 Answers
  • 2021-02-18 22:07

    The last question (3) about the incoherent results with fit_intercept=0 and standardized data has not been answered fully.

    The OP is likely expecting StandardScaler to standardize X and y, which would make the intercept necessarily 0 (proof 1/3 of the way down).

    However, StandardScaler ignores y; see the API.

    TransformedTargetRegressor offers a solution. This approach is also useful for non-linear transformations of the dependent variable such as the log transformation of y (but consider this).

    Here's an example that resolves OP's issue #3:

    from sklearn.base import BaseEstimator, TransformerMixin
    from sklearn.pipeline import make_pipeline
    from sklearn.datasets import make_regression
    from sklearn.compose import TransformedTargetRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import StandardScaler
    import numpy as np
    
    # define a custom transformer
    class stdY(BaseEstimator,TransformerMixin):
        def __init__(self):
            pass
        def fit(self,Y):
            self.std_err_=np.std(Y)
            self.mean_=np.mean(Y)
            return self
        def transform(self,Y):
            return (Y-self.mean_)/self.std_err_
        def inverse_transform(self,Y):
            return Y*self.std_err_+self.mean_
    
    # standardize X and no intercept pipeline
    no_int_pipe=make_pipeline(StandardScaler(),LinearRegression(fit_intercept=0)) # only standardizing X, so not expecting a great fit by itself.
    
    # standardize y pipeline
    std_lin_reg=TransformedTargetRegressor(regressor=no_int_pipe, transformer=stdY()) # transforms y, estimates the model, then reverses the transformation for evaluating loss.
    
    # After returning to re-read my answer, there's an even easier solution: use StandardScaler as the transformer
    std_lin_reg_easy=TransformedTargetRegressor(regressor=no_int_pipe, transformer=StandardScaler())
    
    # generate some simple data
    X, y, w = make_regression(n_samples=100,
                              n_features=3, # x variables generated and returned 
                              n_informative=3, # x variables included in the actual model of y
                              effective_rank=3, # make less than n_informative for multicollinearity
                              coef=True,
                              noise=0.1,
                              random_state=0,
                              bias=10)
    
    std_lin_reg.fit(X,y)
    print('custom transformer on y and no intercept r2_score: ',std_lin_reg.score(X,y))
    
    std_lin_reg_easy.fit(X,y)
    print('standard scaler on y and no intercept r2_score: ',std_lin_reg_easy.score(X,y))
    
    no_int_pipe.fit(X,y)
    print('\nonly standard scaler and no intercept r2_score: ',no_int_pipe.score(X,y))
    

    which returns

    custom transformer on y and no intercept r2_score:  0.9999343800041816
    
    standard scaler on y and no intercept r2_score:  0.9999343800041816
    
    only standard scaler and no intercept r2_score:  0.3319175799267782
    
  • 2021-02-18 22:13

    Answer to Q1

    I am assuming that what you mean by the first 2 models is reg1 and reg2. Let us know if that is not the case.

    A linear regression has the same predictive power whether or not you normalize the data. Therefore, using normalize=True has no impact on the predictions. One way to understand this is to see that normalization (column-wise) is a linear operation on each of the columns ((x - a)/b), and linear transformations of the features do not change a linear regression's fit; they only change the values of the coefficients. Notice that this statement is not true for Lasso/Ridge/ElasticNet.

    So, why aren't the coefficients different? Well, normalize=True also takes into account that what the user normally wants is the coefficients on the original features, not the normalised features. As such, it adjusts the coefficients. One way to check that this makes sense is to use a simpler example:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    
    # two features, normally distributed with sigma=10
    x1 = np.random.normal(0, 10, size=100)
    x2 = np.random.normal(0, 10, size=100)
    
    # y is related to each of them plus some noise
    y = 3 + 2*x1 + 1*x2 + np.random.normal(0, 1, size=100)
    
    X = np.array([x1, x2]).T  # X has two columns
    
    reg1 = LinearRegression().fit(X, y)
    reg2 = LinearRegression(normalize=True).fit(X, y)
    
    # check that coefficients are the same and equal to [2,1]
    np.testing.assert_allclose(reg1.coef_, reg2.coef_) 
    np.testing.assert_allclose(reg1.coef_, np.array([2, 1]), rtol=0.01)
    

    This confirms that both methods correctly capture the real signal between [x1, x2] and y, namely the true coefficients 2 and 1 respectively.

    Answer to Q2

    Normalizer is not what you would expect. It normalises each row of X to unit norm. So the results will change dramatically, and this will likely destroy the relationship between the features and the target, which is something you want to avoid except in specific cases (e.g. TF-IDF).

    To see how, assume the example above, but consider a different feature, x3, that is not related to y. Using Normalizer causes x1 to be modified by the value of x3, decreasing the strength of its relationship with y.
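
    A minimal sketch of this effect (x3 here is a hypothetical unrelated feature, not part of the OP's data):

    import numpy as np
    from sklearn.preprocessing import Normalizer
    
    rng = np.random.default_rng(0)
    x1 = rng.normal(0, 10, size=5)
    x2 = rng.normal(0, 10, size=5)
    x3 = rng.normal(0, 100, size=5)  # unrelated, large-scale feature
    
    # Normalizer rescales each ROW to unit (l2) norm, so the transformed x1
    # depends on whatever else happens to be in the same row.
    without_x3 = Normalizer().fit_transform(np.column_stack([x1, x2]))
    with_x3 = Normalizer().fit_transform(np.column_stack([x1, x2, x3]))
    
    print(without_x3[:, 0])  # x1 after row-normalizing [x1, x2]
    print(with_x3[:, 0])     # x1 after row-normalizing [x1, x2, x3] -- very different values

    Each row is divided by its own norm, so adding x3 changes every transformed value of x1 even though x3 has nothing to do with y.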

    Discrepancy of coefficients between models (1,2) and (4,5)

    The discrepancy between the coefficients is that when you standardise before fitting, the coefficients are with respect to the standardised features, the same coefficients I referred to in the first part of the answer. They can be mapped back to the original parameters using reg4.coef_ / scaler.scale_:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import StandardScaler
    
    x1 = np.random.normal(0, 10, size=100)
    x2 = np.random.normal(0, 10, size=100)
    y = 3 + 2*x1 + 1*x2 + np.random.normal(0, 1, size=100)
    X = np.array([x1, x2]).T
    
    reg1 = LinearRegression().fit(X, y)
    reg2 = LinearRegression(normalize=True).fit(X, y)
    scaler = StandardScaler()
    reg4 = LinearRegression().fit(scaler.fit_transform(X), y)
    
    np.testing.assert_allclose(reg1.coef_, reg2.coef_) 
    np.testing.assert_allclose(reg1.coef_, np.array([2, 1]), rtol=0.01)
    
    # here
    coefficients = reg4.coef_ / scaler.scale_
    np.testing.assert_allclose(coefficients, np.array([2, 1]), rtol=0.01)
    

    This is because, mathematically, setting z = (x - mu)/sigma, the model reg4 is solving y = a1*z1 + a2*z2 + a0. We can recover the relationship between y and x through simple algebra: y = a1*[(x1 - mu1)/sigma1] + a2*[(x2 - mu2)/sigma2] + a0, which can be simplified to y = (a1/sigma1)*x1 + (a2/sigma2)*x2 + (a0 - a1*mu1/sigma1 - a2*mu2/sigma2).

    reg4.coef_ / scaler.scale_ represents [a1/sigma1, a2/sigma2] in the above notation, which is exactly what normalize=True does to guarantee that the coefficients are the same.

    Discrepancy of the score of model 5.

    Standardized features have zero mean, but the target variable does not necessarily. Therefore, not fitting the intercept causes the model to disregard the mean of the target. In the example that I have been using, the "3" in y = 3 + ... is not fitted, which naturally decreases the predictive power of the model. :)
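
    A minimal sketch of that effect, reusing X, y, scaler and reg4 from the snippet above (reg5 is just my label for the standardized-X, no-intercept model):

    # model 5: standardized X, no intercept
    reg5 = LinearRegression(fit_intercept=False).fit(scaler.fit_transform(X), y)
    
    print(reg4.score(scaler.transform(X), y))  # close to 1: the intercept absorbs mean(y)
    print(reg5.score(scaler.transform(X), y))  # lower: the "3" offset stays in the residuals
    print(reg4.intercept_, y.mean())           # with standardized X, intercept ~= mean(y)

    How big the drop is depends on how large mean(y) is relative to the variance of y.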

  • 2021-02-18 22:20
    1. The reason there is no difference in coefficients between the first two models is that sklearn de-normalizes the coefficients behind the scenes after calculating them from the normalized input data. Reference

    This de-normalization is done so that, given test data, we can apply the coefficients directly and get predictions without normalizing the test data.

    Hence, setting normalize=True does have an impact on the coefficients, but it does not affect the best-fit line (a sketch of the de-normalization follows below).
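
    Roughly, this de-normalization can be sketched by hand (the manual centering/scaling below mimics what normalize=True does internally; exact implementation details may differ):

    import numpy as np
    from sklearn.linear_model import LinearRegression
    
    np.random.seed(0)
    X = np.random.normal(0, 10, size=(100, 2))
    y = 3 + X @ np.array([2.0, 1.0]) + np.random.normal(0, 1, size=100)
    
    # normalize=True centers each column and divides it by its l2 norm before fitting,
    # then rescales the coefficients back so they apply to the original features.
    norms = np.linalg.norm(X - X.mean(axis=0), axis=0)
    reg_norm = LinearRegression().fit((X - X.mean(axis=0)) / norms, y)
    reg_plain = LinearRegression().fit(X, y)
    
    # de-normalization: divide the coefficients learned on the scaled columns by the norms
    print(reg_norm.coef_ / norms)  # ~[2, 1]
    print(reg_plain.coef_)         # ~[2, 1], same best-fit line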

    2. Normalizer does the normalization with respect to each sample (meaning row-wise). You can see the reference code here.

    From documentation:

    Normalize samples individually to unit norm.

    whereas normalize=True does the normalization with respect to each column/feature. Reference

    Here is an example to understand the impact of normalization along different dimensions of the data. Take two dimensions x1 & x2, with y as the target variable; the target value is color-coded in the figure.

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.preprocessing import Normalizer, StandardScaler, normalize
    
    n=50
    x1 = np.random.normal(0, 2, size=n)
    x2 = np.random.normal(0, 2, size=n)
    noise = np.random.normal(0, 1, size=n)
    y = 5 + 0.5*x1 + 2.5*x2 + noise
    
    fig,ax=plt.subplots(1,4,figsize=(20,6))
    
    ax[0].scatter(x1,x2,c=y)
    ax[0].set_title('raw_data',size=15)
    
    X = np.column_stack((x1,x2))
    
    column_normalized=normalize(X, axis=0)
    ax[1].scatter(column_normalized[:,0],column_normalized[:,1],c=y)
    ax[1].set_title('column_normalized data',size=15)
    
    row_normalized=Normalizer().fit_transform(X)
    ax[2].scatter(row_normalized[:,0],row_normalized[:,1],c=y)
    ax[2].set_title('row_normalized data',size=15)
    
    standardized_data=StandardScaler().fit_transform(X)
    ax[3].scatter(standardized_data[:,0],standardized_data[:,1],c=y)
    ax[3].set_title('standardized data',size=15)
    
    plt.subplots_adjust(left=0.3, bottom=None, right=0.9, top=None, wspace=0.3, hspace=None)
    plt.show()
    

    You can see that the best-fit line for the data in figures 1, 2 and 4 would be the same, which signifies that the R2 score will not change due to column/feature normalization or standardization; it just ends up with different coefficient values (a quick numerical check is sketched below).

    Note: the best-fit line for fig 3 would be different.
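
    A quick numerical check of that claim, building on the arrays from the code above:

    from sklearn.linear_model import LinearRegression
    
    # R2 is essentially identical for raw, column-normalized and standardized data;
    # only the row-normalized data changes the fit.
    for name, data in [('raw', X),
                       ('column_normalized', column_normalized),
                       ('row_normalized', row_normalized),
                       ('standardized', standardized_data)]:
        print(name, LinearRegression().fit(data, y).score(data, y))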

    3. When you set fit_intercept=False, the bias term is dropped from the prediction, meaning the intercept is set to zero; with standardized features, it would otherwise have been the mean of the target variable.

    A prediction with the intercept forced to zero is expected to perform badly for problems where the target variable is not centered (mean = 0). You can see a difference of 22.532 in every row, which shows the impact on the output.
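
    A small sketch of that constant shift, reusing X and y from the code above (the 22.532 figure refers to the OP's own results, not this toy data):

    from sklearn.linear_model import LinearRegression
    
    Z = StandardScaler().fit_transform(X)
    with_int = LinearRegression().fit(Z, y)
    no_int = LinearRegression(fit_intercept=False).fit(Z, y)
    
    print(with_int.intercept_, y.mean())  # with standardized X, the intercept ~= mean(y)
    # every prediction of the no-intercept model is off by that same constant
    print((with_int.predict(Z) - no_int.predict(Z))[:5])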
