`warm_start` Parameter And Its Impact On Computational Time


Question


I have a logistic regression model with a defined set of parameters (warm_start=True).

As always, I call LogisticRegression.fit(X_train, y_train) and afterwards use the model to predict new outcomes.

Suppose I alter some parameter, say C=100, and call the .fit method again using the same training data.


Theoretically, I think the second call to .fit should take less computational time than it would with warm_start=False. However, empirically this turns out not to be true.
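
For reference, a minimal sketch of the kind of timing experiment I mean (make_classification is just a stand-in for my actual training data; the timings are illustrative):

import time

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# synthetic stand-in for X_train, y_train
X_train, y_train = make_classification(n_samples=10000, n_features=20, random_state=0)

clf = LogisticRegression(warm_start=True)

start = time.perf_counter()
clf.fit(X_train, y_train)  # first fit, starting from scratch
print('first fit:', time.perf_counter() - start)

clf.set_params(C=100)  # alter a parameter
start = time.perf_counter()
clf.fit(X_train, y_train)  # second fit on the same data
print('second fit:', time.perf_counter() - start)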

Please, help me understand the concept of warm_start parameter.

P.S.: I have also experimented with SGDClassifier().


Answer 1:


I hope you understand the basic concept: with warm_start=True, the solution of the previous call to fit is reused as the initialization for the next fit.

The documentation states that the warm_start parameter is useless with the liblinear solver, as there is no working implementation for this special linear case. On top of that, liblinear is the default solver for LogisticRegression (in scikit-learn versions before 0.22), which basically means that the weights will be completely reinitialized before each new fit.
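
A quick sketch to see this in practice: n_iter_ reports the iteration count of the last fit, and with liblinear a repeated fit on the same data does not get cheaper (exact counts depend on the data and the scikit-learn version):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, random_state=0)

clf = LogisticRegression(solver='liblinear', warm_start=True)
clf.fit(X, y)
print(clf.n_iter_)  # iterations for the first fit

clf.fit(X, y)
print(clf.n_iter_)  # same count: the previous solution was not reused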

To utilize the warm_start parameter and reduce the computational time, you should use one of the following solvers for your LogisticRegression:

  • newton-cg or lbfgs, which support the L2-norm penalty and are also usually better for multiclass problems;
  • sag or saga, which converge faster than the liblinear solver on larger datasets and can use a multinomial loss during descent.

Simple example

from sklearn.linear_model import LogisticRegression

X = [[1, 2, 3], [4, 5, 6], [1, 2, 3]]
y = [1, 0, 1]

# with solver='sag', warm_start reuses the previous solution at each new fit
clf = LogisticRegression(solver='sag', warm_start=True)

clf.fit(X, y)
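
To actually observe the warm start, you can continue the example above with a second fit after changing a hyperparameter and compare n_iter_ (a sketch; iteration counts depend on the data and the scikit-learn version):

print(clf.n_iter_)     # iterations used by the first fit

clf.set_params(C=100)  # change a hyperparameter, keep the learned coefficients
clf.fit(X, y)          # this fit starts from the previous solution
print(clf.n_iter_)     # often fewer iterations than a cold start would need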

I hope that helps.



Source: https://stackoverflow.com/questions/45651096/warm-start-parameter-and-its-impact-on-computational-time
