Question
I am using the Python SKLearn module to perform logistic regression. I have a dependent variable vector Y
(taking one of M class values) and an independent variable matrix X
(with N features). My code is
LR = LogisticRegression()
LR.fit(X,np.resize(Y,(len(Y))))
My question is: what do LR.coef_
and LR.intercept_
represent? I initially thought they held the values intercept(i)
and coef(i,j)
such that
log(p(1)/(1-p(1))) = intercept(1) + coef(1,1)*X1 + ... + coef(1,N)*XN
...
log(p(M)/(1-p(M))) = intercept(M) + coef(M,1)*X1 + ... + coef(M,N)*XN
where p(i)
is the probability that an observation with features [X1, ... ,XN]
is in class i
. However, when I try to recover these probabilities with
V = X*LR.coef_.transpose()
U = V + LR.intercept_
A = np.exp(U)
P = A/(1+A)
then P
should be the matrix of p(1) ... p(M)
for the observations in X
and therefore match
LR.predict_proba(X)
, but while the two results are close, they are not identical. Why is this?
Answer 1:
The coef_
and intercept_
attributes represent exactly what you think; your probability calculations are off because you forgot to normalize. After
P = A / (1 + A)
you should do
P /= P.sum(axis=1).reshape((-1, 1))
to reproduce the scikit-learn algorithm.
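For completeness, here is a minimal sketch of the whole check. The iris dataset stands in for the original X and Y, which aren't shown in the question. Note that recent scikit-learn versions default to a multinomial (softmax) model for multiclass problems, so multi_class='ovr' is passed explicitly here to get the one-vs-rest behaviour assumed above; that parameter is deprecated in the newest releases.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in data; the original X and Y are not shown in the question.
X, Y = load_iris(return_X_y=True)

# multi_class='ovr' forces the one-vs-rest fit assumed above; it was the default
# when this question was asked and is deprecated in very recent releases.
LR = LogisticRegression(multi_class='ovr', max_iter=1000)
LR.fit(X, Y)

# Per-class logits and sigmoid scores, exactly as in the question.
U = X @ LR.coef_.T + LR.intercept_
A = np.exp(U)
P = A / (1 + A)

# Normalize each row so the class probabilities sum to 1.
P /= P.sum(axis=1).reshape((-1, 1))

print(np.allclose(P, LR.predict_proba(X)))  # expect: True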
Source: https://stackoverflow.com/questions/20442873/python-sklearn-logistic-regression-probabilities