logistic-regression

Scikit Learn: Logistic Regression model coefficients: Clarification

一曲冷凌霜 submitted on 2019-12-29 11:35:10
Question: I need to know how to return the logistic regression coefficients in such a manner that I can generate the predicted probabilities myself. My code looks like this:

    lr = LogisticRegression()
    lr.fit(training_data, binary_labels)
    # Generate probabilities automatically
    predicted_probs = lr.predict_proba(binary_labels)

I had assumed the lr.coef_ values would follow typical logistic regression, so that I could return the predicted probabilities like this: sigmoid( dot([val1, val2, offset], lr.coef_
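In case it helps future readers: scikit-learn stores the slopes in lr.coef_ and the offset separately in lr.intercept_, so a manual computation has to add the intercept rather than fold it into coef_ (also note that predict_proba takes the feature matrix, not the labels). A minimal sketch reproducing predict_proba by hand; the toy X/y arrays are made up for illustration:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X = np.array([[0.5, 1.2], [1.5, -0.3], [3.1, 0.8], [-0.7, 2.2]])
    y = np.array([0, 0, 1, 1])

    lr = LogisticRegression()
    lr.fit(X, y)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Class-1 probabilities: sigmoid of the linear score, intercept included.
    manual = sigmoid(X @ lr.coef_.T + lr.intercept_).ravel()
    print(np.allclose(manual, lr.predict_proba(X)[:, 1]))  # True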

Cost function in logistic regression gives NaN as a result

被刻印的时光 ゝ submitted on 2019-12-28 02:50:27
Question: I am implementing logistic regression using batch gradient descent. There are two classes into which the input samples are to be classified, 1 and 0. While training the data, I am using the following sigmoid function:

    t = 1 ./ (1 + exp(-z));

where z = x*theta. And I am using the following cost function to calculate the cost, to determine when to stop training:

    function cost = computeCost(x, y, theta)
        htheta = sigmoid(x*theta);
        cost = sum(-y .* log(htheta) - (1-y) .* log(1-htheta));
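The NaN typically comes from saturation: when htheta evaluates to exactly 0 or 1, log() returns -Inf and the resulting 0 * (-Inf) product is NaN. A common remedy is to clip the sigmoid output away from 0 and 1 before taking logs; a Python sketch of the same cost with clipping (the eps value is an arbitrary choice):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def compute_cost(X, y, theta, eps=1e-12):
        h = sigmoid(X @ theta)
        h = np.clip(h, eps, 1.0 - eps)  # keep log(h) and log(1-h) finite
        return np.sum(-y * np.log(h) - (1.0 - y) * np.log(1.0 - h))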

Logistic Regression on factor: Error in eval(family$initialize) : y values must be 0 <= y <= 1

核能气质少年 submitted on 2019-12-26 07:45:06
Question: Not able to fix the error below for the following logistic regression:

    training = (IBM$Serial < 625)
    data = IBM[!training, ]
    dim(data)
    stock.direction <- data$Direction
    training_model = glm(stock.direction ~ data$lag2, data = data, family = binomial)

    ### Error ###
    Error in eval(family$initialize) : y values must be 0 <= y <= 1

A few rows from the data I am using:

    X Date       Open       High       Low        Close      Adj.Close Volume Return lag1 lag2 lag3 Direction Serial
    1 28-11-2012 190.979996 192.039993 189.270004 191.979996 165.107727
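This glm error usually means the response handed to family=binomial is neither a factor nor numeric 0/1, for example a character column holding "Up"/"Down"; converting it (as.factor(data$Direction), or mapping it to 0/1) is the usual fix. For readers working in Python instead, a sketch of the equivalent encoding step before a logistic fit (toy values are made up):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    direction = np.array(["Up", "Down", "Up", "Up", "Down"])  # text labels
    lag2 = np.array([[0.4], [-0.2], [0.1], [0.7], [-0.5]])

    y = (direction == "Up").astype(int)  # encode the response as 0/1
    LogisticRegression().fit(lag2, y)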

Why do my weights get normalized when I perform logistic regression with SGD in Spark?

核能气质少年 submitted on 2019-12-25 09:27:22
Question: I recently asked a question because I was confused about the weights I was receiving for the synthetic dataset I created. The answer I received was that the weights are being normalized. You can look at the details here. I'm wondering why LogisticRegressionWithSGD gives normalized weights, whereas everything is fine with LBFGS in the same Spark implementation. Is it possible that the weights weren't converging after all? The weights I'm getting:

    [0.466521045342, 0.699614292387, 0.932673108363, 0
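One quick sanity check, independent of Spark: if two optimizers find weight vectors that differ only by a positive scale factor, the decision direction is the same and one vector simply looks like a "normalized" copy of the other. A sketch of that check (the LBFGS vector here is hypothetical, chosen for illustration):

    import numpy as np

    w_sgd = np.array([0.466521045342, 0.699614292387, 0.932673108363])
    w_lbfgs = np.array([2.10, 3.15, 4.20])  # made-up comparison vector

    def unit(v):
        return v / np.linalg.norm(v)

    # Cosine similarity near 1.0 means the two solutions agree up to scale.
    print(np.dot(unit(w_sgd), unit(w_lbfgs)))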

Coursera ML - Does the choice of optimization algorithm affect the accuracy of multiclass logistic regression?

≡放荡痞女 submitted on 2019-12-25 08:59:41
Question: I recently completed exercise 3 of Andrew Ng's Machine Learning on Coursera using Python. When initially completing parts 1.4 to 1.4.1 of the exercise, I ran into difficulties ensuring that my trained model had an accuracy that matched the expected 94.9%. Even after debugging and ensuring that my cost and gradient functions were bug-free, and that my predictor code was working correctly, I was still getting only 90.3% accuracy. I was using the conjugate gradient (CG) algorithm in scipy
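For reference, swapping the optimizer in scipy is a one-line change, so accuracy gaps like 90.3% vs. 94.9% usually trace back to convergence settings (iteration budget, tolerance) or regularization strength rather than the algorithm itself. A self-contained sketch on synthetic data (the cost/gradient follow the standard regularized logistic form, not the asker's exact code):

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X = np.hstack([np.ones((100, 1)), rng.normal(size=(100, 2))])  # intercept column
    y = (X[:, 1] + X[:, 2] > 0).astype(float)
    lam = 1.0

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def cost_grad(theta, X, y, lam):
        m = X.shape[0]
        h = np.clip(sigmoid(X @ theta), 1e-12, 1 - 1e-12)
        cost = (-y @ np.log(h) - (1 - y) @ np.log(1 - h)) / m \
               + (lam / (2 * m)) * np.sum(theta[1:] ** 2)  # intercept not penalized
        grad = X.T @ (h - y) / m
        grad[1:] += (lam / m) * theta[1:]
        return cost, grad

    # The same problem handed to three optimizers; compare final costs.
    for method in ("CG", "L-BFGS-B", "TNC"):
        res = minimize(cost_grad, np.zeros(X.shape[1]), args=(X, y, lam),
                       jac=True, method=method, options={"maxiter": 400})
        print(method, res.fun)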

Issues with Logistic Regression for multiclass classification using PySpark

女生的网名这么多〃 submitted on 2019-12-25 08:58:10
Question: I am trying to use logistic regression to classify datasets that have a SparseVector as the feature vector. For the full code base and error log, please check my GitHub repo.

Case 1: I tried using the ML pipeline as follows:

    # imported library from ML
    from pyspark.ml.feature import HashingTF
    from pyspark.ml import Pipeline
    from pyspark.ml.classification import LogisticRegression

    print(type(trainingData))    # for checking only
    print(trainingData.take(2))  # for checking the data type
    lr = LogisticRegression
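For comparison, pyspark.ml's LogisticRegression handles multiclass directly (family="multinomial") as long as the DataFrame exposes the expected label/features columns. A minimal standalone sketch with a toy sparse dataset (not the asker's data):

    from pyspark.sql import SparkSession
    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.linalg import Vectors

    spark = SparkSession.builder.getOrCreate()

    # Three classes, sparse feature vectors of length 4.
    df = spark.createDataFrame([
        (0.0, Vectors.sparse(4, [0], [1.0])),
        (1.0, Vectors.sparse(4, [1, 2], [1.0, 2.0])),
        (2.0, Vectors.sparse(4, [3], [3.0])),
    ], ["label", "features"])

    lr = LogisticRegression(maxIter=10, family="multinomial")
    model = lr.fit(df)
    model.transform(df).select("label", "prediction").show()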

MLE error in R: non-finite finite-difference value / value in 'vmmin' is not finite

醉酒当歌 submitted on 2019-12-25 07:59:16
Question: I am working on a loss aversion model in R (beginner) and want to estimate some parameters from a dataset with 3 columns: loss and gain values (both continuous) and a column with decisions coded as 0 or 1 (binary). dropbox.com/s/fpw3obrqcx8ld1q/GrandAverage.RData?dl=0

The part of the code I have to use for this is given below:

    set <- GrandAverage[, 5:7];
    Beh.Parameters <- function (lambda, alpha, temp) {
        u = 0.5 * set$Gain^alpha + 0.5 * lambda * set$Loss^alpha
        GambleProbability <- 1 /
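The non-finite finite-difference error generally means some candidate parameter vector makes the likelihood NaN or Inf, and this utility is a likely culprit: if Loss is stored as a negative number, Loss^alpha is NaN for fractional alpha. A numpy sketch of a safer formulation (toy data; the negative-loss sign convention and the bounds are assumptions, not the asker's setup):

    import numpy as np
    from scipy.optimize import minimize

    gain = np.array([10.0, 20.0, 15.0])
    loss = np.array([-8.0, -12.0, -5.0])  # losses assumed stored as negatives
    choice = np.array([1, 0, 1])          # 1 = gamble accepted

    def neg_log_lik(params, eps=1e-9):
        lam, alpha, temp = params
        # (-loss)**alpha stays finite for fractional alpha; loss**alpha would
        # be NaN for a negative base, which breaks finite-difference gradients.
        u = 0.5 * gain**alpha - 0.5 * lam * (-loss)**alpha
        p = np.clip(1.0 / (1.0 + np.exp(-temp * u)), eps, 1 - eps)
        return -np.sum(choice * np.log(p) + (1 - choice) * np.log(1 - p))

    res = minimize(neg_log_lik, x0=[1.5, 0.8, 1.0], method="L-BFGS-B",
                   bounds=[(0.1, 5.0), (0.1, 1.5), (0.01, 10.0)])
    print(res.x)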