confusion-matrix

How to add correct labels for Seaborn Confusion Matrix

假如想象 submitted on 2021-01-29 05:07:20
Question: I have plotted my data as a confusion matrix using seaborn, but I ran into a problem: both axes only show the numbers 0 to 11, because I have 12 different labels. My code looks as follows: cf_matrix = confusion_matrix(y_test, y_pred) fig, ax = plt.subplots(figsize=(15,10)) sns.heatmap(cf_matrix, linewidths=1, annot=True, ax=ax, fmt='g') Here you can see my confusion matrix: I am getting the confusion matrix as I should. The only problem is the names of the …
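One way to get class names on both axes (a minimal sketch, assuming a hypothetical class_names list of 12 label strings, which the excerpt above does not show) is to pass them to sns.heatmap via xticklabels and yticklabels:

    import matplotlib.pyplot as plt
    import seaborn as sns
    from sklearn.metrics import confusion_matrix

    # Hypothetical names for the 12 classes; replace with the real label strings.
    class_names = [f"class_{i}" for i in range(12)]

    cf_matrix = confusion_matrix(y_test, y_pred)
    fig, ax = plt.subplots(figsize=(15, 10))
    sns.heatmap(cf_matrix, linewidths=1, annot=True, fmt='g',
                xticklabels=class_names, yticklabels=class_names, ax=ax)
    ax.set_xlabel("Predicted label")
    ax.set_ylabel("True label")
    plt.show()

If y_test and y_pred already contain those label strings, passing labels=class_names to confusion_matrix as well pins the row/column order to match the tick labels.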

Error in table(data, reference, dnn = dnn, …) : all arguments must have the same length when running confusionMatrix with caret, in R

吃可爱长大的小学妹 submitted on 2021-01-28 08:46:50
Question: I have an issue running confusionMatrix. Here is what I do: rf <- caret::train(tested ~., data = training_data, method = "rf", trControl = ctrlInside, metric = "ROC", na.action = na.exclude) rf After I get my model, this is the next step I take: evalResult.rf <- predict(rf, testing_data, type = "prob") predict_rf <- as.factor(ifelse(evalResult.rf < 0.5, "positive", "negative")) Then I run my confusion matrix: cm_rf_forest <- confusionMatrix(predict_rf, testing_data$tested, …
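caret's confusionMatrix() requires the prediction and reference vectors to have the same length (and matching factor levels); a common cause of this particular error is that predict(..., type = "prob") returns one probability column per class, so thresholding the whole object produces more values than there are rows in testing_data$tested. The same length requirement is easy to see in Python; below is a hedged scikit-learn sketch with made-up y_test and probability values (not the asker's data), thresholding a single probability column so predictions and references line up:

    import numpy as np
    from sklearn.metrics import confusion_matrix

    # Hypothetical test labels and P(positive) values, one per test row.
    y_test = np.array(["positive", "negative", "negative", "positive"])
    prob_positive = np.array([0.9, 0.2, 0.4, 0.7])

    # Threshold only the positive-class column: one prediction per reference label.
    y_pred = np.where(prob_positive >= 0.5, "positive", "negative")
    assert len(y_pred) == len(y_test)  # the same-length requirement the R error refers to

    print(confusion_matrix(y_test, y_pred, labels=["positive", "negative"]))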

confusionMatrix for logistic regression in R

我怕爱的太早我们不能终老 submitted on 2021-01-28 07:52:52
Question: I want to calculate two confusion matrices for my logistic regression, one on my training data and one on my testing data: logitMod <- glm(LoanStatus_B ~ ., data=train, family=binomial(link="logit")) I set the threshold of predicted probability at 0.5: confusionMatrix(table(predict(logitMod, type="response") >= 0.5, train$LoanStatus_B == 1)) This works well for my training set. However, when I use the test set: confusionMatrix(table(predict(logitMod, type="response") >= 0.5, test …
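The excerpt cuts off before the error, but a likely pitfall in this pattern is that predict(logitMod, type = "response") with no newdata argument scores the training rows, so its length will not match the test labels. As a hedged illustration of the intended train-versus-test workflow, here is a Python/scikit-learn sketch using synthetic stand-in data rather than the asker's LoanStatus_B column:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in data; the question uses a loan-status data frame instead.
    X, y = make_classification(n_samples=500, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Threshold predicted probabilities at 0.5, scoring each set with its own features.
    train_pred = model.predict_proba(X_train)[:, 1] >= 0.5
    test_pred = model.predict_proba(X_test)[:, 1] >= 0.5

    print(confusion_matrix(y_train, train_pred))
    print(confusion_matrix(y_test, test_pred))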

What's the correct way to compute a confusion matrix for object detection?

試著忘記壹切 submitted on 2021-01-17 09:19:27
Question: I am trying to compute a confusion matrix for my object detection model, but I seem to be stumbling over some pitfalls. My current approach is to compare each predicted box with each ground-truth box. If they have an IoU above some threshold, I insert the prediction into the confusion matrix. After the insertion I delete the element from the predictions list and move on to the next element. Because I also want the misclassified proposals to be inserted into the confusion matrix, I treat the elements …
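One common scheme (a sketch of the general idea, not the asker's exact code) is an (N+1) x (N+1) matrix for N classes, where the extra row collects unmatched predictions (false positives) and the extra column collects unmatched ground-truth boxes (false negatives); matched pairs land in the cell indexed by the ground-truth and predicted class, so misclassifications show up off the diagonal. The helper names below (iou, detection_confusion_matrix) are hypothetical:

    import numpy as np

    def iou(box_a, box_b):
        # Intersection over union of two [x1, y1, x2, y2] boxes.
        x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def detection_confusion_matrix(gt, preds, num_classes, iou_thr=0.5):
        # gt and preds are lists of (box, class_id) tuples for one image.
        # Row = ground-truth class, column = predicted class; the last index is "background".
        cm = np.zeros((num_classes + 1, num_classes + 1), dtype=int)
        unmatched = list(preds)
        for gt_box, gt_cls in gt:
            best_iou, best_idx = 0.0, None
            for i, (p_box, _) in enumerate(unmatched):
                overlap = iou(gt_box, p_box)
                if overlap >= iou_thr and overlap > best_iou:
                    best_iou, best_idx = overlap, i
            if best_idx is not None:
                _, p_cls = unmatched.pop(best_idx)   # matched; may still be misclassified
                cm[gt_cls, p_cls] += 1
            else:
                cm[gt_cls, num_classes] += 1         # missed ground truth (false negative)
        for _, p_cls in unmatched:
            cm[num_classes, p_cls] += 1              # spurious prediction (false positive)
        return cm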

Why is my confusion matrix returning only one number?

。_饼干妹妹 submitted on 2020-12-13 06:31:57
Question: I'm doing a binary classification. Whenever my predictions exactly match the ground truth, sklearn.metrics.confusion_matrix returns a single value. Isn't that a problem? from sklearn.metrics import confusion_matrix print(confusion_matrix([True, True], [True, True])) # [[2]] I would expect something like: [[2 0] [0 0]] Answer 1: If you want the desired output, you should pass labels=[True, False]: from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_true= …
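The answer is truncated above; a complete, runnable version of the suggested fix (a small sketch using the toy data from the question) would look like:

    from sklearn.metrics import confusion_matrix

    # Passing labels pins the matrix to a fixed 2x2 shape,
    # even when only one class appears in y_true and y_pred.
    cm = confusion_matrix(y_true=[True, True], y_pred=[True, True], labels=[True, False])
    print(cm)
    # [[2 0]
    #  [0 0]]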

Scikit-learn: how to print labels for confusion matrix?

99封情书 submitted on 2020-12-01 09:21:37
Question: So I'm using scikit-learn to classify some data. I have 13 different class values/categories to classify the data into. I have been able to use cross-validation and print the confusion matrix. However, it only shows the TP and FP etc. without the class labels, so I don't know which class is which. Below is my code and my output: def classify_data(df, feature_cols, file): nbr_folds = 5 RANDOM_STATE = 0 attributes = df.loc[:, feature_cols] # Also known as x class_label = df['task'] # Class …
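The code above is cut off, but as a general illustration (a sketch, not the asker's classify_data function, and with hypothetical label names standing in for the 13 task values), one way to print a scikit-learn confusion matrix with its class labels is to wrap it in a pandas DataFrame whose index and columns are the label names:

    import pandas as pd
    from sklearn.metrics import confusion_matrix

    # Hypothetical label names and toy predictions; the question has 13 classes in df['task'].
    labels = ["task_a", "task_b", "task_c"]
    y_true = ["task_a", "task_b", "task_c", "task_a"]
    y_pred = ["task_a", "task_c", "task_c", "task_a"]

    cm = confusion_matrix(y_true, y_pred, labels=labels)
    # Rows are true classes, columns are predicted classes, in the order given by `labels`.
    print(pd.DataFrame(cm, index=labels, columns=labels))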