Easy way of computing precision, recall and F1-score in R

Asked 2021-01-31 04:45

I am using an rpart classifier in R. I want to test the trained classifier on test data; that part is fine, since I can use the predict.rpart function. But how can I easily compute precision, recall and the F1-score for the predictions?

7 Answers
  • 2021-01-31 05:19

    Just to update this, as I came across this thread now: the confusionMatrix function in caret computes all of these things for you automatically.

    cm <- confusionMatrix(prediction, reference = test_set$label)
    
    # extract F1 score for all classes
    cm[["byClass"]][ , "F1"] #for multiclass classification problems
    

    You can substitute any of the following for "F1" to extract the relevant values as well:

    "Sensitivity", "Specificity", "Pos Pred Value", "Neg Pred Value", "Precision", "Recall", "F1", "Prevalence", "Detection", "Rate", "Detection Prevalence", "Balanced Accuracy"

    I think this behaves slightly differently when you're only doing a binary classification problem, but in both cases all of these values are computed for you when you look inside the confusionMatrix object, under $byClass.
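
    For instance, a minimal multiclass sketch (an rpart model on iris is used purely for illustration, assuming the caret and rpart packages are installed):

    library(caret)
    library(rpart)
    
    # toy multiclass model and its predictions
    fit  <- rpart(Species ~ ., data = iris)
    pred <- predict(fit, iris, type = "class")
    
    cm <- confusionMatrix(pred, reference = iris$Species)
    
    # with more than two classes, byClass is a matrix with one row per class
    cm[["byClass"]][ , c("Precision", "Recall", "F1")]
    
    # macro-averaged F1 across all classes
    mean(cm[["byClass"]][ , "F1"])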

  • 2021-01-31 05:20

    confusionMatrix() from the caret package can be used along with the optional positive argument, which specifies the factor level to treat as the positive class.

    confusionMatrix(predicted, Funded, mode = "prec_recall", positive="1")
    

    This call also reports additional values such as the F1 score, accuracy and kappa.
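
    A self-contained sketch of such a call (the predicted/Funded vectors below are invented for illustration only):

    library(caret)
    
    # toy binary factors; "1" is treated as the positive class
    predicted <- factor(c(1, 0, 1, 1, 0, 1), levels = c("0", "1"))
    Funded    <- factor(c(1, 0, 0, 1, 0, 1), levels = c("0", "1"))
    
    confusionMatrix(predicted, Funded, mode = "prec_recall", positive = "1")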

  • 2021-01-31 05:24

    You can also use confusionMatrix() from the caret package. The output includes, among other things, Sensitivity (also known as recall) and Pos Pred Value (also known as precision). F1 can then be easily computed, as stated above, as: F1 <- (2 * precision * recall) / (precision + recall)
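
    For example, with a binary confusionMatrix object (cm is an assumed name for its result):

    precision <- cm$byClass["Pos Pred Value"]
    recall    <- cm$byClass["Sensitivity"]
    F1 <- (2 * precision * recall) / (precision + recall)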

  • 2021-01-31 05:27

    I noticed the comment about the F1 score being intended for binary classes, and I suspect that it usually is. But a while ago I wrote the function below for classification into several groups denoted by whole numbers. It may be of use to you...

    calcF1Scores <- function(act, prd) {
      # treats the vectors as class labels
      # act and prd must be whole numbers
      df <- data.frame(act = act, prd = prd)
      scores <- list()
      for (i in seq(min(act), max(act))) {
        tp <- nrow(df[df$prd == i & df$act == i, ])  # true positives for class i
        fp <- nrow(df[df$prd == i & df$act != i, ])  # false positives for class i
        fn <- nrow(df[df$prd != i & df$act == i, ])  # false negatives for class i
        f1 <- (2 * tp) / (2 * tp + fp + fn)          # per-class F1
        scores[[i]] <- f1
      }
      print(scores)
      return(scores)
    }
    
    print(mean(unlist(calcF1Scores(c(1,1,3,4,5),c(1,2,3,4,5)))))
    print(mean(unlist(calcF1Scores(c(1,2,3,4,5),c(1,2,3,4,5)))))
    
  • 2021-01-31 05:29

    The ROCR library calculates all these and more (see also http://rocr.bioinf.mpi-sb.mpg.de):

    library(ROCR)
    ...
    
    y <- ...           # logical array of positive / negative cases
    predictions <- ... # array of predictions
    
    pred <- prediction(predictions, y)
    
    # Recall-Precision curve
    RP.perf <- performance(pred, "prec", "rec")
    plot(RP.perf)
    
    # ROC curve
    ROC.perf <- performance(pred, "tpr", "fpr")
    plot(ROC.perf)
    
    # ROC area under the curve
    auc.tmp <- performance(pred, "auc")
    auc <- as.numeric(auc.tmp@y.values)
    
    ...
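
    For completeness, here is a runnable version of the sketch above with simulated labels and scores (all values are made up for illustration):

    library(ROCR)
    
    set.seed(1)
    y <- sample(c(TRUE, FALSE), 200, replace = TRUE)                      # simulated labels
    predictions <- ifelse(y, rnorm(200, 0.7, 0.2), rnorm(200, 0.3, 0.2))  # simulated scores
    
    pred <- prediction(predictions, y)
    
    plot(performance(pred, "prec", "rec"))         # recall-precision curve
    plot(performance(pred, "tpr", "fpr"))          # ROC curve
    as.numeric(performance(pred, "auc")@y.values)  # area under the ROC curve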
    
  • 2021-01-31 05:34

    We can get the F1 value directly from caret's confusionMatrix function:

    result <- confusionMatrix(Prediction, Label)
    
    # View confusion matrix overall
    result 
    
    # F1 value
    result$byClass[7] 
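    
    # Equivalent, and less fragile than the positional index: for two-class
    # problems byClass is a named vector, so it can be indexed by name
    result$byClass["F1"]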
    