auc

roc_auc_score mismatch between y_test and y_score

Question: I'm trying to calculate the following: auc = roc_auc_score(gt, pr, multi_class="ovr"), where gt is a list of length 3470208 containing integer values between 0 and 41, and pr is a list of the same length whose elements are lists of length 42, with probabilities in each position that …

Total answers: 3
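A common cause of the mismatch described above is that `y_score` does not have one column per class, its rows do not sum to 1, or some of the 42 classes never appear in `y_true`. The following is a minimal sketch (with made-up data at a smaller scale than the 3470208 x 42 case) of the shape and normalization that `multi_class="ovr"` expects:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_classes = 1000, 42  # stand-in for the 3470208 x 42 case

# Integer labels 0..41; prepend one of each class so every class is present
# in y_true (roc_auc_score raises a ValueError otherwise).
gt = np.concatenate([
    np.arange(n_classes),
    rng.integers(0, n_classes, size=n_samples - n_classes),
])

# y_score must be (n_samples, n_classes) with each row summing to 1,
# e.g. softmax outputs rather than raw logits.
logits = rng.normal(size=(n_samples, n_classes))
pr = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

auc = roc_auc_score(gt, pr, multi_class="ovr")
print(auc)  # near 0.5 here, since the scores are random
```

With random scores the macro-averaged one-vs-rest AUC sits near chance level; the point is only that the shapes and row sums line up.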

Interpreting AUC, accuracy and f1-score on the unbalanced dataset

Question: I am trying to understand why AUC is a better metric than classification accuracy when the dataset is unbalanced. Suppose a dataset contains 1000 examples of 3 classes, as follows: a = [[1.0, 0, 0]]*950 + [[0, 1.0, 0]]*30 + [[0, …

Total answers: 1
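The intuition behind the question above can be shown with a small sketch (using the 950/30/20 split from the excerpt, with an assumed third-class count of 20): a degenerate classifier that always predicts the majority class scores 95% accuracy, while its ROC AUC stays at chance level, which is exactly why accuracy is misleading on unbalanced data.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# 1000 examples, 3 classes, heavily unbalanced: 950 / 30 / 20.
y_true = np.array([0] * 950 + [1] * 30 + [2] * 20)

# A useless classifier that always predicts class 0 with full confidence.
y_pred = np.zeros_like(y_true)
y_prob = np.tile([1.0, 0.0, 0.0], (len(y_true), 1))  # rows sum to 1

acc = accuracy_score(y_true, y_pred)                     # 0.95
auc = roc_auc_score(y_true, y_prob, multi_class="ovr")   # 0.5 (all ties)
print(acc, auc)
```

Because every sample gets identical scores, each one-vs-rest ROC curve is the diagonal, so the macro-averaged AUC is 0.5 even though accuracy looks excellent.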

ROC curve for Isolation Forest

Question: I am trying to plot the ROC curve to evaluate the accuracy of Isolation Forest on a breast cancer dataset. I calculated the true positive rate (TPR) and false positive rate (FPR) from the confusion matrix. However, I do not understand how the TPR and FPR are in the …

Total answers: 3
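The likely confusion in the question above is that a confusion matrix yields only a single (FPR, TPR) point, whereas a ROC curve needs the threshold swept over a continuous score. A sketch, assuming scikit-learn's built-in breast cancer dataset and `IsolationForest`'s `decision_function` as the anomaly score:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_curve, auc

X, y = load_breast_cancer(return_X_y=True)  # y == 0 is malignant (minority)

iso = IsolationForest(random_state=0).fit(X)
scores = iso.decision_function(X)  # higher = more normal, lower = more anomalous

# Treat malignant samples as the anomalies to detect; negate the score so
# that larger values mean "more anomalous", as roc_curve expects.
fpr, tpr, thresholds = roc_curve(y == 0, -scores)
roc_auc = auc(fpr, tpr)
print(roc_auc)
```

`roc_curve` then produces many (FPR, TPR) pairs, one per threshold, and the confusion-matrix point the questioner computed corresponds to just one of them (the model's default contamination threshold).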