Interface | Description |
---|---|
ClassificationScore | This interface defines the contract for evaluating or "scoring" the results of a classifier on a classification problem. |
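To make the scoring contract concrete, here is a minimal, hypothetical sketch of what such an interface could look like. The name `Score` and the methods `addResult`, `getScore`, and `lowerIsBetter` are illustrative assumptions, not the library's actual API.

```java
// A hypothetical minimal scoring contract (illustrative only; the real
// interface's method names and signatures may differ).
public interface Score {
    /** Records one prediction: the predicted class, the true class, and an observation weight. */
    void addResult(int predictedClass, int trueClass, double weight);

    /** @return the score accumulated over all results added so far */
    double getScore();

    /** @return true if smaller values of this score indicate better performance (e.g. LogLoss) */
    boolean lowerIsBetter();
}
```

Implementations such as Accuracy or LogLoss accumulate per-prediction statistics through `addResult` and report the final value through `getScore`.
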
Class | Description |
---|---|
Accuracy | Evaluates a classifier based on its accuracy in predicting the correct class. |
AUC | Computes the Area Under the ROC Curve as an evaluation of classification scores. |
F1Score | Evaluates a classifier based on the F1 score, the harmonic mean of precision and recall. |
FbetaScore | The Fβ score is the generalization of F1Score, where β indicates the level of preference for precision over recall. |
Kappa | Evaluates a classifier based on the Kappa statistic. |
LogLoss | Computes the multi-class Log Loss, $-\frac{1}{N} \sum_{i=1}^{N} \log(p_{i,y})$, where $N$ is the number of data points and $p_{i,y}$ is the estimated probability of the true class label. |
MatthewsCorrelationCoefficient | Evaluates a classifier based on the Matthews Correlation Coefficient. |
Precision | Evaluates a classifier based on the Precision, where the class of index 0 is considered the positive class. |
Recall | Evaluates a classifier based on the Recall rate, where the class of index 0 is considered the positive class. |
SimpleBinaryClassMetric | This is a base class for scores that can be computed from simple counts of the true positives, true negatives, false positives, and false negatives. |
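As a concrete illustration of the count-based metrics above, the sketch below computes Accuracy, Precision, Recall, F1, and Fβ from the four counts that SimpleBinaryClassMetric describes, treating class index 0 as the positive class as the table specifies. The class `BinaryCounts` and its methods are hypothetical names for illustration, not the library's API.

```java
/** Illustrative computation of count-based classification scores
 *  (class and method names are hypothetical, not the library's API). */
public class BinaryCounts {
    private double tp, tn, fp, fn; // weighted counts of each outcome

    /** Tallies one result; class index 0 is treated as the positive class. */
    public void addResult(int predicted, int truth, double weight) {
        if (truth == 0) { if (predicted == 0) tp += weight; else fn += weight; }
        else            { if (predicted == 0) fp += weight; else tn += weight; }
    }

    public double accuracy()  { return (tp + tn) / (tp + tn + fp + fn); }
    public double precision() { return tp / (tp + fp); }
    public double recall()    { return tp / (tp + fn); }

    /** F_beta = (1 + beta^2) * P * R / (beta^2 * P + R); beta = 1 gives F1. */
    public double fBeta(double beta) {
        double p = precision(), r = recall(), b2 = beta * beta;
        return (1 + b2) * p * r / (b2 * p + r);
    }

    public double f1() { return fBeta(1.0); }

    public static void main(String[] args) {
        BinaryCounts m = new BinaryCounts();
        // Toy data: (predicted, truth) pairs, each with unit weight.
        int[][] results = { {0, 0}, {0, 1}, {1, 0}, {0, 0}, {1, 1} };
        for (int[] r : results)
            m.addResult(r[0], r[1], 1.0);
        System.out.printf("acc=%.3f P=%.3f R=%.3f F1=%.3f F2=%.3f%n",
                m.accuracy(), m.precision(), m.recall(), m.f1(), m.fBeta(2.0));
    }
}
```

On the toy data this prints acc=0.600 with P, R, and F1 all 0.667, since the counts come out to tp=2, fp=1, fn=1, tn=1.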