<model_name>!SHOW_THRESHOLD_METRICS
Returns raw counts and metrics at a specific threshold for each class, for models where evaluation was enabled at instantiation. This method takes no arguments. See Metrics for more information about the values returned.
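A minimal call sketch, assuming the method is invoked with CALL in the same way as other model methods; `my_classifier` is a hypothetical model name:

```sql
-- Hypothetical model name; the method itself takes no arguments.
CALL my_classifier!SHOW_THRESHOLD_METRICS();
```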
Output
| Column | Type | Description |
|---|---|---|
| DATASET_TYPE | VARCHAR | The name of the dataset used for metrics calculation, currently EVAL. |
| CLASS | VARCHAR | The predicted class. Each class has its own set of metrics, which are provided in multiple rows. |
| THRESHOLD | FLOAT | The threshold used to generate predictions. |
| PRECISION | FLOAT | Precision for the given class: the ratio of true positives to the total predicted positives. |
| RECALL | FLOAT | Recall for the given class, also called "sensitivity": the ratio of true positives to the total actual positives. |
| F1 | FLOAT | F1 score for the given class: the harmonic mean of precision and recall. |
| TPR | FLOAT | True positive rate for the given class; equivalent to recall. |
| FPR | FLOAT | False positive rate for the given class: the ratio of false positives to the total actual negatives. |
| TRUE_POSITIVES | INTEGER | Total count of true positives in the given class. |
| FALSE_POSITIVES | INTEGER | Total count of false positives in the given class. |
| TRUE_NEGATIVES | INTEGER | Total count of true negatives in the given class. |
| FALSE_NEGATIVES | INTEGER | Total count of false negatives in the given class. |
| ACCURACY | FLOAT | Accuracy for the given class: the ratio of correct predictions, both positive and negative, to the total number of predictions. |
| SUPPORT | INTEGER | Support for the given class: true positives plus false negatives. |
| LOGS | VARCHAR | Contains error or warning messages. |
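As a sketch of consuming this output, assuming the platform supports reading a previous statement's result set via RESULT_SCAN (as Snowflake-style SQL does); the model name `my_classifier` and class value `'churned'` are hypothetical:

```sql
-- Hypothetical model and class; column names follow the Output table above.
CALL my_classifier!SHOW_THRESHOLD_METRICS();

-- Read the metrics rows for a single class from the previous statement's result.
SELECT *
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()))
WHERE class = 'churned';
```

Because each class has its own rows, filtering on the class column isolates one class's counts and derived metrics at the reported threshold.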