<model_name>!SHOW_THRESHOLD_METRICS
Returns raw counts and metrics at a specific threshold for each class, for models where evaluation was enabled at instantiation. This method takes no arguments. For definitions of the individual metrics, see the Metrics section of `show_threshold_metrics`.
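A minimal invocation sketch, assuming a classification model named `my_model` (a hypothetical name) that was created with evaluation enabled:

```sql
-- my_model is a placeholder; evaluation must have been enabled
-- when the model was instantiated for this method to return metrics.
CALL my_model!SHOW_THRESHOLD_METRICS();
```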
Output
| Column | Type | Description |
|---|---|---|
| dataset_type | VARCHAR | The name of the dataset used for metrics calculation, currently EVAL. |
| class | VARCHAR | The predicted class. Each class has its own set of metrics, which are provided in multiple rows. |
| threshold | FLOAT | Threshold used to generate predictions. |
| precision | FLOAT | Precision for the given class. The ratio of true positives to the total predicted positives. |
| recall | FLOAT | Recall for the given class. Also called “sensitivity.” The ratio of true positives to the total actual positives. |
| f1 | FLOAT | F1 score for the given class. The harmonic mean of precision and recall. |
| tpr | FLOAT | True positive rate for the given class. Equivalent to recall. |
| fpr | FLOAT | False positive rate for the given class. The ratio of false positives to the total actual negatives. |
| tp | INTEGER | Total count of true positives in the given class. |
| fp | INTEGER | Total count of false positives in the given class. |
| tn | INTEGER | Total count of true negatives in the given class. |
| fn | INTEGER | Total count of false negatives in the given class. |
| accuracy | FLOAT | The accuracy (ratio of correct predictions, both positive and negative, to the total number of predictions) for the given class. |
| support | INTEGER | The support (true positives plus false negatives) for the given class. |
| logs | VARIANT | Contains error or warning messages. |
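Because the method returns a result set, its output can be narrowed with Snowflake's `RESULT_SCAN` pattern. A sketch, assuming a model named `my_model` and a class labeled `'1'` (both hypothetical):

```sql
CALL my_model!SHOW_THRESHOLD_METRICS();

-- Re-query the previous result, keeping rows for a single class;
-- column names match the output table above.
SELECT *
  FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()))
 WHERE class = '1'
 ORDER BY threshold;
```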