Categories:

Model monitor functions

MODEL_MONITOR_PERFORMANCE_METRIC¶

Gets performance metrics from a model monitor. Each model monitor tracks a single machine learning model.

See also:

Querying monitoring results

Syntax¶

MODEL_MONITOR_PERFORMANCE_METRIC( <model_monitor_name> , <metric_name>
    [ , <granularity> [ , <start_time> [ , <end_time> ] ] ] )

Arguments¶

Required:

MODEL_MONITOR_NAME

Name of the model monitor used to compute the metric.

Valid values:

A string that’s the name of the model monitor. It can be a simple or fully qualified name.

METRIC_NAME

Name of the performance metric.

Valid values if the model monitor is attached to a regression model:

  • 'RMSE'

  • 'MAE'

  • 'MAPE'

  • 'MSE'

Valid values if the model monitor is attached to a binary classification model:

  • 'ROC_AUC'

  • 'CLASSIFICATION_ACCURACY'

  • 'PRECISION'

  • 'RECALL'

  • 'F1_SCORE'

Valid values if the model monitor is attached to a multi-class classification model:

  • 'CLASSIFICATION_ACCURACY'

  • 'MACRO_AVERAGE_PRECISION'

  • 'MACRO_AVERAGE_RECALL'

  • 'MICRO_AVERAGE_PRECISION'

  • 'MICRO_AVERAGE_RECALL'

Optional:

GRANULARITY

Granularity of the time range being queried. The default value is '1 DAY'. (See the sketch after the following list of valid values.)

Valid values:

  • '<num> DAY'

  • '<num> WEEK'

  • '<num> MONTH'

  • '<num> QUARTER'

  • '<num> YEAR'

  • 'ALL'

  • NULL
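
For example, a call that buckets results by week instead of by day might look like the following sketch. The monitor name MY_MONITOR is a hypothetical placeholder, not a name from this reference:

SELECT * FROM TABLE(MODEL_MONITOR_PERFORMANCE_METRIC(
    'MY_MONITOR', 'RMSE', '1 WEEK'));  -- MY_MONITOR is a hypothetical monitor name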

START_TIME

Start of the time range used to compute the metric. The default value is 60 days before the current time, and is calculated each time you call the function.

Valid values:

A timestamp expression or NULL.

END_TIME

End of the time range used to compute the metric. The default value is the current time, and is calculated each time you call the function.

Valid values:

A timestamp expression or NULL.
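
Because the granularity and both timestamp arguments are optional, a minimal call can rely entirely on the defaults (1-day granularity over the last 60 days). The following sketch assumes a hypothetical monitor named MY_MONITOR:

SELECT * FROM TABLE(MODEL_MONITOR_PERFORMANCE_METRIC('MY_MONITOR', 'RMSE'));  -- defaults: '1 DAY' over the last 60 days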

Returns¶

Column | Description | Example values
EVENT_TIMESTAMP | Timestamp at the start of the time range. | 2024-01-01 00:00:00.000
METRIC_VALUE | Value of the metric within the specified time range. | 0.5
COUNT_USED | Number of records used to compute the metric. | 100
COUNT_UNUSED | Number of records excluded from the metric computation. | 10
METRIC_NAME | Name of the metric that has been computed. | ROC_AUC
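
As a sketch of how the returned columns might be consumed (again assuming the hypothetical MY_MONITOR), the following query selects only the timestamp and metric value and orders the daily series chronologically:

SELECT event_timestamp, metric_value
    FROM TABLE(MODEL_MONITOR_PERFORMANCE_METRIC('MY_MONITOR', 'RMSE'))  -- hypothetical monitor name
    ORDER BY event_timestamp;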

Usage Notes¶

If the value you specify for model_monitor_name is case-sensitive or contains special characters or spaces, enclose it in double quotes. You must enclose the double quotes within single quotes. For example, '"<example_model_monitor_name>"'.

If you don’t use double quotes, the model_monitor_name is treated as case-insensitive.
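
For example, the following call references a monitor created with the hypothetical case-sensitive name My Monitor:

SELECT * FROM TABLE(MODEL_MONITOR_PERFORMANCE_METRIC('"My Monitor"', 'RMSE'));  -- "My Monitor" is a hypothetical case-sensitive name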

General requirements¶

  • The model monitor must be associated with a model that supports the requested metric type.

  • The model monitor must contain the necessary data for each metric type, as described below.

Metric requirements¶

The following are the required columns to get regression metrics:

  • RMSE: Requires the prediction_score and actual_score columns

  • MAE: Requires the prediction_score and actual_score columns

  • MAPE: Requires the prediction_score and actual_score columns

  • MSE: Requires the prediction_score and actual_score columns

The following are the required columns to get binary classification metrics:

  • ROC_AUC: Requires the prediction_score and actual_class columns

  • CLASSIFICATION_ACCURACY: Requires the prediction_class and actual_class columns

  • PRECISION: Requires the prediction_class and actual_class columns

  • RECALL: Requires the prediction_class and actual_class columns

  • F1_SCORE: Requires the prediction_class and actual_class columns

The following are the required columns to get multiclass classification metrics:

  • CLASSIFICATION_ACCURACY: Requires the prediction_class and actual_class columns

  • MACRO_AVERAGE_PRECISION: Requires the prediction_class and actual_class columns

  • MACRO_AVERAGE_RECALL: Requires the prediction_class and actual_class columns

  • MICRO_AVERAGE_PRECISION: Requires the prediction_class and actual_class columns

  • MICRO_AVERAGE_RECALL: Requires the prediction_class and actual_class columns

Note

For binary classification, you can use micro-average precision and recall metrics similarly to how you use classification accuracy in multi-class classification.
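
For example, the following sketch requests a micro-average metric from a hypothetical multi-class monitor named MY_MULTICLASS_MONITOR:

SELECT * FROM TABLE(MODEL_MONITOR_PERFORMANCE_METRIC(
    'MY_MULTICLASS_MONITOR', 'MICRO_AVERAGE_PRECISION'));  -- hypothetical multi-class monitor name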

Error cases¶

You might encounter errors in the following cases:

  • Request an accuracy metric without setting the corresponding prediction or actual column.

  • Fail to provide data in the actual_score or actual_class column.

Examples¶

The following example gets the root mean square error (RMSE) for a single one-day period (January 1, 2024) from the model monitor:

SELECT * FROM TABLE(MODEL_MONITOR_PERFORMANCE_METRIC(
    'MY_MONITOR', 'RMSE', '1 DAY',
    TO_TIMESTAMP_TZ('2024-01-01'), TO_TIMESTAMP_TZ('2024-01-02')));

The following example gets daily RMSE values over the last 30 days from the model monitor:

SELECT * FROM TABLE(MODEL_MONITOR_PERFORMANCE_METRIC(
    'MY_MONITOR', 'RMSE', '1 DAY',
    DATEADD('DAY', -30, CURRENT_DATE()), CURRENT_DATE()));
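
The following sketch computes a single RMSE value over the entire default time range by passing 'ALL' as the granularity and NULL for both timestamps (again assuming the hypothetical MY_MONITOR):

SELECT * FROM TABLE(MODEL_MONITOR_PERFORMANCE_METRIC(
    'MY_MONITOR', 'RMSE', 'ALL', NULL, NULL));  -- NULL timestamps fall back to the defaults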