snowflake.ml.modeling.metrics.precision_recall_fscore_support

snowflake.ml.modeling.metrics.precision_recall_fscore_support(*, df: DataFrame, y_true_col_names: Union[str, List[str]], y_pred_col_names: Union[str, List[str]], beta: float = 1.0, labels: Optional[Union[_SupportsArray[dtype], _NestedSequence[_SupportsArray[dtype]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]] = None, pos_label: Union[str, int] = 1, average: Optional[str] = None, warn_for: Union[Tuple[str, ...], Set[str]] = ('precision', 'recall', 'f-score'), sample_weight_col_name: Optional[str] = None, zero_division: Union[str, int] = 'warn') → Union[Tuple[float, float, float, None], Tuple[ndarray[Any, dtype[float64]], ndarray[Any, dtype[float64]], ndarray[Any, dtype[float64]], ndarray[Any, dtype[float64]]]]

Compute precision, recall, F-measure and support for each class.

The precision is the ratio tp / (tp + fp) where tp is the number of true positives and fp the number of false positives. The precision is intuitively the ability of the classifier not to label a negative sample as positive.

The recall is the ratio tp / (tp + fn) where tp is the number of true positives and fn the number of false negatives. The recall is intuitively the ability of the classifier to find all the positive samples.

The F-beta score can be interpreted as a weighted harmonic mean of the precision and recall, where an F-beta score reaches its best value at 1 and worst score at 0.

The F-beta score weights recall more than precision by a factor of beta. beta == 1.0 means recall and precision are equally important.
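Concretely, writing P for precision and R for recall, the score is

    F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)

so beta < 1 weights precision more heavily and beta > 1 weights recall more heavily.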

The support is the number of occurrences of each class in the y true column(s).

If pos_label is None and the targets are binary, this function returns the average precision, recall and F-measure if average is one of 'micro', 'macro', 'weighted' or 'samples'.
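A minimal usage sketch (hedged: it assumes an active Snowpark Session is bound to the name session, and the data and the column names Y_TRUE / Y_PRED are illustrative):

>>> from snowflake.ml.modeling.metrics import precision_recall_fscore_support
>>> df = session.create_dataframe(
...     [[1, 1], [1, 0], [0, 1], [0, 0], [1, 1]],
...     schema=["Y_TRUE", "Y_PRED"],  # illustrative column names
... )
>>> # average=None (the default): per-class arrays are returned.
>>> precision, recall, fbeta, support = precision_recall_fscore_support(
...     df=df, y_true_col_names="Y_TRUE", y_pred_col_names="Y_PRED"
... )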

Args:
df: snowpark.DataFrame

Input dataframe.

y_true_col_names: string or list of strings

Column name(s) representing actual values.

y_pred_col_names: string or list of strings

Column name(s) representing predicted values.

beta: float, default=1.0

The strength of recall versus precision in the F-score.

labels: list of labels, default=None

The set of labels to include when average != 'binary', and their order if average is None. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in the y true and y pred columns are used in sorted order.
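For instance, a hedged sketch of scoring only a subset of classes (the multiclass data below are illustrative, and df_multi is a hypothetical DataFrame built with the session from the sketch above):

>>> df_multi = session.create_dataframe(
...     [[0, 0], [1, 2], [2, 2], [0, 1], [2, 2], [0, 0]],
...     schema=["Y_TRUE", "Y_PRED"],
... )
>>> # Macro-average over classes 1 and 2 only, ignoring majority class 0.
>>> p, r, f, s = precision_recall_fscore_support(
...     df=df_multi, y_true_col_names="Y_TRUE", y_pred_col_names="Y_PRED",
...     labels=[1, 2], average="macro",
... )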

pos_label: string or integer, default=1

The class to report if average='binary' and the data is binary. If the data are multiclass or multilabel, this will be ignored; setting labels=[pos_label] and average != 'binary' will report scores for that label only.

average: {'binary', 'micro', 'macro', 'samples', 'weighted'}, default=None

If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data (see the sketch after this list):

'binary'

Only report results for the class specified by pos_label. This is applicable only if targets (y true, y pred) are binary.

'micro'

Calculate metrics globally by counting the total true positives, false negatives and false positives.

'macro'

Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.

'weighted'

Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters 'macro' to account for label imbalance; it can result in an F-score that is not between precision and recall.

'samples'

Calculate metrics for each instance, and find their average (only meaningful for multilabel classification where this differs from accuracy_score()).
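As a sketch of the difference between these modes (reusing the illustrative df from the sketch above): average=None keeps one score per class, while 'micro' pools the true positives, false negatives and false positives across classes before computing the ratios:

>>> p_each, r_each, f_each, s_each = precision_recall_fscore_support(
...     df=df, y_true_col_names="Y_TRUE", y_pred_col_names="Y_PRED"
... )
>>> # 'micro': single floats computed from pooled counts; support is None.
>>> p_micro, r_micro, f_micro, s_micro = precision_recall_fscore_support(
...     df=df, y_true_col_names="Y_TRUE", y_pred_col_names="Y_PRED",
...     average="micro",
... )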

warn_for: tuple or set containing "precision", "recall", or "f-score"

Determines which warnings are raised when this function is used to return only one of its metrics.

sample_weight_col_name: string, default=None

Column name representing sample weights.

zero_division: "warn", 0 or 1, default="warn"

Sets the value to return when there is a zero division:
  • recall - when there are no positive labels

  • precision - when there are no positive predictions

  • f-score - both

If set to "warn", this acts as 0, but warnings are also raised.
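A hedged sketch of when this matters (the data are illustrative): with no predicted positives, precision is 0/0, and zero_division picks the value reported in its place:

>>> df_nopos = session.create_dataframe(
...     [[1, 0], [0, 0], [1, 0]], schema=["Y_TRUE", "Y_PRED"]
... )
>>> # No predicted positives, so precision would divide by zero; with
>>> # zero_division=0 it is reported as 0.0 and no warning is raised.
>>> p, r, f, s = precision_recall_fscore_support(
...     df=df_nopos, y_true_col_names="Y_TRUE", y_pred_col_names="Y_PRED",
...     average="binary", zero_division=0,
... )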

Returns:
Tuple containing the following items:
precision - float (if average is not None) or array of float, shape = [n_unique_labels]

Precision score.

recall - float (if average is not None) or array of float, shape = [n_unique_labels]

Recall score.

fbeta_score - float (if average is not None) or array of float, shape = [n_unique_labels]

F-beta score.

support - None (if average is not None) or array of int, shape = [n_unique_labels]

The number of occurrences of each label in the y true column(s).
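A hedged sketch of the two return shapes, reusing the illustrative df from above:

>>> # average=None: four per-class NumPy arrays.
>>> precision, recall, fbeta, support = precision_recall_fscore_support(
...     df=df, y_true_col_names="Y_TRUE", y_pred_col_names="Y_PRED"
... )
>>> # average='weighted': three floats, and the support element is None.
>>> p, r, f, s = precision_recall_fscore_support(
...     df=df, y_true_col_names="Y_TRUE", y_pred_col_names="Y_PRED",
...     average="weighted",
... )
>>> s is None
True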