You are viewing documentation for an older version (1.2.0).

snowflake.ml.modeling.preprocessing.MaxAbsScaler

class snowflake.ml.modeling.preprocessing.MaxAbsScaler(*, input_cols: Optional[Union[str, Iterable[str]]] = None, output_cols: Optional[Union[str, Iterable[str]]] = None, passthrough_cols: Optional[Union[str, Iterable[str]]] = None, drop_input_cols: Optional[bool] = False)

Bases: BaseTransformer

Scale each feature by its maximum absolute value.

This transformer scales and translates each feature individually such that the maximal absolute value of each feature in the training set will be 1.0. It does not shift/center the data, and thus does not destroy any sparsity.

Values must be of float type.

For more details on what this transformer does, see sklearn.preprocessing.MaxAbsScaler.
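The scaling rule itself is simple: each value is divided by its column's maximum absolute value observed during fitting. A minimal pure-Python sketch of that rule for a single feature column (an illustration of the semantics, not the snowflake.ml implementation):

```python
# Max-abs scaling of one feature column: divide every value by the
# largest absolute value in the column. Zeros map to zeros, so
# sparsity is preserved, and the data is not shifted or centered.
def max_abs_scale(values):
    max_abs = max(abs(v) for v in values)
    if max_abs == 0.0:
        return list(values)  # an all-zero column is left unchanged
    return [v / max_abs for v in values]

scaled = max_abs_scale([2.0, -4.0, 0.0, 1.0])
# largest |value| is 4.0, so scaled == [0.5, -1.0, 0.0, 0.25]
```

Note that the result can contain negative values; max-abs scaling maps each column into the range [-1, 1], not [0, 1].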

Args:
input_cols: Optional[Union[str, List[str]]], default=None

The name(s) of one or more columns in a DataFrame containing a feature to be scaled.

output_cols: Optional[Union[str, List[str]]], default=None

The name(s) of one or more columns in a DataFrame in which results will be stored. The number of columns specified must match the number of input columns.

passthrough_cols: Optional[Union[str, List[str]]], default=None

A string or a list of strings indicating column names to be excluded from any operations (such as train, transform, or inference). These specified column(s) will remain untouched throughout the process. This option is helpful in scenarios requiring automatic input_cols inference while excluding specific columns, such as index columns, from training or inference.

drop_input_cols: Optional[bool], default=False

If True, remove the input columns from the output. Defaults to False.

Attributes:
scale_: Dict[str, float]

dict {column_name: value} or None. Per-feature relative scaling factor.

max_abs_: Dict[str, float]

dict {column_name: value} or None. Per-feature maximum absolute value.
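The two attributes hold closely related per-column information: max_abs_ records each column's maximum absolute value, and scale_ the factor the data is divided by. Assuming columnar input as a dict of lists (an illustration only; the real transformer operates on DataFrames), such dicts could be derived like this:

```python
# Build per-column dicts shaped like the max_abs_ and scale_
# attributes described above. The dict-of-lists column layout is an
# assumption for illustration.
def compute_max_abs(columns):
    max_abs = {name: max(abs(v) for v in vals) for name, vals in columns.items()}
    # scale_ mirrors max_abs_, with zero replaced by 1.0 so that
    # dividing by it is always safe and all-zero columns are unchanged.
    scale = {name: (m if m != 0.0 else 1.0) for name, m in max_abs.items()}
    return max_abs, scale

max_abs_, scale_ = compute_max_abs({"A": [2.0, -4.0], "B": [0.5, 0.25]})
# max_abs_ == {"A": 4.0, "B": 0.5}
```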

Methods

fit(dataset: Union[snowpark.DataFrame, pandas.DataFrame]) → MaxAbsScaler

Compute the maximum absolute value to be used for later scaling.

Validates the transformer arguments and derives the scaling factors and maximum absolute values from the data, making dictionaries of both available as attributes of the transformer instance (see Attributes).

Returns the transformer instance.

Args:

dataset: Input dataset.

Returns:

Return self as fitted scaler.
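Because fit returns the fitted transformer itself, fit and transform can be chained in a single expression. A minimal stand-in class (plain Python, not the snowflake.ml implementation) showing the pattern:

```python
# Stand-in illustrating the fit-returns-self pattern, so that
# scaler.fit(data).transform(data) works in one expression.
class TinyMaxAbsScaler:
    def fit(self, values):
        # Record the maximum absolute value; fall back to 1.0 for an
        # all-zero column so transform never divides by zero.
        self.max_abs_ = max(abs(v) for v in values) or 1.0
        return self  # returning self is what enables method chaining

    def transform(self, values):
        return [v / self.max_abs_ for v in values]

result = TinyMaxAbsScaler().fit([3.0, -6.0]).transform([3.0, -6.0])
# result == [0.5, -1.0]
```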

get_input_cols() → List[str]

Input columns getter.

Returns:

Input columns.

get_label_cols() → List[str]

Label column getter.

Returns:

Label column(s).

get_output_cols() → List[str]

Output columns getter.

Returns:

Output columns.

get_params(deep: bool = True) → Dict[str, Any]

Get parameters for this transformer.

Args:
deep: If True, will return the parameters for this transformer and contained subobjects that are transformers.

Returns:

Parameter names mapped to their values.

get_passthrough_cols() → List[str]

Passthrough columns getter.

Returns:

Passthrough column(s).

get_sample_weight_col() → Optional[str]

Sample weight column getter.

Returns:

Sample weight column.

get_sklearn_args(default_sklearn_obj: Optional[object] = None, sklearn_initial_keywords: Optional[Union[str, Iterable[str]]] = None, sklearn_unused_keywords: Optional[Union[str, Iterable[str]]] = None, snowml_only_keywords: Optional[Union[str, Iterable[str]]] = None, sklearn_added_keyword_to_version_dict: Optional[Dict[str, str]] = None, sklearn_added_kwarg_value_to_version_dict: Optional[Dict[str, Dict[str, str]]] = None, sklearn_deprecated_keyword_to_version_dict: Optional[Dict[str, str]] = None, sklearn_removed_keyword_to_version_dict: Optional[Dict[str, str]] = None) → Dict[str, Any]

Get sklearn keyword arguments.

This method enables modifying object parameters for special cases.

Args:
default_sklearn_obj: Sklearn object used to get default parameter values. Necessary when sklearn_added_keyword_to_version_dict is provided.

sklearn_initial_keywords: Initial keywords in sklearn.

sklearn_unused_keywords: Sklearn keywords that are unused in snowml.

snowml_only_keywords: snowml-only keywords not present in sklearn.

sklearn_added_keyword_to_version_dict: Added keywords mapped to the sklearn versions in which they were added.

sklearn_added_kwarg_value_to_version_dict: Added keyword argument values mapped to the sklearn versions in which they were added.

sklearn_deprecated_keyword_to_version_dict: Deprecated keywords mapped to the sklearn versions in which they were deprecated.

sklearn_removed_keyword_to_version_dict: Removed keywords mapped to the sklearn versions in which they were removed.

Returns:

Sklearn parameter names mapped to their values.

set_drop_input_cols(drop_input_cols: Optional[bool] = False) → None
set_input_cols(input_cols: Optional[Union[str, Iterable[str]]]) → Base

Input columns setter.

Args:

input_cols: A single input column or multiple input columns.

Returns:

self

set_label_cols(label_cols: Optional[Union[str, Iterable[str]]]) → Base

Label column setter.

Args:

label_cols: A single label column, or multiple label columns for multi-task learning.

Returns:

self

set_output_cols(output_cols: Optional[Union[str, Iterable[str]]]) → Base

Output columns setter.

Args:

output_cols: A single output column or multiple output columns.

Returns:

self

set_params(**params: Dict[str, Any]) → None

Set the parameters of this transformer.

The method works on simple transformers as well as on nested objects. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Args:

**params: Transformer parameter names mapped to their values.

Raises:

SnowflakeMLException: Invalid parameter keys.
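The `<component>__<parameter>` convention splits each key on the first double underscore: the part before `__` names the nested component, and the part after names the parameter to set on it. A hedged sketch of that key-routing logic in plain Python (the function and dict shapes here are illustrative, not part of the snowflake.ml API):

```python
# Route "component__parameter" keys to nested parameter dicts, the
# same naming convention set_params uses for nested objects.
def route_params(params):
    routed = {"own": {}, "nested": {}}
    for key, value in params.items():
        if "__" in key:
            # Split only on the first "__": deeper nesting such as
            # "a__b__c" stays attached to component "a" as "b__c".
            component, _, name = key.partition("__")
            routed["nested"].setdefault(component, {})[name] = value
        else:
            routed["own"][key] = value
    return routed

routed = route_params({"drop_input_cols": True, "scaler__copy": False})
# {"own": {"drop_input_cols": True}, "nested": {"scaler": {"copy": False}}}
```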

set_passthrough_cols(passthrough_cols: Optional[Union[str, Iterable[str]]]) → Base

Passthrough columns setter.

Args:
passthrough_cols: Column(s) that should not be used or modified by the estimator/transformer. The estimator/transformer passes these columns through without modification.

Returns:

self

set_sample_weight_col(sample_weight_col: Optional[str]) → Base

Sample weight column setter.

Args:

sample_weight_col: A single column that represents sample weight.

Returns:

self

to_lightgbm() → Any
to_sklearn() → Any
to_xgboost() → Any
transform(dataset: Union[snowpark.DataFrame, pandas.DataFrame]) → Union[snowpark.DataFrame, pandas.DataFrame]

Scale the data.

Args:

dataset: Input dataset.

Returns:

Output dataset.