snowflake.ml.modeling.linear_model.SGDOneClassSVM

class snowflake.ml.modeling.linear_model.SGDOneClassSVM(*, nu=0.5, fit_intercept=True, max_iter=1000, tol=0.001, shuffle=True, verbose=0, random_state=None, learning_rate='optimal', eta0=0.0, power_t=0.5, warm_start=False, average=False, input_cols: Optional[Union[str, Iterable[str]]] = None, output_cols: Optional[Union[str, Iterable[str]]] = None, label_cols: Optional[Union[str, Iterable[str]]] = None, passthrough_cols: Optional[Union[str, Iterable[str]]] = None, drop_input_cols: Optional[bool] = False, sample_weight_col: Optional[str] = None)

Bases: BaseTransformer

Solves linear One-Class SVM using Stochastic Gradient Descent. For more details on this class, see sklearn.linear_model.SGDOneClassSVM

input_cols: Optional[Union[str, List[str]]]

A string or list of strings representing column names that contain features. If this parameter is not specified, all columns in the input DataFrame except the columns specified by label_cols, sample_weight_col, and passthrough_cols parameters are considered input columns. Input columns can also be set after initialization with the set_input_cols method.

label_cols: Optional[Union[str, List[str]]]

This parameter is optional and will be ignored during fit. It is present here for API consistency by convention.

output_cols: Optional[Union[str, List[str]]]

A string or list of strings representing column names that will store the output of predict and transform operations. The length of output_cols must match the expected number of output columns from the specific predictor or transformer class used. If you omit this parameter, output column names are derived by adding an OUTPUT_ prefix to the label column names for supervised estimators, or OUTPUT_<IDX> for unsupervised estimators. These inferred output column names work for predictors, but output_cols must be set explicitly for transformers. In general, explicitly specifying output column names is clearer, especially if you don’t specify the input column names. To transform in place, pass the same names for input_cols and output_cols. Output columns can also be set after initialization with the set_output_cols method.
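For example, a minimal sketch of explicit versus inferred output column naming (the feature and output column names are illustrative assumptions, not part of this API reference):

    from snowflake.ml.modeling.linear_model import SGDOneClassSVM

    # Explicit output column: predictions land in "ANOMALY_LABEL".
    svm_explicit = SGDOneClassSVM(
        input_cols=["FEATURE_1", "FEATURE_2"],   # assumed feature columns
        output_cols=["ANOMALY_LABEL"],
    )

    # Omitting output_cols: as an unsupervised estimator, the predicted
    # column name is inferred (e.g. "OUTPUT_0").
    svm_inferred = SGDOneClassSVM(input_cols=["FEATURE_1", "FEATURE_2"])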

sample_weight_col: Optional[str]

A string representing the column name containing the sample weights. This argument is only required when working with weighted datasets. Sample weight column can also be set after initialization with the set_sample_weight_col method.

passthrough_cols: Optional[Union[str, List[str]]]

A string or a list of strings indicating column names to be excluded from any operations (such as train, transform, or inference). The specified column(s) remain untouched throughout the process. This option is helpful in scenarios where input_cols are inferred automatically but specific columns, such as index columns, should not be used during training or inference. Passthrough columns can also be set after initialization with the set_passthrough_cols method.
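A hedged sketch of carrying an identifier column untouched through training and inference (the column names are assumptions for illustration):

    from snowflake.ml.modeling.linear_model import SGDOneClassSVM

    # "ROW_ID" is excluded from training but preserved in the prediction
    # output; input_cols are inferred from the remaining columns.
    svm = SGDOneClassSVM(
        passthrough_cols=["ROW_ID"],
        output_cols=["OUTLIER_LABEL"],
    )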

drop_input_cols: Optional[bool], default=False

If set, the output of the predict() and transform() methods will not contain the input columns.

nu: float, default=0.5

The nu parameter of the One-Class SVM: an upper bound on the fraction of training errors and a lower bound on the fraction of support vectors. Should be in the interval (0, 1]. Defaults to 0.5.
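As an illustration of how nu bounds the outlier fraction, a minimal sketch against the underlying scikit-learn estimator on synthetic data (not the Snowflake wrapper; values are illustrative only):

    import numpy as np
    from sklearn.linear_model import SGDOneClassSVM

    rng = np.random.RandomState(0)
    X = rng.normal(size=(1000, 2))

    for nu in (0.05, 0.2, 0.5):
        pred = SGDOneClassSVM(nu=nu, random_state=0).fit(X).predict(X)
        # The fraction of training points flagged as outliers roughly tracks nu.
        print(f"nu={nu}: outlier fraction ~ {np.mean(pred == -1):.2f}")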

fit_intercept: bool, default=True

Whether the intercept should be estimated or not. Defaults to True.

max_iter: int, default=1000

The maximum number of passes over the training data (aka epochs). It only impacts the behavior in the fit method, and not the partial_fit method. Defaults to 1000.

tol: float or None, default=1e-3

The stopping criterion. If it is not None, the iterations will stop when (loss > previous_loss - tol). Defaults to 1e-3.

shuffle: bool, default=True

Whether or not the training data should be shuffled after each epoch. Defaults to True.

verbose: int, default=0

The verbosity level.

random_state: int, RandomState instance or None, default=None

The seed of the pseudo random number generator to use when shuffling the data. If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.

learning_rate: {‘constant’, ‘optimal’, ‘invscaling’, ‘adaptive’}, default=’optimal’

The learning rate schedule to use with fit. (If using partial_fit, the learning rate must be controlled directly.) The corresponding update rules are sketched in code after the list below.

  • ‘constant’: eta = eta0

  • ‘optimal’: eta = 1.0 / (alpha * (t + t0)) where t0 is chosen by a heuristic proposed by Leon Bottou.

  • ‘invscaling’: eta = eta0 / pow(t, power_t)

  • ‘adaptive’: eta = eta0, as long as the training keeps decreasing. Each time n_iter_no_change consecutive epochs fail to decrease the training loss by tol or fail to increase validation score by tol if early_stopping is True, the current learning rate is divided by 5.
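A minimal sketch of the update rules listed above, under the assumption that t is the step counter starting at 1; this is illustrative only, not the library's implementation:

    # Illustrative eta update rules for the learning_rate schedules.
    def eta_constant(eta0: float, t: int) -> float:
        return eta0

    def eta_invscaling(eta0: float, t: int, power_t: float = 0.5) -> float:
        # t >= 1 is assumed.
        return eta0 / (t ** power_t)

    def eta_optimal(alpha: float, t: int, t0: float) -> float:
        # t0 is chosen internally by a heuristic (Leon Bottou); here it is an input.
        return 1.0 / (alpha * (t + t0))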

eta0: float, default=0.0

The initial learning rate for the ‘constant’, ‘invscaling’ or ‘adaptive’ schedules. The default value is 0.0 as eta0 is not used by the default schedule ‘optimal’.

power_t: float, default=0.5

The exponent for the inverse scaling learning rate.

warm_start: bool, default=False

When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary.

Repeatedly calling fit or partial_fit when warm_start is True can result in a different solution than when calling fit a single time because of the way the data is shuffled. If a dynamic learning rate is used, the learning rate is adapted depending on the number of samples already seen. Calling fit resets this counter, while partial_fit will result in increasing the existing counter.
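To see the difference in practice, a hedged sketch using the underlying scikit-learn estimator directly on synthetic data (not specific to the Snowflake wrapper):

    import numpy as np
    from sklearn.linear_model import SGDOneClassSVM

    rng = np.random.RandomState(0)
    X = rng.normal(size=(200, 3))

    # With warm_start=True, a second call to fit() starts from the previous
    # solution instead of re-initializing the coefficients, but it resets
    # the internal sample counter used by dynamic learning rates.
    svm = SGDOneClassSVM(warm_start=True, random_state=0)
    svm.fit(X)
    first_coef = svm.coef_.copy()
    svm.fit(X)                      # continues from first_coef

    # partial_fit() keeps the internal sample counter, so a dynamic
    # learning rate keeps decaying across calls.
    svm_pf = SGDOneClassSVM(random_state=0)
    svm_pf.partial_fit(X)
    svm_pf.partial_fit(X)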

average: bool or int, default=False

When set to True, computes the averaged SGD weights and stores the result in the coef_ attribute. If set to an int greater than 1, averaging will begin once the total number of samples seen reaches average. So average=10 will begin averaging after seeing 10 samples.
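Putting the parameters together, a hedged end-to-end sketch of typical usage with a Snowpark DataFrame (the connection parameters, table name, and column names are assumptions for illustration, not part of this API reference):

    from snowflake.snowpark import Session
    from snowflake.ml.modeling.linear_model import SGDOneClassSVM

    # Assumed connection parameters; replace with real values.
    session = Session.builder.configs({
        "account": "<account>",
        "user": "<user>",
        "password": "<password>",
    }).create()

    # Assumed table with numeric feature columns.
    train_df = session.table("SENSOR_READINGS")

    svm = SGDOneClassSVM(
        nu=0.1,
        random_state=42,
        input_cols=["TEMPERATURE", "PRESSURE"],   # assumed feature columns
        output_cols=["IS_INLIER"],
    )

    svm.fit(train_df)

    # predict() appends the output column: 1 = inlier, -1 = outlier.
    scored_df = svm.predict(train_df)

    # decision_function() returns signed distances to the separating hyperplane.
    scores_df = svm.decision_function(train_df)

    scored_df.show()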

Methods

decision_function(dataset[, output_cols_prefix])

Signed distance to the separating hyperplane. For more details on this function, see sklearn.linear_model.SGDOneClassSVM.decision_function

fit(dataset)

Fit linear One-Class SVM with Stochastic Gradient Descent. For more details on this function, see sklearn.linear_model.SGDOneClassSVM.fit

fit_predict(dataset)

Perform fit on X and return labels for X. For more details on this function, see sklearn.linear_model.SGDOneClassSVM.fit_predict

get_input_cols()

Input columns getter.

get_label_cols()

Label column getter.

get_output_cols()

Output columns getter.

get_params([deep])

Get parameters for this transformer.

get_passthrough_cols()

Passthrough columns getter.

get_sample_weight_col()

Sample weight column getter.

get_sklearn_args([default_sklearn_obj, ...])

Get sklearn keyword arguments.

predict(dataset)

Return labels (1 inlier, -1 outlier) of the samples. For more details on this function, see sklearn.linear_model.SGDOneClassSVM.predict

set_drop_input_cols([drop_input_cols])

set_input_cols(input_cols)

Input columns setter.

set_label_cols(label_cols)

Label column setter.

set_output_cols(output_cols)

Output columns setter.

set_params(**params)

Set the parameters of this transformer.

set_passthrough_cols(passthrough_cols)

Passthrough columns setter.

set_sample_weight_col(sample_weight_col)

Sample weight column setter.

to_sklearn()

Get sklearn.linear_model.SGDOneClassSVM object.
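If a fitted model needs to be inspected or used locally, a hedged sketch follows; `svm` is assumed to be a fitted instance from the usage example above, and the attribute names are those of scikit-learn's SGDOneClassSVM:

    # Assuming `svm` is a fitted snowflake.ml SGDOneClassSVM.
    sk_svm = svm.to_sklearn()

    # sk_svm is a plain sklearn.linear_model.SGDOneClassSVM, so the usual
    # scikit-learn attributes and methods are available locally.
    print(sk_svm.coef_)     # learned weights
    print(sk_svm.offset_)   # offset used to define the decision function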

Attributes

model_signatures

Returns model signature of current class.