snowflake.ml.modeling.linear_model.Perceptron

class snowflake.ml.modeling.linear_model.Perceptron(*, penalty=None, alpha=0.0001, l1_ratio=0.15, fit_intercept=True, max_iter=1000, tol=0.001, shuffle=True, verbose=0, eta0=1.0, n_jobs=None, random_state=0, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, class_weight=None, warm_start=False, input_cols: Optional[Union[str, Iterable[str]]] = None, output_cols: Optional[Union[str, Iterable[str]]] = None, label_cols: Optional[Union[str, Iterable[str]]] = None, drop_input_cols: Optional[bool] = False, sample_weight_col: Optional[str] = None)

Bases: BaseTransformer

Linear perceptron classifier. For more details on this class, see sklearn.linear_model.Perceptron

penalty: {‘l2’,’l1’,’elasticnet’}, default=None

The penalty (aka regularization term) to be used.

alpha: float, default=0.0001

Constant that multiplies the regularization term if regularization is used.

l1_ratio: float, default=0.15

The Elastic Net mixing parameter, with 0 <= l1_ratio <= 1. l1_ratio=0 corresponds to L2 penalty, l1_ratio=1 to L1. Only used if penalty=’elasticnet’.

fit_intercept: bool, default=True

Whether the intercept should be estimated or not. If False, the data is assumed to be already centered.

max_iter: int, default=1000

The maximum number of passes over the training data (aka epochs). It only impacts the behavior in the fit method, and not the partial_fit() method.

tol: float or None, default=1e-3

The stopping criterion. If it is not None, the iterations will stop when (loss > previous_loss - tol).

shuffle: bool, default=True

Whether or not the training data should be shuffled after each epoch.

verbose: int, default=0

The verbosity level.

eta0: float, default=1.0

Constant by which the updates are multiplied.

n_jobs: int, default=None

The number of CPUs to use to do the OVA (One Versus All, for multi-class problems) computation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.

random_state: int, RandomState instance or None, default=0

Used to shuffle the training data, when shuffle is set to True. Pass an int for reproducible output across multiple function calls. See Glossary.

early_stopping: bool, default=False

Whether to use early stopping to terminate training when the validation score is not improving. If set to True, it will automatically set aside a stratified fraction of training data as validation and terminate training when the validation score is not improving by at least tol for n_iter_no_change consecutive epochs.

validation_fraction: float, default=0.1

The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True.

n_iter_no_change: int, default=5

Number of iterations with no improvement to wait before early stopping.

class_weight: dict, {class_label: weight} or “balanced”, default=None

Preset for the class_weight fit parameter.

Weights associated with classes. If not given, all classes are supposed to have weight one.

The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)).
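
As a rough numeric illustration of the “balanced” formula (the label values below are hypothetical, not taken from this API):

import numpy as np

# Hypothetical integer-encoded labels: four samples of class 0, two of class 1.
y = np.array([0, 0, 0, 0, 1, 1])
n_samples, n_classes = len(y), len(np.unique(y))

# "balanced" weights: n_samples / (n_classes * np.bincount(y))
weights = n_samples / (n_classes * np.bincount(y))
# -> array([0.75, 1.5]); the rarer class receives the larger weight.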

warm_start: bool, default=False

When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary.

input_cols: Optional[Union[str, List[str]]]

A string or list of strings representing column names that contain features. If this parameter is not specified, all columns in the input DataFrame except the columns specified by the label_cols and sample_weight_col parameters are considered input columns.

label_cols: Optional[Union[str, List[str]]]

A string or list of strings representing column names that contain labels. This is a required parameter for estimators, as there is no way to infer these columns. If this parameter is not specified, the object is fitted without labels (like a transformer).

output_cols: Optional[Union[str, List[str]]]

A string or list of strings representing column names that will store the output of predict and transform operations. The length of output_cols must match the expected number of output columns from the specific estimator or transformer class used. If this parameter is not specified, output column names are derived by adding an OUTPUT_ prefix to the label column names. These inferred output column names work for the estimator’s predict() method, but output_cols must be set explicitly for transformers.

sample_weight_col: Optional[str]

A string representing the column name containing the examples’ weights. This argument is only required when working with weighted datasets.

drop_input_cols: Optional[bool], default=False

If set to True, the response of the predict() and transform() methods will not contain input columns.
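
A minimal end-to-end sketch of how these column parameters are typically wired together. The session setup, table name, and column names below are assumptions for illustration, not part of this reference; connection_parameters stands in for your Snowflake credentials.

from snowflake.snowpark import Session
from snowflake.ml.modeling.linear_model import Perceptron

# connection_parameters is a placeholder dict of Snowflake credentials.
session = Session.builder.configs(connection_parameters).create()
train_df = session.table("TRAINING_DATA")  # hypothetical training table

clf = Perceptron(
    input_cols=["FEATURE_1", "FEATURE_2", "FEATURE_3"],  # hypothetical feature columns
    label_cols=["LABEL"],                                # hypothetical label column
    output_cols=["PREDICTED_LABEL"],                     # omit to get the derived name OUTPUT_LABEL
    max_iter=1000,
    tol=1e-3,
)

clf.fit(train_df)
predictions = clf.predict(train_df)  # input columns plus PREDICTED_LABEL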

Methods

decision_function(dataset[, output_cols_prefix])

Predict confidence scores for samples. For more details on this function, see sklearn.linear_model.Perceptron.decision_function

fit(dataset)

Fit linear model with Stochastic Gradient Descent. For more details on this function, see sklearn.linear_model.Perceptron.fit

predict(dataset)

Predict class labels for samples in X. For more details on this function, see sklearn.linear_model.Perceptron.predict

score(dataset)

Return the mean accuracy on the given test data and labels. For more details on this function, see sklearn.linear_model.Perceptron.score
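
Continuing the sketch above, scoring and confidence estimation on a hypothetical hold-out table with the same columns:

# Hypothetical hold-out table with the same feature and label columns.
test_df = session.table("TEST_DATA")

accuracy = clf.score(test_df)               # mean accuracy as a float
scores_df = clf.decision_function(test_df)  # per-sample confidence scores
labels_df = clf.predict(test_df)            # predicted class labels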

set_input_cols(input_cols)

Input columns setter.

to_sklearn()

Get sklearn.linear_model.Perceptron object.
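
A short sketch of exporting the fitted model for local scikit-learn use; local_df here is assumed to be a pandas DataFrame containing the same feature columns.

# Convert the fitted estimator to a plain scikit-learn object for local inference.
sk_perceptron = clf.to_sklearn()
local_preds = sk_perceptron.predict(local_df[["FEATURE_1", "FEATURE_2", "FEATURE_3"]])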

Attributes

model_signatures

Returns model signature of current class.