snowflake.ml.modeling.decomposition.TruncatedSVD¶
- class snowflake.ml.modeling.decomposition.TruncatedSVD(*, n_components=2, algorithm='randomized', n_iter=5, n_oversamples=10, power_iteration_normalizer='auto', random_state=None, tol=0.0, input_cols: Optional[Union[str, Iterable[str]]] = None, output_cols: Optional[Union[str, Iterable[str]]] = None, label_cols: Optional[Union[str, Iterable[str]]] = None, drop_input_cols: Optional[bool] = False, sample_weight_col: Optional[str] = None)¶
Bases:
BaseTransformer
Dimensionality reduction using truncated SVD (also known as LSA). For more details on this class, see sklearn.decomposition.TruncatedSVD.
- n_components: int, default=2
Desired dimensionality of output data. If algorithm=’arpack’, must be strictly less than the number of features. If algorithm=’randomized’, must be less than or equal to the number of features. The default value is useful for visualisation. For LSA, a value of 100 is recommended.
- algorithm: {‘arpack’, ‘randomized’}, default=’randomized’
SVD solver to use. Either “arpack” for the ARPACK wrapper in SciPy (scipy.sparse.linalg.svds), or “randomized” for the randomized algorithm due to Halko (2009).
- n_iter: int, default=5
Number of iterations for randomized SVD solver. Not used by ARPACK. The default is larger than the default in randomized_svd() to handle sparse matrices that may have a large, slowly decaying spectrum.
- n_oversamples: int, default=10
Number of oversamples for randomized SVD solver. Not used by ARPACK. See randomized_svd() for a complete description.
- power_iteration_normalizer: {‘auto’, ‘QR’, ‘LU’, ‘none’}, default=’auto’
Power iteration normalizer for randomized SVD solver. Not used by ARPACK. See randomized_svd() for more details.
- random_state: int, RandomState instance or None, default=None
Used during randomized SVD. Pass an int for reproducible results across multiple function calls. See Glossary.
- tol: float, default=0.0
Tolerance for ARPACK. 0 means machine precision. Ignored by randomized SVD solver.
- input_cols: Optional[Union[str, List[str]]]
A string or list of strings representing column names that contain features. If this parameter is not specified, all columns in the input DataFrame except the columns specified by the label_cols and sample_weight_col parameters are considered input columns.
- label_cols: Optional[Union[str, List[str]]]
A string or list of strings representing column names that contain labels. This is a required parameter for estimators, as there is no way to infer these columns. If this parameter is not specified, the object is fitted without labels (like a transformer).
- output_cols: Optional[Union[str, List[str]]]
A string or list of strings representing column names that will store the output of predict and transform operations. The length of output_cols must match the expected number of output columns from the specific estimator or transformer class used. If this parameter is not specified, output column names are derived by adding an OUTPUT_ prefix to the label column names. These inferred output column names work for the estimator’s predict() method, but output_cols must be set explicitly for transformers.
- sample_weight_col: Optional[str]
A string representing the column name containing the examples’ weights. This argument is only required when working with weighted datasets.
- drop_input_cols: Optional[bool], default=False
If set, the response of the predict() and transform() methods will not contain input columns.
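A minimal usage sketch is shown below, assuming an active Snowpark session named session and a table FEATURES_TABLE with numeric columns F1 through F4 (these names are illustrative, not part of this API):

from snowflake.ml.modeling.decomposition import TruncatedSVD

# Hypothetical Snowpark DataFrame with four numeric feature columns.
df = session.table("FEATURES_TABLE")

svd = TruncatedSVD(
    n_components=2,
    input_cols=["F1", "F2", "F3", "F4"],
    output_cols=["SVD_0", "SVD_1"],  # one output column per requested component
)
svd.fit(df)
reduced_df = svd.transform(df)  # returns a DataFrame with SVD_0 and SVD_1 appended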
Methods
- fit(dataset)
Fit model on training data X. For more details on this function, see sklearn.decomposition.TruncatedSVD.fit.
- score(dataset)
Method not supported for this class.
- set_input_cols(input_cols)
Input columns setter.
- to_sklearn()
Get sklearn.decomposition.TruncatedSVD object.
- transform(dataset)
Perform dimensionality reduction on X. For more details on this function, see sklearn.decomposition.TruncatedSVD.transform.
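If the fitted scikit-learn estimator is needed locally, for example to inspect explained variance, to_sklearn() returns it. A short sketch, continuing the hypothetical example above:

sk_svd = svd.to_sklearn()  # fitted sklearn.decomposition.TruncatedSVD instance
print(sk_svd.explained_variance_ratio_)  # standard scikit-learn attribute
print(sk_svd.components_.shape)  # (n_components, n_features)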
Attributes
- model_signatures
Returns the model signature of the current class.