
snowflake.ml.modeling.manifold.TSNE

class snowflake.ml.modeling.manifold.TSNE(*, n_components=2, perplexity=30.0, early_exaggeration=12.0, learning_rate='auto', n_iter=1000, n_iter_without_progress=300, min_grad_norm=1e-07, metric='euclidean', metric_params=None, init='pca', verbose=0, random_state=None, method='barnes_hut', angle=0.5, n_jobs=None, input_cols: Optional[Union[str, Iterable[str]]] = None, output_cols: Optional[Union[str, Iterable[str]]] = None, label_cols: Optional[Union[str, Iterable[str]]] = None, drop_input_cols: Optional[bool] = False, sample_weight_col: Optional[str] = None)

Bases: BaseTransformer

T-distributed Stochastic Neighbor Embedding. For more details on this class, see sklearn.manifold.TSNE

n_components: int, default=2

Dimension of the embedded space.

perplexity: float, default=30.0

The perplexity is related to the number of nearest neighbors that is used in other manifold learning algorithms. Larger datasets usually require a larger perplexity. Consider selecting a value between 5 and 50. Different values can result in significantly different results. The perplexity must be less than the number of samples.

early_exaggeration: float, default=12.0

Controls how tight natural clusters in the original space are in the embedded space and how much space will be between them. For larger values, the space between natural clusters will be larger in the embedded space. Again, the choice of this parameter is not very critical. If the cost function increases during initial optimization, the early exaggeration factor or the learning rate might be too high.

learning_rate: float or “auto”, default=”auto”

The learning rate for t-SNE is usually in the range [10.0, 1000.0]. If the learning rate is too high, the data may look like a ‘ball’ with any point approximately equidistant from its nearest neighbours. If the learning rate is too low, most points may look compressed in a dense cloud with few outliers. If the cost function gets stuck in a bad local minimum, increasing the learning rate may help. Note that many other t-SNE implementations (bhtsne, FIt-SNE, openTSNE, etc.) use a definition of learning_rate that is 4 times smaller than ours. So our learning_rate=200 corresponds to learning_rate=800 in those other implementations. The ‘auto’ option sets the learning_rate to max(N / early_exaggeration / 4, 50) where N is the sample size, following [4] and [5].

n_iter: int, default=1000

Maximum number of iterations for the optimization. Should be at least 250.

n_iter_without_progress: int, default=300

Maximum number of iterations without progress before we abort the optimization, used after 250 initial iterations with early exaggeration. Note that progress is only checked every 50 iterations so this value is rounded to the next multiple of 50.

min_grad_norm: float, default=1e-7

If the gradient norm is below this threshold, the optimization will be stopped.

metric: str or callable, default=’euclidean’

The metric to use when calculating distance between instances in a feature array. If metric is a string, it must be one of the options allowed by scipy.spatial.distance.pdist for its metric parameter, or a metric listed in pairwise.PAIRWISE_DISTANCE_FUNCTIONS. If metric is “precomputed”, X is assumed to be a distance matrix. Alternatively, if metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays from X as input and return a value indicating the distance between them. The default is “euclidean”, which is interpreted as squared euclidean distance.

metric_params: dict, default=None

Additional keyword arguments for the metric function.

init: {“random”, “pca”} or ndarray of shape (n_samples, n_components), default=”pca”

Initialization of embedding. PCA initialization cannot be used with precomputed distances and is usually more globally stable than random initialization.

verbose: int, default=0

Verbosity level.

random_state: int, RandomState instance or None, default=None

Determines the random number generator. Pass an int for reproducible results across multiple function calls. Note that different initializations might result in different local minima of the cost function. See Glossary.

method: {‘barnes_hut’, ‘exact’}, default=’barnes_hut’

By default the gradient calculation algorithm uses Barnes-Hut approximation running in O(NlogN) time. method=’exact’ will run on the slower, but exact, algorithm in O(N^2) time. The exact algorithm should be used when nearest-neighbor errors need to be better than 3%. However, the exact method cannot scale to millions of examples.

angle: float, default=0.5

Only used if method=’barnes_hut’. This is the trade-off between speed and accuracy for Barnes-Hut T-SNE. ‘angle’ is the angular size (referred to as theta in [3]) of a distant node as measured from a point. If this size is below ‘angle’ then it is used as a summary node of all points contained within it. This method is not very sensitive to changes in this parameter in the range of 0.2 - 0.8. An angle less than 0.2 quickly increases computation time and an angle greater than 0.8 quickly increases error.

n_jobs: int, default=None

The number of parallel jobs to run for neighbors search. This parameter has no impact when metric="precomputed" or (metric="euclidean" and method="exact"). None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.

input_cols: Optional[Union[str, List[str]]]

A string or list of strings representing column names that contain features. If this parameter is not specified, all columns in the input DataFrame except the columns specified by the label_cols and sample_weight_col parameters are considered input columns.

label_cols: Optional[Union[str, List[str]]]

A string or list of strings representing column names that contain labels. This is a required parameter for estimators, as there is no way to infer these columns. If this parameter is not specified, then the object is fitted without labels (like a transformer).

output_cols: Optional[Union[str, List[str]]]

A string or list of strings representing column names that will store the output of predict and transform operations. The length of output_cols must match the expected number of output columns from the specific estimator or transformer class used. If this parameter is not specified, output column names are derived by adding an OUTPUT_ prefix to the label column names. These inferred output column names work for the estimator’s predict() method, but output_cols must be set explicitly for transformers.

sample_weight_col: Optional[str]

A string representing the column name containing the examples’ weights. This argument is only required when working with weighted datasets.

drop_input_cols: Optional[bool], default=False

If set, the response of the predict() and transform() methods will not contain input columns.
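
A minimal usage sketch (not part of the reference): the example below assumes an existing Snowpark session and a table named FEATURES with numeric columns COL1, COL2, and COL3; those names, and the chosen output column names, are hypothetical.

    from snowflake.ml.modeling.manifold import TSNE

    # Hypothetical Snowpark DataFrame holding the feature columns.
    df = session.table("FEATURES")

    tsne = TSNE(
        n_components=2,
        perplexity=30.0,
        # learning_rate="auto" resolves to max(N / early_exaggeration / 4, 50),
        # where N is the number of rows, as described above.
        learning_rate="auto",
        input_cols=["COL1", "COL2", "COL3"],
        output_cols=["EMBEDDING_0", "EMBEDDING_1"],
        random_state=42,
    )

    tsne.fit(df)  # fit the embedding on the selected input columns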

Methods

fit(dataset)

Fit X into an embedded space. For more details on this function, see sklearn.manifold.TSNE.fit

score(dataset)

Method not supported for this class.

set_input_cols(input_cols)

Input columns setter.

to_sklearn()

Get sklearn.manifold.TSNE object.
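
A hedged sketch of retrieving the underlying scikit-learn estimator, continuing the example above; attribute availability depends on the model having been fit.

    skl_tsne = tsne.to_sklearn()                 # sklearn.manifold.TSNE object
    print(skl_tsne.get_params()["perplexity"])   # hyperparameters carry over
    # After fitting, the usual scikit-learn attributes (for example
    # skl_tsne.embedding_ and skl_tsne.kl_divergence_) should be available.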

Attributes

model_signatures

Returns model signature of current class.