
snowflake.ml.modeling.mixture.BayesianGaussianMixture

class snowflake.ml.modeling.mixture.BayesianGaussianMixture(*, n_components=1, covariance_type='full', tol=0.001, reg_covar=1e-06, max_iter=100, n_init=1, init_params='kmeans', weight_concentration_prior_type='dirichlet_process', weight_concentration_prior=None, mean_precision_prior=None, mean_prior=None, degrees_of_freedom_prior=None, covariance_prior=None, random_state=None, warm_start=False, verbose=0, verbose_interval=10, input_cols: Optional[Union[str, Iterable[str]]] = None, output_cols: Optional[Union[str, Iterable[str]]] = None, label_cols: Optional[Union[str, Iterable[str]]] = None, drop_input_cols: Optional[bool] = False, sample_weight_col: Optional[str] = None)

Bases: BaseTransformer

Variational Bayesian estimation of a Gaussian mixture. For more details on this class, see sklearn.mixture.BayesianGaussianMixture

n_components: int, default=1

The number of mixture components. Depending on the data and the value of weight_concentration_prior, the model can decide not to use all the components by setting some component weights_ to values very close to zero. The number of effective components can therefore be smaller than n_components.

covariance_type: {'full', 'tied', 'diag', 'spherical'}, default='full'

String describing the type of covariance parameters to use. Must be one of:

'full' (each component has its own general covariance matrix),
'tied' (all components share the same general covariance matrix),
'diag' (each component has its own diagonal covariance matrix),
'spherical' (each component has its own single variance).
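Because the Snowflake class delegates to sklearn.mixture.BayesianGaussianMixture (referenced above), the effect of covariance_type on the shape of the fitted covariances_ attribute can be checked directly against the sklearn estimator. The data and component count below are illustrative, not part of the API.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Two well-separated 2-D blobs, 50 samples each.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

# covariances_ shape depends on covariance_type:
#   'full'      -> (n_components, n_features, n_features)
#   'tied'      -> (n_features, n_features), shared across components
#   'diag'      -> (n_components, n_features), diagonal entries only
#   'spherical' -> (n_components,), one variance per component
shapes = {}
for ct in ("full", "tied", "diag", "spherical"):
    bgm = BayesianGaussianMixture(n_components=2, covariance_type=ct,
                                  random_state=0).fit(X)
    shapes[ct] = bgm.covariances_.shape

print(shapes)
```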
tol: float, default=1e-3

The convergence threshold. EM iterations will stop when the lower bound average gain on the likelihood (of the training data with respect to the model) is below this threshold.

reg_covar: float, default=1e-6

Non-negative regularization added to the diagonal of covariance. Ensures that the covariance matrices are all positive.

max_iter: int, default=100

The number of EM iterations to perform.

n_init: int, default=1

The number of initializations to perform. The result with the highest lower bound value on the likelihood is kept.

init_params: {'kmeans', 'k-means++', 'random', 'random_from_data'}, default='kmeans'

The method used to initialize the weights, the means and the covariances. String must be one of:

'kmeans': responsibilities are initialized using kmeans.
'k-means++': use the k-means++ method to initialize.
'random': responsibilities are initialized randomly.
'random_from_data': initial means are randomly selected data points.
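A quick way to see that init_params only changes the starting point of the variational fit is to run the underlying sklearn estimator with different schemes on the same data. The example sticks to 'kmeans' and 'random' for portability; dataset and seeds are illustrative.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-3, 1, (40, 2)), rng.normal(3, 1, (40, 2))])

# Fit once per initialization scheme; the variational objective is the
# same, only the starting responsibilities/means differ.
# ('k-means++' and 'random_from_data' require scikit-learn >= 1.1.)
fitted = {}
for method in ("kmeans", "random"):
    bgm = BayesianGaussianMixture(n_components=2, init_params=method,
                                  random_state=0).fit(X)
    fitted[method] = bgm.means_.shape  # (n_components, n_features)

print(fitted)
```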

weight_concentration_prior_type: {'dirichlet_process', 'dirichlet_distribution'}, default='dirichlet_process'

String describing the type of the weight concentration prior.

weight_concentration_prior: float or None, default=None

The Dirichlet concentration of each component on the weight distribution (Dirichlet). This is commonly called gamma in the literature. A higher concentration puts more mass in the center and leads to more components being active, while a lower concentration parameter leads to more mass at the edge of the mixture weights simplex. The value of the parameter must be greater than 0. If it is None, it's set to 1. / n_components.
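The component-pruning behavior described above can be observed in the underlying sklearn estimator: with a single true cluster and a small concentration prior, superfluous component weights shrink toward zero. The threshold 0.01 used to count "active" components is an illustrative choice, not part of the API.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# One tight cluster: most of the 5 requested components are superfluous.
X = rng.normal(0, 1, (200, 2))

bgm = BayesianGaussianMixture(
    n_components=5,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.01,  # small -> fewer active components
    random_state=0,
    max_iter=500,
).fit(X)

weights = np.sort(bgm.weights_)[::-1]
active = int(np.sum(weights > 0.01))  # components carrying real mass
print(weights, active)
```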

mean_precision_prior: float or None, default=None

The precision prior on the mean distribution (Gaussian). Controls the extent of where means can be placed. Larger values concentrate the cluster means around mean_prior. The value of the parameter must be greater than 0. If it is None, it is set to 1.

mean_prior: array-like, shape (n_features,), default=None

The prior on the mean distribution (Gaussian). If it is None, it is set to the mean of X.

degrees_of_freedom_prior: float or None, default=None

The prior of the number of degrees of freedom on the covariance distributions (Wishart). If it is None, it’s set to n_features.

covariance_prior: float or array-like, default=None

The prior on the covariance distribution (Wishart). If it is None, the empirical covariance prior is initialized using the covariance of X. The shape depends on covariance_type:

(n_features, n_features) if 'full',
(n_features, n_features) if 'tied',
(n_features)             if 'diag',
float                    if 'spherical'
random_state: int, RandomState instance or None, default=None

Controls the random seed given to the method chosen to initialize the parameters (see init_params). In addition, it controls the generation of random samples from the fitted distribution (see the method sample). Pass an int for reproducible output across multiple function calls. See Glossary.
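Reproducibility under a fixed random_state can be verified against the underlying sklearn estimator: two fits with the same seed on the same data recover identical parameters. The data below is illustrative.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(7)
X = rng.normal(0, 1, (100, 2))

# Same seed -> identical initialization -> identical fitted parameters.
a = BayesianGaussianMixture(n_components=3, random_state=42).fit(X)
b = BayesianGaussianMixture(n_components=3, random_state=42).fit(X)

same = np.allclose(a.means_, b.means_)
print(same)
```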

warm_start: bool, default=False

If 'warm_start' is True, the solution of the last fitting is used as initialization for the next call of fit(). This can speed up convergence when fit is called several times on similar problems. See the Glossary.

verbose: int, default=0

Enable verbose output. If 1 then it prints the current initialization and each iteration step. If greater than 1 then it prints also the log probability and the time needed for each step.

verbose_interval: int, default=10

Number of iterations done before the next print.

input_cols: Optional[Union[str, List[str]]]

A string or list of strings representing column names that contain features. If this parameter is not specified, all columns in the input DataFrame except the columns specified by the label_cols and sample_weight_col parameters are considered input columns.

label_cols: Optional[Union[str, List[str]]]

A string or list of strings representing column names that contain labels. This is a required param for estimators, as there is no way to infer these columns. If this parameter is not specified, then the object is fitted without labels (like a transformer).

output_cols: Optional[Union[str, List[str]]]

A string or list of strings representing column names that will store the output of predict and transform operations. The length of output_cols must match the expected number of output columns from the specific estimator or transformer class used. If this parameter is not specified, output column names are derived by adding an OUTPUT_ prefix to the label column names. These inferred output column names work for the estimator's predict() method, but output_cols must be set explicitly for transformers.

sample_weight_col: Optional[str]

A string representing the column name containing the examples' weights. This argument is only required when working with weighted datasets.

drop_input_cols: Optional[bool], default=False

If set, the response of the predict() and transform() methods will not contain input columns.

Methods

fit(dataset)

Estimate model parameters with the EM algorithm. For more details on this function, see sklearn.mixture.BayesianGaussianMixture.fit

predict(dataset)

Predict the labels for the data samples in X using the trained model. For more details on this function, see sklearn.mixture.BayesianGaussianMixture.predict

predict_proba(dataset[, output_cols_prefix])

Evaluate the components' density for each sample. For more details on this function, see sklearn.mixture.BayesianGaussianMixture.predict_proba

score(dataset)

Compute the per-sample average log-likelihood of the given data X. For more details on this function, see sklearn.mixture.BayesianGaussianMixture.score

set_input_cols(input_cols)

Input columns setter.

to_sklearn()

Get sklearn.mixture.BayesianGaussianMixture object.
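Since each method above delegates to the sklearn estimator of the same name, their semantics can be sketched directly against sklearn.mixture.BayesianGaussianMixture (the Snowflake wrapper applies the same operations to the configured input_cols of a DataFrame). The data below is illustrative.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-4, 1, (60, 2)), rng.normal(4, 1, (60, 2))])

bgm = BayesianGaussianMixture(n_components=2, random_state=0).fit(X)

labels = bgm.predict(X)        # hard component assignment per sample
proba = bgm.predict_proba(X)   # per-component posterior; rows sum to 1
avg_ll = bgm.score(X)          # per-sample average log-likelihood

print(labels.shape, proba.shape, avg_ll)
```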

Attributes

model_signatures

Returns model signature of current class.