Hyperparameter Tuner

The Snowflake ML hyperparameter tuning framework allows you to optimize machine learning model hyperparameters efficiently using various search algorithms.

Overview

The tuner provides a simple interface for hyperparameter optimization with support for:

  • Multiple search algorithms (Grid Search, Random Search, Bayesian Optimization)

  • Flexible parameter space definitions

  • Integration with Snowflake’s distributed computing infrastructure

  • Various sampling functions for continuous and discrete parameters

For a complete end-to-end example, see Complete Example.

Main Class

class snowflake.ml.modeling.tune.Tuner(train_func: Callable[[], None], search_space: Dict[str, SamplingFunction | float | int | str | bool | List[float | int | str | bool]], tuner_config: TunerConfig)

Bases: object

Hyperparameter tuning interface for machine learning models.

Example

Basic usage pattern:

>>> def train_func():
...     context = get_tuner_context()
...     params = context.get_hyper_params()
...     # Train your model with params
...     # Evaluate and report metrics
...     context.report(metrics={"accuracy": accuracy}, model=model)
>>>
>>> from snowflake.ml.modeling.tune import uniform, randint, TunerConfig
>>>
>>> search_space = {
...     "learning_rate": uniform(0.01, 0.1),
...     "n_estimators": randint(50, 200)
... }
>>> config = TunerConfig(metric="accuracy", mode="max", num_trials=10)
>>> tuner = Tuner(train_func, search_space, config)
>>> results = tuner.run(dataset_map=dataset_map)

Initialize the Tuner.

Parameters:
  • train_func – The training function to optimize. Should use get_tuner_context() to access hyperparameters and datasets, then call report() to report metrics. Must take no parameters and return None.

  • search_space

    Dictionary mapping parameter names to their search definitions.

    SearchSpaceValue = Union[SamplingFunction, float, int, str, bool, List[Union[float, int, str, bool]]]

    Each value can be one of:

    • Sampling functions: See SamplingFunction for available functions.

    • Lists for grid search: [0.01, 0.1, 0.2] or ['relu', 'tanh', 'sigmoid']

    • Single values: 0.1, 42, 'adam', True (fixed parameters)

    Example:

    from snowflake.ml.modeling.tune import uniform, choice
    
    search_space = {
        "learning_rate": uniform(0.01, 0.3),
        "optimizer": choice(['adam', 'sgd']),
        "epochs": [10, 20, 50]
    }
    

  • tuner_config (TunerConfig) – Configuration specifying the optimization settings including metric, search algorithm, and number of trials.

run(dataset_map: Dict[str, DataConnector] | None = None) → TunerResults

Execute the hyperparameter tuning process.

Runs the optimization process using the configured search algorithm to find the best hyperparameter configuration based on the specified metric.

Parameters:

dataset_map (Optional[Dict[str, DataConnector]]) – Optional mapping of dataset names to DataConnector objects. These datasets will be made available to the training function through the TunerContext.

Returns:

Results container with trial data, best configuration, and trained model.

See TunerResults for detailed field descriptions.

Return type:

TunerResults
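
Example

A minimal single-node call might look like this sketch (the connector variable names are illustrative):

>>> results = tuner.run(dataset_map={"train": train_connector, "test": test_connector})
>>> print(results.best_result)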

Tip

Multi-Node HPO

For distributed hyperparameter optimization across multiple compute nodes:

from snowflake.ml.runtime_cluster import scale_cluster

# Scale cluster before running tuner
scale_cluster(2)  # Scale to 2 nodes for parallel trials

# Then run your tuner as normal
results = tuner.run(dataset_map={"train": your_data_connector})

This enables parallel execution of multiple trials simultaneously, reducing total time for large search spaces.

Configuration

class snowflake.ml.modeling.tune.TunerConfig(metric: str, mode: str, search_alg: SearchAlgorithm = <factory>, num_trials: int = 5, uses_snowflake_trainer: bool = False, max_concurrent_trials: int | None = None, resource_per_trial: dict | None = None)

Bases: object

Configuration class for the tuning process.

metric

The name of the metric to optimize. This must match exactly what you report in your training function via tuner_context.report(metrics={...}).

For example, if your training function calls: tuner_context.report(metrics={"accuracy": 0.95, "loss": 0.1})

Then valid metric names would be "accuracy" or "loss".

Common examples: "accuracy", "f1_score", "mse", "loss", "auc"

Type:

str

mode

The optimization mode for the metric. Must be either "min" for minimization or "max" for maximization.

Use "max" for metrics where higher is better (accuracy, f1_score, auc). Use "min" for metrics where lower is better (loss, error, mse).

Type:

str

search_alg

The search algorithm to use for exploring the hyperparameter space. Defaults to random search.

Type:

SearchAlgorithm

num_trials

The maximum number of parameter configurations to try. Defaults to 5. Note: In a grid search, num_trials is set to 1 so that each unique parameter combination in the grid is evaluated exactly once.

Type:

int
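
For example, a grid-search configuration over list-valued parameters might look like the following sketch (the parameter values are illustrative):

from snowflake.ml.modeling.tune import TunerConfig
from snowflake.ml.modeling.tune.search import GridSearch

search_space = {
    "learning_rate": [0.01, 0.1, 0.2],
    "max_depth": [3, 5, 7],
}
# With GridSearch and num_trials=1, each of the 3 x 3 = 9 grid combinations runs once
config = TunerConfig(metric="accuracy", mode="max", search_alg=GridSearch(), num_trials=1)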

uses_snowflake_trainer

Specifies if the training function leverages a distributed Snowflake trainer (e.g., XGBEstimator, LightGBMEstimator, PyTorchTrainer). Defaults to False.

This flag is necessary because distributed trainers may require additional resources, and without this information, it’s impossible to determine the correct resources per trial.

Type:

bool
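
For example, when the training function relies on a distributed Snowflake trainer, set the flag explicitly (a minimal sketch):

from snowflake.ml.modeling.tune import TunerConfig

config = TunerConfig(
    metric="accuracy",
    mode="max",
    num_trials=5,
    uses_snowflake_trainer=True,  # the training function uses a distributed Snowflake trainer
)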

max_concurrent_trials

The maximum number of concurrently running trials per node. If not specified, it defaults to the total number of nodes in the cluster. This value must be a positive integer if provided.

Type:

Optional[int]

resource_per_trial

An optional dictionary specifying the resources allocated per trial.

Examples:

  • {'CPU': 1} reserves 1 CPU core per trial

  • {'CPU': 2, 'GPU': 1} reserves 2 CPU cores and 1 GPU per trial

Important: To enable GPU-based hyperparameter optimization, you must explicitly specify GPU resources (e.g., {'GPU': 1}). GPU resources are never automatically allocated; they must be explicitly requested.

When this parameter is not provided, the resource allocation per trial is inferred based on the max_concurrent_trials setting and total cluster resources.

Type:

Optional[dict]
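
A sketch combining the concurrency and resource settings (the values are illustrative):

from snowflake.ml.modeling.tune import TunerConfig

config = TunerConfig(
    metric="loss",
    mode="min",
    num_trials=20,
    max_concurrent_trials=4,                  # run at most 4 trials at a time
    resource_per_trial={"CPU": 2, "GPU": 1},  # reserve 2 CPU cores and 1 GPU per trial
)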

Example

>>> from snowflake.ml.modeling.tune import TunerConfig
>>> config = TunerConfig(
...     metric="accuracy",
...     mode="max",
...     num_trials=5,
... )

Context and Results

class snowflake.ml.modeling.tune.TunerContext(*, hyper_params: Dict[str, Any], progress_reporter: Callable[[Dict[str, Any], Any | None], None], dataset_map: Dict[str, DataConnector] | None = None)

Bases: object

A context class for managing configuration, reporting, and dataset information in hyperparameter tuning trials.

This class provides a centralized way to access hyperparameters, report metrics, and access datasets within a training function during hyperparameter optimization. Users should not create instances of this class directly; instead, use get_tuner_context() within your training function.

Example

>>> # Within your training function (see Tuner class for complete example)
>>> tuner_context = get_tuner_context()
>>>
>>> # Get current trial's hyperparameters
>>> config = tuner_context.get_hyper_params()  # {"learning_rate": 0.1, "n_estimators": 100}
>>>
>>> # Access datasets if provided to tuner.run()
>>> datasets = tuner_context.get_dataset_map()  # {"train": connector, "test": connector}
>>>
>>> # Report metrics and optionally save model
>>> tuner_context.report(metrics={"accuracy": 0.95}, model=trained_model)
get_dataset_map() → Dict[str, DataConnector] | None

Retrieve the dataset mapping provided to tuner.run().

Returns the DataConnector objects that were passed to the tuner’s run() method via the dataset_map parameter. Use this to access your training, validation, and test datasets within your training function.

Returns:

A mapping of dataset names to DataConnector objects.

Keys are the names you specified in dataset_map (e.g., “train”, “test”, “validation”). Values are DataConnector objects that you can call .to_pandas() on to get the actual data. Returns None if no dataset_map was provided to tuner.run().

Return type:

Optional[Dict[str, DataConnector]]

Example

>>> datasets = tuner_context.get_dataset_map()
>>> if datasets:
...     train_df = datasets["train"].to_pandas()
...     test_df = datasets["test"].to_pandas()
... else:
...     # Handle case where no datasets were provided
...     train_df = load_your_data_somehow()
get_hyper_params() → Dict[str, Any]

Retrieve the hyperparameter configuration for the current trial.

Returns the specific hyperparameter values selected by the search algorithm for this trial. The keys correspond to parameter names defined in your search_space, and values are the sampled/selected values to use for training.

Returns:

The hyperparameter configuration dictionary for this trial.

Keys are parameter names from your search_space. Values are the specific parameter values to use (e.g., learning_rate=0.1, n_estimators=150).

Return type:

Dict[str, Any]

Example

>>> config = tuner_context.get_hyper_params()
>>> # Example output: {"learning_rate": 0.05, "n_estimators": 120, "max_depth": 6}
>>> learning_rate = config["learning_rate"]
>>> n_estimators = config["n_estimators"]
report(metrics: Dict[str, Any], model: Any | None = None) → None

Report trial results back to the hyperparameter optimization process.

This method is used to report the performance metrics of your trained model and optionally the model itself. The metrics are used by the search algorithm to select the next set of hyperparameters to try. The model from the best-performing trial will be available in the final TunerResults.

Parameters:
  • metrics (Dict[str, Any]) – A dictionary containing the performance metrics. Must include the metric specified in TunerConfig.metric (e.g., “accuracy”, “loss”). Can include additional metrics for analysis. Keys are metric names (strings), values should be numeric (float, int) for proper optimization.

  • model (Optional[Any], optional) – The trained model object to save with this trial. Currently, only picklable models are supported. The model from the best trial will be available as TunerResults.best_model. Defaults to None; omit it if you don't need to save the model.

Example

>>> # Report just the required metric
>>> tuner_context.report(metrics={"accuracy": 0.95})
>>>
>>> # Report multiple metrics with model
>>> tuner_context.report(
...     metrics={"accuracy": 0.95, "f1_score": 0.93, "training_time": 45.2},
...     model=trained_model
... )

Note

The optimization process uses only the metric specified in TunerConfig.metric and TunerConfig.mode to determine the “best” trial. Additional metrics are stored for analysis but don’t affect the optimization process.

class snowflake.ml.modeling.tune.TunerResults(results: DataFrame, best_result: DataFrame, best_model: Any)

Bases: object

Results container for hyperparameter optimization runs.

This class contains all the information from a completed tuning run, including detailed results for all trials, the best configuration found, and the trained model from the best trial.

results

Complete results from all trials. Each row represents one trial with columns for hyperparameters (prefixed with ‘config/’) and reported metrics. Useful for analyzing optimization progress and comparing different configurations.

Type:

pd.DataFrame

best_result

Single-row DataFrame containing the trial that achieved the best performance according to the optimization metric. Contains the same columns as ‘results’ but filtered to just the best trial.

Type:

pd.DataFrame

best_model

The trained model object from the best performing trial. This is the model instance that was passed to tuner_context.report() in the training function. It can be used directly for predictions or further analysis.

Type:

Any

Example

>>> # After running tuner.run()
>>> results = tuner.run(dataset_map=dataset_map)
>>>
>>> # View all trial results
>>> print(results.results.head())
>>>
>>> # Access best configuration and performance
>>> best_config = results.best_result
>>>
>>> # Use the best model for predictions
>>> best_model = results.best_model

Note

The exact columns in ‘results’ and ‘best_result’ depend on:

  • The hyperparameters defined in your search_space (appear as ‘config/<param_name>’)

  • The metrics reported by your training function via tuner_context.report()

  • Additional metadata like trial execution time and iteration number
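
As a sketch, the results DataFrame can be explored with ordinary pandas operations; the exact column names depend on your search_space and on the metrics you report (an "accuracy" metric is assumed here):

>>> df = results.results
>>> # Hyperparameter columns carry the 'config/' prefix
>>> config_cols = [c for c in df.columns if c.startswith("config/")]
>>> # Rank trials by the reported metric
>>> print(df.sort_values("accuracy", ascending=False)[config_cols + ["accuracy"]].head())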

Sampling Functions

The following sampling functions define how hyperparameters are sampled during the search:

class snowflake.ml.modeling.tune.SamplingFunction

Bases: object

Base class for hyperparameter sampling functions.

Sampling functions define how hyperparameter values are sampled during the optimization process. Different sampling functions are appropriate for different types of parameters and search strategies.

class snowflake.ml.modeling.tune.Uniform(lower: float, upper: float)

Bases: SamplingFunction

Uniform distribution sampling for continuous parameters.

Samples values uniformly from the specified range [lower, upper). This is appropriate for parameters where all values in the range are equally likely to be optimal.

Parameters:
  • lower (float) – Lower bound of the sampling range (inclusive).

  • upper (float) – Upper bound of the sampling range (exclusive).

Example

>>> from snowflake.ml.modeling.tune import uniform
>>>
>>> # Sample learning rate uniformly between 0.001 and 0.1
>>> learning_rate = uniform(0.001, 0.1)
>>>
>>> search_space = {
...     'learning_rate': learning_rate,
...     'dropout_rate': uniform(0.0, 0.5)
... }
class snowflake.ml.modeling.tune.LogUniform(lower: float, upper: float)

Bases: SamplingFunction

Log-uniform distribution sampling for exponentially-scaled parameters.

Samples values from a log-uniform distribution in the range [lower, upper). This is appropriate for parameters that vary exponentially, such as learning rates, where order-of-magnitude differences are meaningful.

Parameters:
  • lower (float) – Lower bound of the sampling range (inclusive).

  • upper (float) – Upper bound of the sampling range (exclusive).

Example

>>> from snowflake.ml.modeling.tune import loguniform
>>>
>>> # Sample learning rate log-uniformly between 1e-5 and 1e-1
>>> learning_rate = loguniform(1e-5, 1e-1)
>>>
>>> search_space = {
...     'learning_rate': learning_rate,
...     'weight_decay': loguniform(1e-6, 1e-2)
... }
class snowflake.ml.modeling.tune.RandInt(lower: int, upper: int)

Bases: SamplingFunction

Random integer sampling for discrete parameters.

Samples integer values uniformly from the specified range [lower, upper). This is appropriate for discrete parameters such as the number of layers, batch size, or other integer-valued hyperparameters.

Parameters:
  • lower (int) – Lower bound of the sampling range (inclusive).

  • upper (int) – Upper bound of the sampling range (exclusive).

Example

>>> from snowflake.ml.modeling.tune import randint
>>>
>>> search_space = {
...     'n_estimators': randint(50, 200),
...     'max_depth': randint(3, 10),
...     'batch_size': randint(16, 128)
... }

Search Algorithms

Choose from the following search algorithms to optimize your hyperparameters:

class snowflake.ml.modeling.tune.search.SearchAlgorithm

Bases: object

Base class for hyperparameter search algorithms.

This is the abstract base class that all search algorithms inherit from. It defines the interface for search algorithm implementations.

class snowflake.ml.modeling.tune.search.GridSearch

Bases: SearchAlgorithm

Grid search algorithm.

Exhaustively evaluates all possible combinations of hyperparameters in the search space. This guarantees finding the optimal configuration within the defined grid, but can be computationally expensive with many parameters or large parameter ranges.

Example

>>> from snowflake.ml.modeling.tune.search import GridSearch
>>>
>>> search_alg = GridSearch()
>>>
>>> # Define the search space as lists for grid search; the following evaluates 3 × 3 × 2 = 18 combinations
>>> search_space = {
...     'learning_rate': [0.01, 0.1, 0.2],
...     'max_depth': [3, 5, 7],
...     'n_estimators': [50, 100]
... }
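
To run this grid, pass the algorithm to TunerConfig; as noted under num_trials, a value of 1 evaluates each grid combination exactly once (a sketch):

>>> from snowflake.ml.modeling.tune import TunerConfig
>>>
>>> config = TunerConfig(
...     metric="accuracy",
...     mode="max",
...     search_alg=GridSearch(),
...     num_trials=1,
... )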
class snowflake.ml.modeling.tune.search.RandomSearch(random_state: int | None = None)

Bases: SearchAlgorithm

Random search algorithm.

Randomly samples hyperparameter configurations from the defined search space. This is often a good baseline and can be surprisingly effective, especially in high-dimensional spaces where grid search becomes impractical.

Parameters:

random_state (Optional[int]) – Random seed for reproducible results. If None, results will vary between runs.

Example

>>> from snowflake.ml.modeling.tune.search import RandomSearch
>>>
>>> # Use random sampling
>>> search_alg = RandomSearch()
>>>
>>> # Use fixed seed for reproducibility
>>> search_alg = RandomSearch(random_state=42)
class snowflake.ml.modeling.tune.search.BayesOpt(utility_kwargs: Dict[str, Any] | None = None)

Bases: SearchAlgorithm

Bayesian Optimization search algorithm.

Uses Gaussian processes to model the objective function and intelligently select the next hyperparameter configuration to evaluate. This algorithm is particularly effective when evaluations are expensive and you want to minimize the number of trials needed to find good hyperparameters.

Parameters:

utility_kwargs (Optional[Dict[str, Any]]) – Optional keyword arguments to pass to the acquisition function. Common options include:

  • kind: Type of acquisition function ('ucb', 'ei', 'poi')

  • kappa: Exploration parameter for UCB

  • xi: Exploration parameter for EI/POI

Example

>>> from snowflake.ml.modeling.tune.search import BayesOpt
>>>
>>> # Use default settings
>>> search_alg = BayesOpt()
>>>
>>> # Customize acquisition function
>>> search_alg = BayesOpt(utility_kwargs={
...     'kind': 'ucb',
...     'kappa': 1.96  # Less exploration
... })

Complete Example

Here’s a complete example showing how to optimize XGBoost hyperparameters using the digits dataset:

Step 1: Prepare your data

import xgboost as xgb
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from snowflake.ml.data.data_connector import DataConnector
from snowflake.ml.modeling.tune import Tuner, TunerConfig, uniform, get_tuner_context
from snowflake.ml.modeling.tune.search import BayesOpt
from snowflake.snowpark.context import get_active_session

# Load and split data
session = get_active_session()
X, y = datasets.load_digits(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train = X_train.assign(target=y_train).reset_index(drop=True)
X_test = X_test.assign(target=y_test).reset_index(drop=True)

# Create DataConnectors
dataset_map = {
    "train": DataConnector.from_dataframe(session.create_dataframe(X_train)),
    "test": DataConnector.from_dataframe(session.create_dataframe(X_test)),
}

Step 2: Define your training function

The training function gets hyperparameters from the tuner, trains a model, and reports results.

def train_func():
    tuner_context = get_tuner_context()
    config = tuner_context.get_hyper_params()
    dm = tuner_context.get_dataset_map()

    # Load data
    train_df = dm["train"].to_pandas()
    test_df = dm["test"].to_pandas()
    train_labels = train_df['"target"']
    train_features = train_df.drop(columns=['"target"'])
    test_labels = test_df['"target"']
    test_features = test_df.drop(columns=['"target"'])

    # Train with current hyperparameters
    model = xgb.XGBClassifier(
        **{k: int(v) if k != "learning_rate" else v for k, v in config.items()},
        random_state=42
    )
    model.fit(train_features, train_labels)

    # Evaluate and report
    accuracy = accuracy_score(test_labels, model.predict(test_features))
    tuner_context.report(metrics={"accuracy": accuracy}, model=model)

Step 3: Configure optimization

Define which hyperparameters to optimize and how to search the space.

search_space = {
    "n_estimators": uniform(50, 200),
    "max_depth": uniform(3, 10),
    "learning_rate": uniform(0.01, 0.3)
}
tuner_config = TunerConfig(
    metric="accuracy",
    mode="max",
    search_alg=BayesOpt(),
    num_trials=10
)

Step 4: Run optimization

tuner = Tuner(train_func, search_space, tuner_config)
results = tuner.run(dataset_map=dataset_map)
print(f"Best accuracy: {results.best_result['accuracy'].iloc[0]}")
print(f"Best parameters: {results.best_result}")