Examples and Quickstarts

This topic contains examples and quickstarts for common model logging and model inference use cases in Snowflake ML. You can use these examples as a starting point for your own use case.

Beginner Quickstart

Getting started with Snowflake ML: train an XGBoost regression model, log it to the model registry, and run inference in a warehouse.

Quickstart
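
The core of that workflow is a handful of calls to the snowflake-ml-python Model Registry API. The following is a minimal sketch rather than the quickstart's exact code; it assumes an existing Snowpark session and pandas training/test data (X_train, y_train, X_test), and the model and version names are placeholders.

    import xgboost as xgb
    from snowflake.ml.registry import Registry

    # Train an XGBoost regressor locally; X_train and y_train are assumed to exist.
    model = xgb.XGBRegressor(n_estimators=100, max_depth=4)
    model.fit(X_train, y_train)

    # Open the Model Registry in the session's current database and schema.
    reg = Registry(session=session)

    # Log the model; the sample input is used to infer the model signature.
    mv = reg.log_model(
        model,
        model_name="my_xgb_regressor",        # placeholder name
        version_name="v1",
        sample_input_data=X_train.head(10),
    )

    # Run inference in a warehouse; "predict" is one of the model's exposed functions.
    predictions = mv.run(X_test, function_name="predict")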

XGBoost model, CPU inference in Snowpark Container Services

This code illustrates the key steps in deploying an XGBoost model to Snowpark Container Services (SPCS) and then using the deployed model for inference.

Example
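
In outline, serving a logged model on SPCS means creating a service from the model version and then routing inference calls to that service. The sketch below assumes a model version mv logged as in the previous example and an input DataFrame input_df; the service, compute pool, and image repository names are placeholders, and parameter names may differ slightly across versions of snowflake-ml-python.

    # Create a CPU-backed inference service for a logged model version (mv).
    mv.create_service(
        service_name="xgb_cpu_service",               # placeholder service name
        service_compute_pool="my_cpu_compute_pool",   # placeholder SPCS compute pool
        image_repo="my_db.my_schema.my_image_repo",   # placeholder image repository
        ingress_enabled=True,
        max_instances=1,
    )

    # Route inference to the service instead of a warehouse.
    predictions = mv.run(input_df, function_name="predict", service_name="xgb_cpu_service")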

Log a pipeline with custom preprocessing and model training

This example illustrates how to:

  • Perform feature engineering

  • Train a pipeline with custom preprocessing steps and an XGBoost forecasting model

  • Run hyperparameter optimization

  • Log the best pipeline

  • Run inference in a warehouse or in Snowpark Container Services (SPCS)

Example Notebook
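
A compressed sketch of the pipeline and tuning steps above, assuming feature engineering has already produced pandas X_train and y_train; the preprocessing step, parameter grid, and names are illustrative placeholders rather than the notebook's actual code.

    import xgboost as xgb
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import GridSearchCV
    from snowflake.ml.registry import Registry

    # Pipeline with a preprocessing step followed by an XGBoost regressor.
    pipe = Pipeline([
        ("scale", StandardScaler()),
        ("model", xgb.XGBRegressor()),
    ])

    # Simple hyperparameter search over the model step of the pipeline.
    search = GridSearchCV(
        pipe,
        param_grid={"model__n_estimators": [100, 300], "model__max_depth": [3, 6]},
        cv=3,
    )
    search.fit(X_train, y_train)

    # Log the best pipeline; it can then be run in a warehouse or deployed to SPCS.
    reg = Registry(session=session)
    mv = reg.log_model(
        search.best_estimator_,
        model_name="forecast_pipeline",   # placeholder
        version_name="v1",
        sample_input_data=X_train.head(10),
    )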

Large-scale open-source embeddings model, GPU inference

This example uses Snowflake Notebooks on Container Runtime to train a large-scale embeddings model from the Hugging Face sentence-transformers library and run large-scale predictions using GPUs on Snowpark Container Services (SPCS).

Quickstart
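
The pattern, in hedged outline: load a sentence-transformers model, log it to the registry, and create a GPU-backed SPCS service for it. The model choice, compute pool, image repository, and documents_df below are placeholders, and the expected shape of sample_input_data for sentence-transformers models may vary by library version.

    import pandas as pd
    from sentence_transformers import SentenceTransformer
    from snowflake.ml.registry import Registry

    # Load a pretrained embeddings model from Hugging Face (placeholder model choice).
    embedder = SentenceTransformer("all-MiniLM-L6-v2")

    # Log it to the Model Registry; a small DataFrame of strings serves as sample input.
    reg = Registry(session=session)
    mv = reg.log_model(
        embedder,
        model_name="text_embedder",                       # placeholder
        version_name="v1",
        sample_input_data=pd.DataFrame({"TEXT": ["example sentence"]}),
    )

    # Create a GPU-backed inference service on SPCS and run large-scale encoding there.
    mv.create_service(
        service_name="embedder_gpu_service",              # placeholder
        service_compute_pool="my_gpu_compute_pool",       # placeholder GPU compute pool
        image_repo="my_db.my_schema.my_image_repo",       # placeholder
        gpu_requests="1",
        ingress_enabled=True,
    )
    embeddings = mv.run(documents_df, function_name="encode", service_name="embedder_gpu_service")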

Complete pipeline with distributed PyTorch recommender model, GPU inference

This example shows how to build an end-to-end distributed PyTorch recommender model using GPUs and deploy it for GPU inference on Snowpark Container Services (SPCS).

Quickstart
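
As a rough illustration of the final logging step only (the quickstart's distributed training loop is out of scope here), a minimal recommender module might be logged as below. The module definition and names are illustrative, and the assumption that a torch.nn.Module is logged with a list of sample tensors should be checked against the current snowflake-ml-python documentation.

    import torch
    from snowflake.ml.registry import Registry

    class Recommender(torch.nn.Module):
        """Toy recommender: embeds user and item ids, scores the pair with a small MLP."""
        def __init__(self, n_users: int, n_items: int, dim: int = 32) -> None:
            super().__init__()
            self.user_emb = torch.nn.Embedding(n_users, dim)
            self.item_emb = torch.nn.Embedding(n_items, dim)
            self.mlp = torch.nn.Sequential(
                torch.nn.Linear(2 * dim, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1)
            )

        def forward(self, user_ids: torch.Tensor, item_ids: torch.Tensor) -> torch.Tensor:
            x = torch.cat([self.user_emb(user_ids), self.item_emb(item_ids)], dim=-1)
            return self.mlp(x)

    model = Recommender(n_users=10_000, n_items=5_000)
    # ... train with PyTorch, optionally distributed across GPUs ...

    # Log the trained module; a list of sample tensors is assumed to be accepted as
    # sample input for torch.nn.Module models (verify against your library version).
    reg = Registry(session=session)
    mv = reg.log_model(
        model,
        model_name="torch_recommender",   # placeholder
        version_name="v1",
        sample_input_data=[torch.zeros(2, dtype=torch.long), torch.zeros(2, dtype=torch.long)],
    )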

Bring an existing model trained externally (e.g., AWS SageMaker, Azure ML, or GCP Vertex AI) to Snowflake

These examples show how to bring your existing model trained in AWS SageMaker, Azure ML, or GCP Vertex AI to Snowflake (see the blog post for more details).

Bring an MLflow PyFunc model to Snowflake

This example shows how to log an MLflow PyFunc model to the Snowflake Model Registry and run inference.

Example
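
The essence of that example, sketched under the assumption that the model was trained and saved elsewhere with MLflow: load it as a PyFunc model and pass it to the registry. The model URI, names, and input_df are placeholders.

    import mlflow
    from snowflake.ml.registry import Registry

    # Load a model previously trained and saved with MLflow (URI is a placeholder).
    pyfunc_model = mlflow.pyfunc.load_model("models:/my_model/1")

    # Log the PyFunc model to the Snowflake Model Registry. If the MLflow model does
    # not embed a signature, sample_input_data may also be required.
    reg = Registry(session=session)
    mv = reg.log_model(
        pyfunc_model,
        model_name="mlflow_model",   # placeholder
        version_name="v1",
    )

    # Run inference in a warehouse.
    predictions = mv.run(input_df, function_name="predict")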

Log a partitioned forecasting model for training and inference

This example shows how to log a forecasting model for running partitioned training and inference in Snowflake.

Quickstart
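
The key moving parts are a custom model whose inference method is marked as partitioned, and an inference call that names the partition column. The following is a hedged sketch with placeholder column names, a trivial stand-in "forecast", and assumed sample_df/sales_df DataFrames; check the decorator name and logging options against your snowflake-ml-python version.

    import pandas as pd
    from snowflake.ml.model import custom_model
    from snowflake.ml.registry import Registry

    class PartitionedForecaster(custom_model.CustomModel):
        # Marked as partitioned inference: Snowflake calls it once per partition of the input.
        @custom_model.partitioned_inference_api
        def predict(self, input: pd.DataFrame) -> pd.DataFrame:
            # Each call sees all rows of one partition (for example, one store).
            # A trivial stand-in for fitting and forecasting on that partition's history:
            forecast = input["SALES"].mean()
            return pd.DataFrame({"FORECAST": [forecast] * len(input)})

    reg = Registry(session=session)
    mv = reg.log_model(
        PartitionedForecaster(custom_model.ModelContext()),
        model_name="partitioned_forecaster",   # placeholder
        version_name="v1",
        sample_input_data=sample_df,           # small pandas DataFrame with the input columns
        # Depending on the library version, options={"function_type": "TABLE_FUNCTION"}
        # may also be needed when logging a partitioned model.
    )

    # Partitioned inference: rows are grouped by STORE_ID and the model runs once per group.
    results = mv.run(sales_df, function_name="predict", partition_column="STORE_ID")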

Log many models as a collection for running partitioned inference at scale

This example shows how to log thousands of models as a custom partitioned model for running distributed, partitioned inference.

Quickstart
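
The many-models pattern extends the partitioned approach above: the per-partition models are packaged into the custom model's context, and the partitioned inference method looks up the right one for each partition. A hedged sketch, assuming a dict of already-trained models keyed by partition id; the ModelContext constructor and model_ref accessor should be checked against your snowflake-ml-python version.

    import pandas as pd
    from snowflake.ml.model import custom_model

    # trained_models is assumed to exist: {"STORE_001": fitted_model, "STORE_002": ..., ...}
    context = custom_model.ModelContext(models=trained_models)

    class ManyModelCollection(custom_model.CustomModel):
        @custom_model.partitioned_inference_api
        def predict(self, input: pd.DataFrame) -> pd.DataFrame:
            # All rows in one call belong to a single partition; look up that partition's model.
            store_id = input["STORE_ID"].iloc[0]
            model = self.context.model_ref(store_id)
            preds = model.predict(input.drop(columns=["STORE_ID"]))
            return pd.DataFrame({"PREDICTION": preds})

    collection = ManyModelCollection(context)
    # Log the collection as in the previous example, then run inference with
    # partition_column="STORE_ID" to fan out across thousands of sub-models.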