Bring your own model types via serialized files¶
The model registry supports logging built-in model types directly. To log other model types, use snowflake.ml.model.custom_model.CustomModel. Serializable models trained with external tools or obtained from open-source repositories can be wrapped in a CustomModel.
This guide explains how to:
Create a custom model.
Create model context with files and model objects.
Include additional code with your model using code_paths.
Log the custom model to the Snowflake Model Registry.
Deploy the model for inference.
Note
This quickstart provides an example of logging a custom PyCaret model.
Defining model context by keyword arguments¶
The snowflake.ml.model.custom_model.ModelContext can be instantiated with user-defined keyword arguments. The values can either be string file paths or instances of supported model types. The files and serialized models will be packaged with the model for use in the model inference logic.
Using in-memory model objects¶
When working with built-in model types, the recommended approach is to pass in-memory model objects directly to the ModelContext. This allows Snowflake ML to handle serialization automatically.
Note
In your custom model class, always access model objects through the model context. For example, use self.model = self.context['my_model']
instead of directly assigning self.model = model (where model is an in-memory model object). Accessing the model
directly captures a second copy of the model in a closure, which results in significantly larger model files during serialization.
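As an illustration, the following sketch passes an in-memory scikit-learn model into the context and reads it back through the context, as the note above recommends. The names clf, my_model, and MyModel are hypothetical:

```python
import pandas as pd
from snowflake.ml.model import custom_model

# clf is assumed to be an already-trained, supported in-memory model,
# for example a fitted sklearn.ensemble.RandomForestClassifier.
mc = custom_model.ModelContext(my_model=clf)

class MyModel(custom_model.CustomModel):
    def __init__(self, context: custom_model.ModelContext) -> None:
        super().__init__(context)
        # Correct: read the model back through the context,
        # not by assigning the in-memory object directly.
        self.model = self.context["my_model"]

    @custom_model.inference_api
    def predict(self, X: pd.DataFrame) -> pd.DataFrame:
        return pd.DataFrame({"output": self.model.predict(X)})
```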
Using serialized files¶
For models or data that are stored in serialized files like Python pickles or JSON, you can provide file paths to your ModelContext. Files can be serialized models, configuration files, or files containing parameters. This is useful when working with pre-trained models saved to disk or configuration data.
Important
When you combine a supported model type (such as XGBoost) with unsupported models or data, you don’t need to
serialize the supported model yourself. Set the supported model object directly in the context (e.g., base_model =
my_xgb_model) and it is serialized automatically.
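For example, a sketch of a context that mixes a supported XGBoost model object with pickled and JSON artifacts (the keyword and file names are hypothetical):

```python
from snowflake.ml.model import custom_model

mc = custom_model.ModelContext(
    base_model=my_xgb_model,      # supported type: serialized automatically
    encoder_file="encoders.pkl",  # arbitrary pickle: your code deserializes it
    config_file="config.json",    # plain configuration file
)
```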
Defining inference parameters¶
Custom model inference methods can accept optional parameters that control inference behavior, such as a temperature
setting or maximum number of tokens. Define parameters as keyword-only arguments (after *) on the
@inference_api method, with type annotations and default values.
When this model is logged, the parameters are automatically included in the model signature. Callers can override them at inference time, or omit them to use the defaults. For more information, see Specifying model signatures.
The following requirements apply to inference parameters:
They must be keyword-only (defined after * in the method signature).
They must have a type annotation. Supported types are int, float, str, bool, bytes, datetime.datetime, and list with a supported element type (for example, list[str] or list[list[int]]).
They must have a default value.
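The sketch below shows an inference method that follows these rules. The class name, parameters, and inference logic are hypothetical:

```python
import pandas as pd
from snowflake.ml.model import custom_model

class MyTextModel(custom_model.CustomModel):
    @custom_model.inference_api
    def predict(
        self,
        input_df: pd.DataFrame,
        *,
        temperature: float = 0.7,  # keyword-only, annotated, with a default
        max_tokens: int = 256,
    ) -> pd.DataFrame:
        # Hypothetical inference logic that uses the parameters.
        texts = [t[:max_tokens] for t in input_df["prompt"]]
        return pd.DataFrame({"completion": texts, "temperature": temperature})
```

When this model is logged, callers can pass temperature or max_tokens at inference time, or omit them to use the defaults.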
Testing and logging a custom model¶
You can test a custom model by running it locally.
When the model works as intended, log it to the Snowflake Model Registry. As shown in the next code
example, provide conda_dependencies (or pip_requirements) to specify the libraries that the model class needs.
Provide sample_input_data (a pandas or Snowpark DataFrame) to infer the input signature for the model. Alternatively,
provide a model signature.
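A sketch of testing locally and then logging, assuming a custom model class MyModel, a model context mc, a sample pandas DataFrame sample_df, and an active Snowpark session named session (all hypothetical names):

```python
from snowflake.ml.registry import Registry

# Local smoke test: call the model directly before logging it.
my_model = MyModel(mc)
local_predictions = my_model.predict(sample_df)

reg = Registry(session=session)
mv = reg.log_model(
    my_model,
    model_name="my_custom_model",
    version_name="v1",
    conda_dependencies=["scikit-learn"],  # libraries the model class needs
    sample_input_data=sample_df,          # used to infer the input signature
)
```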
Including additional code with code_paths¶
Use the code_paths parameter in Registry.log_model to
package Python code, such as helper modules, utilities, and configuration files with your model. You can import this code just as you would locally.
You can provide either string paths, which copy files or directories as-is, or CodePath objects. CodePath objects give more control over which subdirectories or files are included and over the import paths your model uses.
Using string paths¶
Pass a list of string paths to include files or directories. The last component of each path becomes the importable module name.
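For example, the following sketch packages a hypothetical helper package located at src/my_helpers (reg, my_model, and sample_df are assumed to exist from earlier steps):

```python
reg.log_model(
    my_model,
    model_name="my_custom_model",
    version_name="v2",
    code_paths=["src/my_helpers"],  # importable as `import my_helpers`
    sample_input_data=sample_df,
)
```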
Using CodePath with filter¶
Use the CodePath class when you want to package only part of a directory tree
or control the import paths used by your model.
A CodePath has two parameters:
root: A directory or file path.
filter (optional): A relative path under root that selects a subdirectory or file.
When filter is provided, the source is root/filter, and the filter value determines the import path.
For example, filter="utils" allows you to import utils, and filter="pkg/subpkg" allows you to
import pkg.subpkg.
Example: Given this project structure:
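The layout below is a hypothetical project structure for illustration:

```
project/
├── utils/
│   ├── __init__.py
│   └── helpers.py
├── models/
│   ├── __init__.py
│   └── layers.py
└── tests/
    └── test_utils.py
```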
To package only utils/ and models/, excluding tests/:
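A sketch using CodePath objects (the CodePath import location is assumed here; reg, my_model, and sample_df are hypothetical names from earlier steps):

```python
from snowflake.ml.model import CodePath  # import path assumed

reg.log_model(
    my_model,
    model_name="my_custom_model",
    version_name="v3",
    code_paths=[
        CodePath(root="project", filter="utils"),   # import utils
        CodePath(root="project", filter="models"),  # import models
    ],
    sample_input_data=sample_df,
)
```

The tests/ directory is not listed, so it is not packaged with the model.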
You can also filter a single file:
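For instance, a sketch that packages a single file from the hypothetical layout above:

```python
# Packages only project/utils/helpers.py, importable as `utils.helpers`.
CodePath(root="project", filter="utils/helpers.py")
```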
Example: Logging a PyCaret model¶
The following example uses PyCaret to log a custom model type. PyCaret is a low-code, high-efficiency third-party package that Snowflake doesn’t support natively. You can bring your own model types using similar methods.
Step 1: Define the model context¶
Before you log your model, define the model context. The model context holds the artifacts, such as the serialized model file, that your custom model type needs at inference time.
The following example specifies the path to the serialized (pickled) model using the context’s model_file attribute. You can choose any
name for the attribute as long as the name is not used for anything else.
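A sketch of the context, assuming the PyCaret model was pickled to a file named pycaret_best_model.pkl (a hypothetical path):

```python
from snowflake.ml.model import custom_model

# "pycaret_best_model.pkl" is assumed to have been saved earlier, e.g. with
# pycaret.classification.save_model(best_model, "pycaret_best_model").
pycaret_mc = custom_model.ModelContext(
    model_file="pycaret_best_model.pkl",
)
```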
Step 2: Create a custom model class¶
Define a custom model class to log a model type without native support. In this example, a PyCaretModel class,
derived from CustomModel, is defined so the model can be logged in the registry.
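A sketch of the class, assuming a PyCaret classification model and the model_file context attribute from the previous step (column names in the returned DataFrame follow PyCaret's predict_model conventions):

```python
import pandas as pd
from pycaret.classification import load_model, predict_model
from snowflake.ml.model import custom_model

class PyCaretModel(custom_model.CustomModel):
    def __init__(self, context: custom_model.ModelContext) -> None:
        super().__init__(context)
        # PyCaret's load_model expects the path without the ".pkl" suffix.
        model_path = self.context["model_file"]
        self.model = load_model(model_path.removesuffix(".pkl"), verbose=False)
        self.model.memory = "/tmp/"  # warehouse nodes can always write to /tmp

    @custom_model.inference_api
    def predict(self, X: pd.DataFrame) -> pd.DataFrame:
        output = predict_model(self.model, data=X)
        return pd.DataFrame({
            "prediction_label": output["prediction_label"],
            "prediction_score": output["prediction_score"],
        })
```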
Note
As shown, set the model’s memory directory to /tmp/. Snowflake’s warehouse nodes have restricted directory access; /tmp is always writable and is a safe choice when the model needs a place to write files. This might not be necessary for other types of models.
Step 3: Test the custom model¶
Test the PyCaret model locally using code like the following.
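For example, a local smoke test with hypothetical feature columns, assuming the context from Step 1 is named pycaret_mc:

```python
import pandas as pd

# Hypothetical feature columns; use the columns your model was trained on.
test_df = pd.DataFrame(
    [[1, 35, 12000.0], [2, 52, 29000.0]],
    columns=["ID", "AGE", "INCOME"],
)

my_pycaret_model = PyCaretModel(pycaret_mc)
print(my_pycaret_model.predict(test_df))
```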
Step 4: Define a model signature¶
In this example, use the sample data to infer a model signature for input validation:
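A sketch of inferring the signature from the local test data and the model's own output (test_df and my_pycaret_model are the hypothetical names from Step 3):

```python
from snowflake.ml.model import model_signature

predict_signature = model_signature.infer_signature(
    input_data=test_df,
    output_data=my_pycaret_model.predict(test_df),
)
```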
Step 5: Log the model¶
The following code logs (registers) the model in the Snowflake Model Registry.
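A sketch of the log_model call, assuming an active Snowpark session named session and the signature from Step 4; pin exact package versions in practice:

```python
from snowflake.ml.registry import Registry

reg = Registry(session=session)
mv = reg.log_model(
    my_pycaret_model,
    model_name="pycaret_model",
    version_name="v1",
    conda_dependencies=["pycaret"],  # pin a tested version in practice
    signatures={"predict": predict_signature},
)
```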
Step 6: Verify the model in the registry¶
To verify that the model is available in the Model Registry, use the show_models function.
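For example, assuming the Registry instance from the previous step is named reg:

```python
reg.show_models()  # returns a pandas DataFrame listing registered models
```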
Step 7: Make predictions with the registered model¶
Use the run function to call the model for prediction.
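A sketch of retrieving the logged version and running inference on it, using the hypothetical names from the earlier steps:

```python
mv = reg.get_model("pycaret_model").version("v1")
predictions = mv.run(test_df, function_name="predict")
```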
Next Steps¶
After deploying a PyCaret model by way of the Snowflake Model Registry, you can view the model in Snowsight. In the navigation menu, select AI & ML » Models. If you do not see it there, make sure you are using the ACCOUNTADMIN role or the role you used to log the model.
To use the model from SQL, use SQL like the following:
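A sketch of the SQL call, assuming the model was logged as pycaret_model version v1, the table test_data has the hypothetical feature columns from Step 3, and the WITH … AS MODEL syntax is used to bind the model version for the query:

```sql
WITH mv AS MODEL pycaret_model VERSION v1
SELECT mv!PREDICT(ID, AGE, INCOME) FROM test_data;
```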