# Using Third-Party Packages¶

Stages can be used to import third-party packages. You can also specify Anaconda packages to install when you create Python UDFs.


## Importing Packages Through a Snowflake Stage¶

Snowflake stages can be used to import packages. You can bring in any Python code that follows guidelines defined in Limitations on Python UDFs. For more information, see Creating a Python UDF With Code Uploaded from a Stage.

## Using Third-Party Packages from Anaconda¶

For convenience, a number of popular open source third-party Python packages that are built and provided by Anaconda are made available to use out of the box inside Snowflake. There is no additional cost for the use of the Anaconda packages apart from Snowflake’s standard consumption-based pricing.

To view the list of third-party packages from Anaconda, see the Anaconda Snowflake channel.

To request the addition of new packages, go to the Snowflake Ideas page in the Snowflake Community. Select the Python Packages & Libraries category and check if someone has already submitted a request. If so, vote on it. Otherwise, click New Idea and submit your suggestion.

### Getting Started¶

Before you start using the packages provided by Anaconda inside Snowflake, you must acknowledge the Snowflake Third Party Terms by following the steps below.

Note

• The organization administrator (i.e. the user with the ORGADMIN system role) is required to complete the steps in this section. For more information, see Enabling the ORGADMIN Role for an Account. For general information about organizations, see Managing Your Snowflake Organization.

• To accept the terms, a user must have the following user properties set:

• First name

• Last name

• Email address

If these properties are not set, the Snowflake UI displays a message telling you to update them before proceeding. A user administrator (i.e., a user with the USERADMIN role or higher), or a user whose role has the OWNERSHIP privilege on your Snowflake user object, can add these details to your user profile. For more information, see Modifying Other User Properties.

1. Log into Snowsight, the Snowflake web interface.

2. Click the dropdown menu next to your login name, then click Switch Role » ORGADMIN to change to the organization administrator role.

3. Click Admin » Billing » Terms & Billing.

4. Scroll to the Anaconda section and click the Enable button. The Anaconda Packages (Preview Feature) dialog opens.

5. Click the link to review the Snowflake Third Party Terms.

6. If you agree to the terms, click the Acknowledge & Continue button.

### Displaying and Using Packages¶

#### Displaying Available Packages¶

You can display all packages available and their version information by querying the PACKAGES view in the Information Schema.

select * from information_schema.packages where language = 'python';


To display version information about a specific package, for example numpy, use this command:

select * from information_schema.packages where (package_name = 'numpy' and language = 'python');


Note

Some packages in the Anaconda Snowflake channel are not intended for use inside Snowflake UDFs because UDFs are executed within a restricted engine. For more information, see Following Good Security Practices.

When queries that call Python UDFs are executed inside a Snowflake warehouse, Anaconda packages are installed seamlessly and cached on the virtual warehouse on your behalf.

#### Displaying Imported Packages¶

You can display a list of the packages and modules a UDF or UDTF is using by executing the DESCRIBE FUNCTION command. For a UDF whose handler is implemented in Python, the command returns the values of several properties, including the list of imported modules and packages, the installed packages, the function signature, and its return type.

When specifying the identifier for the UDF, be sure to include function parameter types, if any.

desc function stock_sale_average(varchar, number, number);


### Performance On Cold Warehouses¶

For more efficient resource management, newly provisioned virtual warehouses do not preinstall Anaconda packages. Instead, Anaconda packages are installed on-demand the first time a UDF is used. The packages are cached for future UDF execution on the same warehouse. The cache is dropped when the warehouse is suspended. This may result in slower performance the first time a UDF is used or after the warehouse is resumed. The additional latency could be approximately 30 seconds.

## Local Development and Testing¶

To help you create a conda environment on your local machine for development and testing, Anaconda has created a Snowflake channel which mirrors a subset of the packages and versions that are supported in the Snowflake Python UDF environment. You may use the Snowflake conda channel for local testing and development at no cost under the Supplemental Embedded Software Terms to Anaconda’s Terms of Service.

For example, to create a new conda environment locally using the Snowflake channel, type something like this on the command line:

conda create --name py38_env -c https://repo.anaconda.com/pkgs/snowflake python=3.8 numpy pandas


Note that because of platform differences, your local conda environment may not be exactly the same as the server environment.

## Best Practices¶

Within the create function statement, the package specification (for example, packages = ('numpy','pandas')) should only specify the top-level packages that the UDF is using directly. Anaconda performs dependency management of packages and will install the required dependencies automatically. Because of this, you should not specify dependency packages.

Anaconda will install the most up-to-date version of the package and its dependencies if you don’t specify a package version. Generally, it isn’t necessary to specify a particular package version. Note that version resolution is performed once, when the UDF is created using the create function command. After that, the resulting version resolution is frozen and the same set of packages will be used when this particular UDF executes.

For an example of how to use the package specification within the create function statement, see Using an Imported Package in an In-line Python UDF.

## Known Issues with Third-Party Packages¶

### Performance With Single Row Prediction¶

Some data science frameworks, such as Scikit-learn and TensorFlow, might be slow when doing single-row ML prediction.

To improve performance, do batch prediction instead of single-row prediction. To do this, convert the input dataset into Snowflake arrays and feed the arrays into the prediction code. The batching can be done with tabular Python UDTFs, the ARRAY_AGG function, Tabular Java UDFs (UDTFs), or Tabular JavaScript UDFs (UDTFs).
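As a sketch of the tabular Python UDTF approach (the class and column names here are illustrative, and the trivial row-sum stands in for a real model's predict call), the handler can buffer incoming rows and run prediction once per partition instead of once per row:

```python
class BatchPredict:
    """Illustrative tabular (UDTF) handler: buffer rows, predict in one batch."""

    def __init__(self):
        self.rows = []

    def process(self, features):
        # Buffer each input row; emit nothing until the partition ends.
        self.rows.append(features)

    def end_partition(self):
        # Placeholder for model.predict(self.rows): one batched call over
        # all buffered rows, rather than one model call per row.
        predictions = [sum(row) for row in self.rows]
        for prediction in predictions:
            yield (prediction,)
```

Because prediction happens in end_partition, the per-call overhead of the framework is paid once per partition rather than once per row.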

You can also use the Python UDF batch API, with which you can define Python functions that receive input rows in batches, on which machine learning or data science libraries are optimized to operate. For more information, see Python UDF Batch API.
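A minimal sketch of a batch-style handler, assuming pandas is available: in Snowflake the function would additionally carry the vectorized decorator from the _snowflake module so that it receives input rows as a DataFrame, but the body can be exercised locally with a plain DataFrame (the row-sum is a stand-in for a real model's predict call):

```python
import pandas as pd

# In a Snowflake Python UDF using the batch API, this handler would carry:
#   from _snowflake import vectorized
#   @vectorized(input=pd.DataFrame)
# Each call then receives a batch of input rows as a DataFrame and must
# return a Series (or array) with one result per input row.
def predict_batch(df: pd.DataFrame) -> pd.Series:
    # Stand-in for model.predict(df): one vectorized call over the batch.
    return df.sum(axis=1)
```

For example, predict_batch(pd.DataFrame({"a": [1, 2], "b": [3, 4]})) returns a Series containing 4 and 6, one result per input row.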

### Downloading Data On Demand From Data Science Libraries¶

Some data science libraries, such as NLTK, Keras, and spaCy, provide functionality to download additional corpora, data, or models on demand.

However, on-demand downloading does not work with Python UDFs due to Snowflake security constraints, which disable some capabilities, such as network access and writing to files.

To work around this issue, download the data to your local environment and then provide it to the UDF via a Snowflake stage.
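As a sketch of the UDF side of this workaround: when a data file is attached to the UDF through the IMPORTS clause, its extraction directory is exposed to the handler via sys._xoptions under the snowflake_import_directory key, so the handler can build an absolute path to the staged file. The file name below is illustrative, and the fallback to the current directory exists only so the snippet also runs outside Snowflake:

```python
import os
import sys

def staged_file_path(file_name):
    # Inside a Snowflake Python UDF, files listed in the IMPORTS clause are
    # extracted to a directory whose path is published in sys._xoptions.
    # The "." fallback only matters when running outside Snowflake.
    import_dir = sys._xoptions.get("snowflake_import_directory", ".")
    return os.path.join(import_dir, file_name)
```

The handler can then load the data from that path (for example, unzipping a staged corpus) instead of downloading it over the network.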

### XGBoost¶

When using XGBoost in UDF or UDTF for parallel prediction or training, the concurrency for each XGBoost instance should be set to 1. This ensures that XGBoost is configured for optimal performance when executing in the Snowflake environment.

Examples:

import xgboost as xgb

# Limit threads on a Booster loaded from a saved model.
model = xgb.Booster()
model.set_param('nthread', 1)
model.load_model(...)

import xgboost as xgb

# Limit threads on a scikit-learn-style estimator.
model = xgb.XGBRegressor(n_jobs=1)


### TensorFlow/Keras¶

When using TensorFlow/Keras for prediction, use Model.predict_on_batch and not Model.predict.

Example:

import keras
import numpy as np

model = keras.models.load_model(...)
model.predict_on_batch(np.array([input]))


### Joblib¶

When using joblib.Parallel in a Python UDF or stored procedure, the backend parameter must be set to threading. Other backends, such as multiprocessing or loky, are not supported. Certain scikit-learn algorithms that rely on the multiprocessing or loky backend for parallelism currently may not use all available resources and might execute slower.
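A minimal sketch, assuming joblib is available (the square function is just a placeholder workload): forcing the threading backend keeps the parallelism compatible with the UDF sandbox, which does not allow spawning processes:

```python
from joblib import Parallel, delayed

def square(x):
    # Placeholder workload; a real UDF might score one chunk of data here.
    return x * x

# Only the threading backend is supported inside Snowflake Python UDFs;
# the multiprocessing and loky backends spawn worker processes.
results = Parallel(backend="threading", n_jobs=4)(
    delayed(square)(i) for i in range(5)
)
# results == [0, 1, 4, 9, 16]
```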
