Snowpark Container Services

About Snowpark Container Services

Snowpark Container Services is a fully managed container offering designed to facilitate the deployment, management, and scaling of containerized applications within the Snowflake ecosystem. The service enables users to run containerized workloads directly within Snowflake, so data doesn’t need to be moved out of the Snowflake environment for processing. Unlike general-purpose container orchestration platforms such as Kubernetes, Snowpark Container Services offers an OCI runtime execution environment optimized specifically for Snowflake. This integration allows for the seamless execution of OCI images while leveraging Snowflake’s robust data platform.

As a fully managed service, Snowpark Container Services streamlines operational tasks. It handles the intricacies of container management, including security and configuration, in line with best practices. This ensures that users can focus on developing and deploying their applications without the overhead of managing the underlying infrastructure.

Snowpark Container Services is fully integrated with Snowflake. For example, your application can easily perform these tasks:

  • Connect to Snowflake and run SQL in a Snowflake virtual warehouse.

  • Access data files in a Snowflake stage.

  • Process data sent from SQL queries.

Snowpark Container Services is also integrated with third-party tools. It lets you use third-party clients (such as Docker) to easily upload your application images to Snowflake. Seamless integration makes it easier for teams to focus on building data applications.

You can run and scale your application container workloads across Snowflake regions and cloud platforms without the complexity of managing a control plane or worker nodes, and you have quick and easy access to your Snowflake data.

Snowpark Container Services unlocks a wide array of new functionality, including these features:

  • Create long-running services.

  • Use GPUs to boost the speed and processing capabilities of a system.

  • Write your application code in any language (for example, C++).

  • Use any libraries with your application.

All of this comes with Snowflake platform benefits, most notably ease-of-use, security, and governance features. And you now have a scalable, flexible compute layer next to the powerful Snowflake data layer without needing to move data off the platform.

How does it work?

To run containerized applications in Snowpark Container Services, you work with these objects in addition to basic Snowflake objects such as databases and warehouses: the image repository, compute pool, service, and job.

Snowflake offers an image registry, an OCIv2-compliant service for storing your images. The registry enables OCI clients (such as the Docker CLI and SnowSQL) to access it in your Snowflake account. Using these clients, you can upload your application images to a repository (a storage unit) in your Snowflake account. For more information, see Working with an image registry and repository.
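As a minimal sketch, creating a repository and locating its URL might look like the following; the repository name is illustrative, and the exact output columns may vary by Snowflake version:

```sql
-- Create a repository in the current database and schema to hold OCI images.
CREATE IMAGE REPOSITORY IF NOT EXISTS tutorial_repository;

-- List repositories; the output includes the repository URL that OCI clients
-- (for example, the Docker CLI) use as the push/pull target for images.
SHOW IMAGE REPOSITORIES;
```

You would then tag and push your local image to the repository URL shown, using your usual OCI client workflow.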

After you upload your application image to a repository, you can run your application containers as a service or a job.

  • A service is long-running and, as with a web service, you explicitly stop it when it is no longer needed. If a service container exits (for whatever reason), Snowflake restarts that container. To create a service, such as a full stack web application, use the CREATE SERVICE command.

  • A job has a finite lifespan, similar to a stored procedure. When all containers exit, the job is done. Snowflake does not restart any job containers. To create a job, such as training a machine learning model with GPUs, use the EXECUTE SERVICE command.


    Note: The Snowpark Container Services job feature is currently in private preview and is subject to Preview Terms. Contact your Snowflake representative for more information.

For more information, see Working with services and Working with jobs.
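As a rough sketch of the two commands, the following creates a service and runs a job from an inline specification. All names, the image path, and the specification fields shown are illustrative; consult the CREATE SERVICE and EXECUTE SERVICE reference for the full specification schema:

```sql
-- Create a long-running service from an inline YAML specification.
-- Snowflake restarts the container if it exits.
CREATE SERVICE echo_service
  IN COMPUTE POOL tutorial_compute_pool
  FROM SPECIFICATION $$
    spec:
      containers:
      - name: echo
        image: /tutorial_db/data_schema/tutorial_repository/echo_image:latest
    $$;

-- Run a job: the containers run to completion and are not restarted.
EXECUTE SERVICE
  IN COMPUTE POOL tutorial_compute_pool
  FROM SPECIFICATION $$
    spec:
      containers:
      - name: train
        image: /tutorial_db/data_schema/tutorial_repository/train_image:latest
    $$;
```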

Your services and jobs run in a compute pool, which is a collection of one or more virtual machine (VM) nodes. You first create a compute pool using the CREATE COMPUTE POOL command, and then specify the compute pool when you create a service or a job. The required information to create a compute pool includes the machine type, the minimum number of nodes to launch the compute pool with, and the maximum number of nodes the compute pool can scale to. Some of the supported machine types provide GPU. For more information, see Working with compute pools.
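A compute pool definition captures exactly the three pieces of required information described above. In this sketch the pool name is illustrative, and the instance family shown is an assumption — check the CREATE COMPUTE POOL documentation for the machine types available in your region:

```sql
-- Create a pool of VM nodes for running services and jobs.
CREATE COMPUTE POOL tutorial_compute_pool
  MIN_NODES = 1                  -- nodes launched initially
  MAX_NODES = 3                  -- upper bound for scaling
  INSTANCE_FAMILY = CPU_X64_XS;  -- machine type (illustrative; some families provide GPUs)
```

You then reference the pool by name (for example, in the IN COMPUTE POOL clause) when creating a service or job.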

A job runs independently to completion. A service, however, runs continuously, and you can communicate with it.

You can use service functions to communicate with a service from a SQL query. You can configure public endpoints to allow access to the service from outside Snowflake, with Snowflake-managed access control. Snowpark Container Services also supports service-to-service communications. For more information, see Using a service.
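For example, a service function binds a SQL function to a service endpoint, so a query can send rows to the container and receive results back. The sketch below assumes a running service named echo_service that exposes an endpoint named echoendpoint; both names are illustrative:

```sql
-- Bind a SQL function to a service endpoint (names are illustrative).
CREATE FUNCTION my_echo_udf (text VARCHAR)
  RETURNS VARCHAR
  SERVICE = echo_service
  ENDPOINT = echoendpoint;

-- Calling the function routes the input through the service container.
SELECT my_echo_udf('hello');
```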

Available regions

Snowpark Container Services is currently available in all AWS commercial regions.

Private Preview for Azure and Google Cloud will be announced at a later date.

What’s next?

If you’re new to Snowpark Container Services, we suggest that you first explore the tutorials and then continue with other topics to learn more and create your own containerized applications. The following topics provide more information: