Snowpark Container Services: Working with services¶
Snowpark Container Services enables you to easily deploy, manage, and scale containerized applications. After you create an application and upload the application image to a repository in your Snowflake account, you can run your application containers as a service.
A service represents Snowflake running your containerized application on a compute pool, which is a collection of virtual machine (VM) nodes. There are two types of services:
Long-running services. A long-running service is like a web service that does not end automatically. After you create a service, Snowflake manages the running service. For example, if a service container stops, for whatever reason, Snowflake restarts that container so the service runs uninterrupted.
Job services. A job service terminates when your code exits, similar to a stored procedure. When all containers exit, the job service is done.
Snowpark Container Services provides a set of SQL commands you can use to create and manage a service. These include:
Creating a service. CREATE SERVICE, EXECUTE JOB SERVICE
Altering and dropping a service. ALTER SERVICE, DROP SERVICE
Getting information about a service. SHOW SERVICES, DESCRIBE SERVICE
Starting services¶
The minimum information required to start a service includes:
A name: Name of the service.
A service specification: This specification provides Snowflake with the information needed to run your service. The specification is a YAML file.
A compute pool: Snowflake runs your service in the specified compute pool.
Create a long-running service¶
Use CREATE SERVICE to create a long-running service.
Create a service using an inline specification. During development, you might choose to provide the specification inline, as shown:
CREATE SERVICE echo_service
  IN COMPUTE POOL tutorial_compute_pool
  FROM SPECIFICATION $$
    spec:
      containers:
      - name: echo
        image: /tutorial_db/data_schema/tutorial_repository/my_echo_service_image:tutorial
        readinessProbe:
          port: 8000
          path: /healthcheck
      endpoints:
      - name: echoendpoint
        port: 8000
        public: true
    $$;
Create a service using stage information. When you deploy the service in a production environment, it’s advisable to apply the separation of concerns design principle: upload the specification to a stage and provide the stage information in the CREATE SERVICE command, as shown:
CREATE SERVICE echo_service
  IN COMPUTE POOL tutorial_compute_pool
  FROM @tutorial_stage
  SPECIFICATION_FILE='echo_spec.yaml';
Execute a job service¶
Use EXECUTE JOB SERVICE to create a job service. By default, this command runs synchronously and returns a response after all containers of the job service exit. You can optionally specify the ASYNC parameter to run the job service asynchronously.
Execute a job service using an inline specification:
EXECUTE JOB SERVICE
  IN COMPUTE POOL tutorial_compute_pool
  NAME = example_job_service
  FROM SPECIFICATION $$
  spec:
    container:
    - name: main
      image: /tutorial_db/data_schema/tutorial_repository/my_job_image:latest
      env:
        SNOWFLAKE_WAREHOUSE: tutorial_warehouse
      args:
      - "--query=select current_time() as time,'hello'"
      - "--result_table=results"
  $$;
You can optionally execute this job asynchronously using the ASYNC property:

EXECUTE JOB SERVICE
  IN COMPUTE POOL tutorial_compute_pool
  NAME = example_job_service
  ASYNC = TRUE
  FROM SPECIFICATION $$
  ...
  $$;
Execute a job service using stage information:
EXECUTE JOB SERVICE
  IN COMPUTE POOL tutorial_compute_pool
  NAME = example_job_service
  FROM @tutorial_stage
  SPECIFICATION_FILE='my_job_spec.yaml';
Using specification templates¶
There are times you might want to create multiple services using the same specification but with different configurations. For example, suppose you define an environment variable in a service specification and want to create multiple services using the same specification but different values for the environment variable.
Specification templates enable you to define variables for field values in the specification. When you create a service, you provide values for these variables.
Using specification templates is a two-step process:
Create a specification using variables as values for various specification fields. Use the {{ variable_name }} syntax to specify these variables. For example, the following specification uses a variable named tag_name for the image tag name, so that you can specify a different image tag for each service.

spec:
  containers:
  - name: echo
    image: myorg-myacct.registry.snowflakecomputing.com/tutorial_db/data_schema/tutorial_repository/my_echo_service_image:{{ tag_name }}
    ...
  endpoints:
    ...
Create a service by providing the specification template in a CREATE SERVICE command. Use SPECIFICATION_TEMPLATE or SPECIFICATION_TEMPLATE_FILE to specify the template, and use the USING parameter to specify values for the variables. For example, the following statement uses a specification template from a Snowflake stage; the USING parameter sets the tag_name variable to the value 'latest'.

CREATE SERVICE echo_service
  IN COMPUTE POOL tutorial_compute_pool
  FROM @STAGE
  SPECIFICATION_TEMPLATE_FILE='echo.yaml'
  USING (tag_name=>'latest');
Guidelines for defining variables in a specification¶
Use the {{ variable_name }} syntax to define variables as field values in the specification. These variables can have default values. To specify the default value, use the default function in the variable declaration. For example, the following specification defines two variables (character_name and endpoint_name) with default values.

spec:
  containers:
  - name: echo
    image: <image_name>
    env:
      CHARACTER_NAME: {{ character_name | default('Bob') }}
      SERVER_PORT: 8085
  endpoints:
  - name: {{ endpoint_name | default('echo-endpoint') }}
    port: 8085
In addition, you can pass an optional boolean parameter to the default function to indicate whether you want the default value used when a blank value is passed in for the variable. Consider this specification:

spec:
  containers:
  - name: echo
    image: <image_name>
    env:
      CHARACTER_NAME: {{ character_name | default('Bob', false) }}
      SERVER_PORT: 8085
  endpoints:
  - name: {{ endpoint_name | default('echo-endpoint', true) }}
    port: 8085
In the specification:
For the character_name variable, the boolean parameter is set to false. Therefore, if you pass an empty string ('') for this variable, the value remains blank; the default value ('Bob') is not used.
For the endpoint_name variable, the boolean parameter is set to true. Therefore, if you pass a blank value for this variable, the default value ('echo-endpoint') is used.
By default, the boolean parameter for the default function is false.
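For illustration, a minimal sketch, assuming the preceding template is uploaded to a stage (the service, stage, and file names here are hypothetical). Passing empty strings for both variables leaves CHARACTER_NAME blank, while the endpoint name falls back to the default echo-endpoint:

CREATE SERVICE blank_values_service
  IN COMPUTE POOL tutorial_compute_pool
  FROM @tutorial_stage
  SPECIFICATION_TEMPLATE_FILE='echo_template.yaml'
  USING (character_name=>'', endpoint_name=>'');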
Guidelines for passing values for specification variables¶
Specify the USING parameter in the CREATE SERVICE command to provide values for variables. The general syntax for USING is:
USING( var_name=>var_value, [var_name=>var_value, ... ] );
where var_name is case-sensitive and must be a valid Snowflake identifier (see Identifier requirements), and var_value can be either an alphanumeric value or a valid JSON value.

Examples:

-- Alphanumeric string and literal values
USING(some_alphanumeric_var=>'blah123',
      some_int_var=>111,
      some_bool_var=>true,
      some_float_var=>-1.2)

-- JSON string
USING(some_json_var=>' "/path/file.txt" ')

-- JSON map
USING(env_values=>'{"SERVER_PORT": 8000, "CHARACTER_NAME": "Bob"}' );

-- JSON list
USING (ARGS='["-n", 2]' );
The USING parameter in CREATE SERVICE must provide values for all the specification variables except those for which the specification provides default values; otherwise, an error is returned.
Examples¶
These examples show creating services using specification templates. The CREATE SERVICE commands in these examples use inline specification.
Example 1: Provide simple values¶
In Tutorial 1, you create a service by providing an inline specification. The following example is a modified version in which the specification defines two variables: image_url and SERVER_PORT. Note that the SERVER_PORT variable is repeated in three places; using a variable ensures that these fields, which are expected to have the same value, actually do.
CREATE SERVICE echo_service
IN COMPUTE POOL tutorial_compute_pool
MIN_INSTANCES=1
MAX_INSTANCES=1
FROM SPECIFICATION_TEMPLATE $$
spec:
containers:
- name: echo
image: {{ image_url }}
env:
SERVER_PORT: {{SERVER_PORT}}
CHARACTER_NAME: Bob
readinessProbe:
port: {{SERVER_PORT}}
path: /healthcheck
endpoints:
- name: echoendpoint
port: {{SERVER_PORT}}
public: true
$$
USING (image_url=>' "/tutorial_db/data_schema/tutorial_repository/my_echo_service_image:latest" ', SERVER_PORT=>8000 );
In this CREATE SERVICE command, the USING parameter provides values for the two specification variables. The image_url value includes slashes and a colon, which are not alphanumeric characters; therefore, the example wraps the value in double quotes to make it a valid JSON string value. The template expands to the following specification:
spec:
containers:
- name: echo
image: /tutorial_db/data_schema/tutorial_repository/my_echo_service_image:latest
env:
SERVER_PORT: 8000
CHARACTER_NAME: Bob
readinessProbe:
port: 8000
path: /healthcheck
endpoints:
- name: echoendpoint
port: 8000
public: true
Example 2: Provide a JSON value¶
In Tutorial 1, the specification defines two environment variables (SERVER_PORT and CHARACTER_NAME) as shown:
spec:
containers:
- name: echo
image: /tutorial_db/data_schema/tutorial_repository/my_echo_service_image:latest
env:
SERVER_PORT: 8000
CHARACTER_NAME: Bob
…
You can templatize this specification by using a variable for the env field. This lets you create multiple services with different values for the environment variables. The following CREATE SERVICE command uses a variable (env_values) for the env field.
CREATE SERVICE echo_service
IN COMPUTE POOL tutorial_compute_pool
MIN_INSTANCES=1
MAX_INSTANCES=1
FROM SPECIFICATION_TEMPLATE $$
spec:
containers:
- name: echo
image: /tutorial_db/data_schema/tutorial_repository/my_echo_service_image:latest
env: {{env_values}}
readinessProbe:
port: {{SERVER_PORT}} #this and next tell SF to connect to port 8000
path: /healthcheck
endpoints:
- name: echoendpoint
port: {{SERVER_PORT}}
public: true
$$
USING (env_values=>'{"SERVER_PORT": 8000, "CHARACTER_NAME": "Bob"}' );
The USING parameter in CREATE SERVICE provides a value for the env_values variable. The value is a JSON map that supplies values for both environment variables.
Example 3: Provide a list as a variable value¶
In Tutorial 2, the specification includes the args field, which contains two arguments.
spec:
container:
- name: main
image: /tutorial_db/data_schema/tutorial_repository/my_job_image:latest
env:
SNOWFLAKE_WAREHOUSE: tutorial_warehouse
args:
- "--query=select current_time() as time,'hello'"
- "--result_table=results"
In a template version of the specification, you can provide these arguments as a JSON list as shown:
spec:
container:
- name: main
image: /tutorial_db/data_schema/tutorial_repository/my_job_image:latest
env:
SNOWFLAKE_WAREHOUSE: tutorial_warehouse
args: {{ARGS}}
$$
USING (ARGS=$$["--query=select current_time() as time,'hello'", "--result_table=results"]$$ );
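For context, a complete command might look like the following sketch; it assumes that EXECUTE JOB SERVICE accepts FROM SPECIFICATION_TEMPLATE with USING, and it reuses the tutorial compute pool and image names:

EXECUTE JOB SERVICE
  IN COMPUTE POOL tutorial_compute_pool
  NAME = example_job_service
  FROM SPECIFICATION_TEMPLATE $$
  spec:
    container:
    - name: main
      image: /tutorial_db/data_schema/tutorial_repository/my_job_image:latest
      env:
        SNOWFLAKE_WAREHOUSE: tutorial_warehouse
      args: {{ARGS}}
  $$
  USING (ARGS=$$["--query=select current_time() as time,'hello'", "--result_table=results"]$$);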
Scaling services¶
By default, Snowflake runs one instance of the service in the specified compute pool. To manage heavy workloads, you can run multiple service instances by setting the MIN_INSTANCES and MAX_INSTANCES properties, which specify the minimum number of instances of the service to start with and the maximum instances Snowflake can scale to when needed.
Example
CREATE SERVICE echo_service
IN COMPUTE POOL tutorial_compute_pool
FROM @tutorial_stage
SPECIFICATION_FILE='echo_spec.yaml'
MIN_INSTANCES=2
MAX_INSTANCES=4;
When multiple service instances are running, Snowflake automatically provides a load balancer to distribute the incoming requests.
Snowflake does not consider the service to be READY until at least two instances are available. While the service is not ready, Snowflake blocks access to it, meaning that associated service functions or ingress requests are denied until readiness is confirmed.
In some cases, you might want Snowflake to consider the service ready (and forward incoming requests) even if fewer than the specified minimum instances are available. You can achieve this by setting the MIN_READY_INSTANCES property.
Consider another scenario: During maintenance or a rolling service upgrade, Snowflake might terminate one or more service instances. This could lead to fewer available instances than the specified MIN_INSTANCES, causing the service to not be in a READY state. In such cases, you might want to set MIN_READY_INSTANCES to a value smaller than MIN_INSTANCES to ensure the service can continue accepting requests.
Example
CREATE SERVICE echo_service
IN COMPUTE POOL tutorial_compute_pool
FROM @tutorial_stage
SPECIFICATION_FILE='echo_spec.yaml'
MIN_INSTANCES=2
MAX_INSTANCES=4
MIN_READY_INSTANCES=1;
For more information, see CREATE SERVICE.
Note
You cannot run more than one instance of a job service.
Enabling autoscaling¶
To configure Snowflake to autoscale the number of service instances running, follow these steps:
Specify the CPU and memory requirements for your service instance in the service specification file. For more information, see the container.resources field.
Example
resources:
  requests:
    cpu: <cpu-units>
When running the CREATE SERVICE command, set the MIN_INSTANCES and MAX_INSTANCES parameters. You can also use ALTER SERVICE to change these values. Autoscaling occurs when the specified MAX_INSTANCES is greater than MIN_INSTANCES.
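For example, a minimal sketch of adjusting these bounds on an existing service (the service name is assumed from earlier examples):

ALTER SERVICE echo_service SET
  MIN_INSTANCES = 1
  MAX_INSTANCES = 3;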
Snowflake starts by creating the minimum number of service instances on the specified compute pool. Snowflake then scales up or scales down the number of service instances based on an 80% CPU usage threshold. Snowflake continuously monitors CPU utilization within the compute pool, aggregating the usage data from all currently running service instances.
When the aggregated CPU usage (across all service instances) surpasses 80%, Snowflake deploys an additional service instance within the compute pool. If the aggregated CPU usage falls below 80%, Snowflake scales down by removing a running service instance. Snowflake uses a five-minute stabilization window to prevent frequent scaling.
Note the following scaling behaviors:
The scaling of service instances is constrained by the MIN_INSTANCES and MAX_INSTANCES parameters configured for the service.
If scaling up is necessary and the compute pool nodes lack the necessary resource capacity to start up another service instance, compute pool autoscaling can be triggered. For more information, see Autoscaling of compute pool nodes.
If you specify the MAX_INSTANCES and MIN_INSTANCES parameters when creating a service but don’t specify the CPU and memory requirements for your service instance in the service specification file, no autoscaling occurs; Snowflake starts with the number of instances specified by the MIN_INSTANCES parameter and does not autoscale.
Modifying and dropping services¶
After creating a service:
Use the DROP SERVICE command to remove a service from a schema (Snowflake terminates all the service containers).
Use the ALTER SERVICE command to modify the service (for example, suspend or resume the service, change the number of instances running, and direct Snowflake to redeploy your service using a new service specification).
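For example, assuming the echo_service created earlier, these operations look like the following sketch:

ALTER SERVICE echo_service SUSPEND;
ALTER SERVICE echo_service RESUME;
DROP SERVICE echo_service;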
Note
You cannot alter a job service.
Service termination¶
When you suspend a service (ALTER SERVICE … SUSPEND) or drop a service (DROP SERVICE), Snowflake terminates all the service instances. Similarly, when you upgrade service code (ALTER SERVICE … <fromSpecification>), Snowflake applies rolling upgrades by terminating and redeploying one service instance at a time.
When terminating a service instance, Snowflake first sends a SIGTERM signal to each service container. The container has a 30-second window to process the signal and shut down gracefully. Otherwise, after the grace period, Snowflake terminates all the processes in the container.
Updating service code and redeploying the service¶
After a service is created, use the ALTER SERVICE … <fromSpecification> command to update service code and redeploy the service.
You first upload modified application code to your image repository and then call ALTER SERVICE, either providing the service specification inline or specifying the path to a specification file in the Snowflake stage. For example:
ALTER SERVICE echo_service
FROM SPECIFICATION $$
spec:
…
…
$$;
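Alternatively, if the updated specification is uploaded to a stage, you can reference it by file name (the stage and file names below are hypothetical):

ALTER SERVICE echo_service
  FROM @tutorial_stage
  SPECIFICATION_FILE='echo_spec_v2.yaml';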
Upon receiving the request, Snowflake redeploys the service using the new code.
When you run the CREATE SERVICE … <fromSpecification> command, Snowflake records the specific version of the provided image. Snowflake deploys that same image version in the following scenarios, even if the image in the repository has been updated:
When a suspended service is resumed (using ALTER SERVICE … RESUME).
When autoscaling adds more service instances.
When service instances are restarted during cluster maintenance.
However, if you call ALTER SERVICE … <fromSpecification>, Snowflake uses the latest version of that image in the repository.
If you are the service owner, the output of the DESCRIBE SERVICE command includes the service specification, which includes the image digest (the value of the sha256 field in the specification), as shown below:
spec:
containers:
- name: "echo"
image: "/tutorial_db/data_schema/tutorial_repository/my_echo_service_image:latest"
sha256: "@sha256:8d912284f935ecf6c4753f42016777e09e3893eed61218b2960f782ef2b367af"
env:
SERVER_PORT: "8000"
CHARACTER_NAME: "Bob"
readinessProbe:
port: 8000
path: "/healthcheck"
endpoints:
- name: "echoendpoint"
port: 8000
public: true
ALTER SERVICE can impact communications with the service (see Using a service).
If ALTER SERVICE … <fromSpecification> removes an endpoint or removes the permissions required to use an endpoint (see serviceRoles in the Specification Reference), access to the service will fail. For more information, see Using a service.
While the upgrade is in progress, new connections might get routed to the new version. If the new service version is not backward compatible, it will disrupt any active service usage. For example, ongoing queries using a service function might fail.
Note
When updating service code that is part of a native app with containers, you can use the SYSTEM$WAIT_FOR_SERVICES system function to pause the native app setup script to allow for the services to upgrade completely. For more information, see Upgrade an app.
Monitoring rolling upgrades¶
When multiple service instances are running, Snowflake performs a rolling upgrade in descending order based on the ID of the service instances. Use the following commands to monitor service upgrades:
DESCRIBE SERVICE and SHOW SERVICES:
The is_upgrading column in the output is TRUE if the service is being upgraded.
The spec_digest column in the output represents the digest of the current service specification. You can execute this command periodically; a change in the spec_digest value indicates that a service upgrade was triggered. Use the SHOW SERVICE INSTANCES IN SERVICE command to check whether all the instances have been upgraded to the latest version, as explained below.
SHOW SERVICE INSTANCES IN SERVICE:
The status column in the output provides the status of each individual service instance while the rolling upgrade is in progress. During the upgrade, you will observe each service instance transition through statuses, such as TERMINATING to PENDING and PENDING to READY.
During the service upgrade, SHOW SERVICE INSTANCES IN SERVICE might return a spec_digest value that differs from the one returned by SHOW SERVICES, which always returns the latest spec digest. This simply indicates that the upgrade is in progress and some service instances are still running the old version of the service.
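For example, a monitoring sketch (the service name is assumed from earlier examples):

SHOW SERVICES LIKE 'echo_service';
SHOW SERVICE INSTANCES IN SERVICE echo_service;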
Get information about services¶
You can use these commands:
Use the DESCRIBE SERVICE command to retrieve the properties and status of a service.
Use the SHOW SERVICES command to list current services (including job services) for which you have permissions. For each service, the output provides the properties and status. By default, the output lists services in the current database and schema. You can alternatively specify any of the following scopes:
List the services in the account, in a specific database, or in a specific schema: For example, use the IN ACCOUNT filter to list services in your Snowflake account, regardless of which database or schema the services belong to. This is useful if you have Snowflake services created in multiple databases and schemas in your account. Like all other commands, SHOW SERVICES IN ACCOUNT is gated by privileges, returning only the services for which the role you are using has viewing permissions.
You can also specify IN DATABASE or IN SCHEMA to list the services in the current (or specified) database or schema.
List the services running in a compute pool: For example, use the IN COMPUTE POOL filter to list the services running in a specific compute pool.
List the services that start with a prefix or that match a pattern: You can apply the LIKE and STARTS WITH filters to filter the services by name.
List job services or exclude job services from the list: You can use SHOW JOB SERVICES or SHOW SERVICES EXCLUDE JOBS to list only job services or to exclude job services.
You can also combine these options to customize the SHOW SERVICES output.
Use the SHOW SERVICE INSTANCES IN SERVICE command to retrieve properties of the service instances.
Use the SHOW SERVICE CONTAINERS IN SERVICE command to retrieve the properties and status of the containers for each service instance.
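The following sketch combines some of these options; the database, schema, compute pool, and service names are illustrative:

SHOW SERVICES IN ACCOUNT;
SHOW SERVICES IN COMPUTE POOL tutorial_compute_pool;
SHOW JOB SERVICES LIKE 'echo%' IN SCHEMA my_db.my_schema;
DESCRIBE SERVICE echo_service;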
Monitoring services¶
Snowpark Container Services offers tools to monitor compute pools in your account and the services running on them. For more information, see Snowpark Container Services: Monitoring Services.
Managing access to service endpoints¶
The service owner role (the role that you use to create the service) has full access to the service and the endpoints the service exposes. Other roles need the USAGE privilege on the endpoints to communicate with the service. For example:
The owner role of the client needs the USAGE privilege on the endpoint. Here, client refers to a service function or a service making requests to the endpoints of another service.
To create a service function referencing an endpoint, the user needs access to the endpoint. That is, the service function’s owner role needs USAGE privilege on the endpoint referenced in the CREATE FUNCTION.
In service-to-service communications, the owner role of the client service (that is calling the other service’s endpoint) needs the USAGE privilege on the endpoint.
A user making ingress requests from outside Snowflake to a public endpoint needs USAGE privilege on the endpoint.
To allow a role to access a service endpoint, you grant the following to that role:
USAGE privilege on the database and schema where the service is created.
Service role that has permission to access the endpoint (see GRANT SERVICE ROLE). A service role is a mechanism to grant privileges on service endpoints to other roles. You have these options:
Use the default service role: Snowflake defines a default service role (ALL_ENDPOINTS_USAGE) that grants the USAGE privilege on all endpoints the service exposes, and grants this default service role to the service’s owner role. Thus, the owner role can access all the endpoints the service exposes. You can grant this default service role to other roles.

Example: Suppose you create a service with a public endpoint (echoendpoint) as shown:

use database my_db;
use schema my_schema;

create service my_service in compute pool tutorial_pool
from specification $$
spec:
  containers:
  - name: echo
    image: /tutorial_db/data_schema/tutorial_repository/my_echo_service_image:latest
  endpoints:
  - name: echoendpoint
    port: 8000
    public: true
$$;
To grant a role (custom_role) access to the endpoint, run the following commands:

grant usage on database my_db to role custom_role;
grant usage on schema my_schema to role custom_role;
grant service role my_service!all_endpoints_usage to role custom_role;
Create a service role: Instead of granting privileges on all endpoints using the default service role, you can define one or more service roles in the service specification. Within the definition, indicate the specific endpoints for which the role is granted the USAGE privilege. You can grant (or revoke) the service role to other roles using the GRANT SERVICE ROLE and REVOKE SERVICE ROLE commands. You can also use the SHOW ROLES IN SERVICE and SHOW GRANTS commands to display information about the grants.
Snowflake creates the service roles when you create a service and deletes them when you delete the service.
Creating custom service roles enables you to grant different access permissions for different scenarios. For example, you might grant one service role permission to an endpoint used with a service function, and create another service role with permission to a public endpoint used with a web UI.
Example: Suppose you create a service with two public endpoints (ep1 and ep2) and a service role (ep1_role) with access to endpoint ep1, as shown:

use database my_db;
use schema my_schema;

create service my_service in compute pool tutorial_pool
from specification $$
spec:
  containers:
  - name: echo
    image: /tutorial_db/data_schema/tutorial_repository/my_echo_service_image:latest
  endpoints:
  - name: ep1
    port: 8000
    public: true
  - name: ep2
    port: 8082
    public: true
  serviceRoles:
  - name: ep1_role
    endpoints:
    - ep1
$$;
To grant the role (custom_role) access only to the endpoint ep1, run the following commands:

grant usage on database my_db to role custom_role;
grant usage on schema my_schema to role custom_role;
grant service role my_service!ep1_role to role custom_role;
Note the following:
If you use the same role to create multiple services, those services can communicate with each other seamlessly without any extra configuration, because the owner role has access to all endpoints.
If a service has multiple containers, these containers can communicate with each other via localhost, and these communications are local within each service instance and not subject to role-based access control.
The following sections provide details. You can also try a tutorial (Configure and test service endpoint privileges) that provides step-by-step instructions to explore this feature.
Granting the USAGE privilege on all endpoints using the default service role¶
When you create a service (including a job service), Snowflake also creates a default service role named ALL_ENDPOINTS_USAGE. This role has the USAGE privilege on all endpoints the service exposes. You can grant this default service role to other roles using the GRANT SERVICE ROLE command:
GRANT SERVICE ROLE my_echo_service_image!ALL_ENDPOINTS_USAGE TO ROLE some_other_role;
Users who are using some_other_role have the USAGE privilege on all the service endpoints.
When you drop a service, Snowflake drops all the service roles (default service role and service roles defined in the service specification) associated with the service and voids all the service role grants.
Granting the USAGE privilege to specific endpoints using service roles defined in the specification¶
Use service roles to manage fine-grained access to service endpoints. You define the service roles, along with the list of endpoints they are granted USAGE privilege to, in the service specification.
Granting privilege on specific endpoints of a service is a two-step process:
Define a service role: Use a service specification to define a service role by providing a role name and a list of one or more endpoints for which you want to grant the USAGE privilege. For example, in the following specification fragment, the top-level serviceRoles field defines two service roles, each with the USAGE privilege on specific endpoints.

spec:
  ...
  serviceRoles:                  # Optional list of service roles
  - name: <svc_role_name1>
    endpoints:                   # endpoints that role can access
    - <endpoint_name1>
    - <endpoint_name2>
  - name: <svc_role_name2>
    endpoints:
    - <endpoint_name3>
    - <endpoint_name4>
Grant the service role to other roles. Using the GRANT SERVICE ROLE command, you grant the service role to other roles (account roles, application roles, or database roles). For example:
GRANT SERVICE ROLE <service-name>!<svc_role_name1> TO ROLE <another-role>
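To review or revoke these grants later, a sketch using the same placeholders:

SHOW ROLES IN SERVICE <service-name>;
REVOKE SERVICE ROLE <service-name>!<svc_role_name1> FROM ROLE <another-role>;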
Using a service¶
After you create a service, users in the account that created the service can use it through any of the following three supported methods. The user needs access to roles that have the necessary privileges.
Use the service from a SQL query (Service function): You create a service function, a user-defined function (UDF) associated with a service, and use it in a SQL query to communicate with the service. For an example, see Tutorial 1.
Use the service from outside Snowflake (Ingress): You can declare one or more service endpoints as public to allow network ingress access to the service. For an example, see Tutorial 1.
Use the service from another service (Service-to-service communications): Services can communicate with each other using the Snowflake-assigned service DNS name. For an example, see Tutorial 3.
Note
A job service runs like a job and terminates when done. Using a service function or ingress to communicate with a job service is not supported.
You cannot associate a service function with any endpoint of a job service.
You cannot create a job service with a specification that defines a public endpoint.
Service-to-service communications with job services are supported. That is, services and job services can communicate with each other.
The following sections provide details.
Service functions: Using a service from an SQL query¶
A service function is a user-defined function (UDF) you create using CREATE FUNCTION (Snowpark Container Services). However, instead of writing the UDF code directly, you associate the UDF with your service endpoint. Note that you can associate a service function only with a service endpoint that supports the HTTP or HTTPS protocol.
For example, in Tutorial 1, you create a service named echo_service that exposes one endpoint (echoendpoint), as defined in the service specification:
spec:
…
endpoints:
- name: echoendpoint
port: 8080
echoendpoint is a user-friendly endpoint name that represents the corresponding port (8080). To communicate with this service endpoint, you create a service function by providing the SERVICE and ENDPOINT parameters, as shown:
CREATE FUNCTION my_echo_udf (text varchar)
RETURNS varchar
SERVICE=echo_service
ENDPOINT=echoendpoint
AS '/echo';
The AS parameter provides the HTTP path to the service code. You get this path value from the service code. For example, the following code lines are from service.py in Tutorial 1.
@app.post("/echo")
def echo():
...
When you invoke the service function, Snowflake directs the request to the associated service endpoint and path.
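For example, a minimal invocation sketch using the function created above:

SELECT my_echo_udf('hello');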
Note
A service function is used to communicate with a service, not with a job service. In other words, you can associate only a service (not a job service) with a service function.
Specifying batch size when sending data to a service to increase concurrency¶
When you run multiple instances of your service, you can create a service function with the optional MAX_BATCH_ROWS parameter to limit the batch size, which is the maximum number of rows Snowflake sends in a batch to the service. For example, suppose MAX_BATCH_ROWS is 10 and you call the my_echo_udf service function with 100 input rows. Snowflake partitions the input rows into batches of at most 10 rows each and sends a series of requests to the service, with a batch of rows in each request body. Configuring the batch size can help when processing takes a nontrivial amount of time, and distributing rows across all available servers can also help.
You can use ALTER FUNCTION to alter a service function. The following ALTER FUNCTION command changes the service endpoint to which it associates and the batch size:
ALTER FUNCTION my_echo_udf(VARCHAR)
SET SERVICE=other_service
ENDPOINT=otherendpoint
MAX_BATCH_ROWS=100;
Data exchange format¶
For data exchange between a service function and an application container,
Snowflake follows the same format that external functions
use (see Data Formats).
For example, suppose you have data rows stored in a table (input_table):
"Alex", "2014-01-01 16:00:00"
"Steve", "2015-01-01 16:00:00"
…
To send this data to your service, you invoke the service function by passing these rows as parameters:
SELECT service_func(col1, col2) FROM input_table;
Snowflake sends a series of requests to the container, with batches of data rows in the request body in this format:
{
"data":[
[
0,
"Alex",
"2014-01-01 16:00:00"
],
[
1,
"Steve",
"2015-01-01 16:00:00"
],
…
[
<row_index>,
"<column1>",
"<column2>"
],
]
}
The container then returns the output in the following format:
{
"data":[
[0, "a"],
[1, "b"],
…
[ row_index, output_column1]
]
}
The example output shown assumes that the result is a one-column table with rows (“a”, “b” …).
When multiple service instances are running, you can create a service function using the MAX_BATCH_ROWS parameter to distribute the input rows for processing across all available servers. For more information, see Specifying batch size when sending data to a service to increase concurrency.
Privileges required to create and manage service functions¶
To create and manage service functions, a role needs the following privileges:
To create a service function: The current role must have the USAGE privilege on the service being referenced.
To alter a service function: You can alter a service function and associate it with another service. The current role must have the USAGE privilege on the new service.
To use a service function: The current role must have the USAGE privilege on the service function, and the service function owner role must have the USAGE privilege on the associated service.
The following example script shows how you might grant permission to use a service function:
USE ROLE service_owner;
GRANT USAGE ON service service_db.my_schema.my_service TO ROLE func_owner;
USE ROLE func_owner;
CREATE OR REPLACE FUNCTION test_udf(v VARCHAR)
RETURNS VARCHAR
SERVICE=service_db.my_schema.my_service
ENDPOINT=endpointname1
AS '/run';
SELECT test_udf(col1) FROM some_table;
ALTER FUNCTION test_udf(VARCHAR) SET
SERVICE = service_db.other_schema.other_service
ENDPOINT=anotherendpoint;
GRANT USAGE ON FUNCTION test_udf(varchar) TO ROLE func_user;
USE ROLE func_user;
SELECT test_udf('abcd');
Ingress: Using a service from outside Snowflake¶
A service can expose one or more endpoints as public to allow users to use the service from the public web. In this case, Snowflake manages access control. Note that ingress is allowed only with HTTP or HTTPS endpoints.
Mark the endpoint as public in your service specification file:
spec:
  ...
  endpoints:
  - name: <endpoint name>
    port: <port number>
    public: true
Public endpoint access from outside Snowflake and authentication¶
Not everyone can access the public endpoints exposed by a service. Only users in the same Snowflake account as the service who have the USAGE privilege on the public endpoint can access it. You can use a service role to grant this privilege.
These users can access the public endpoint using a browser or programmatically. Snowflake uses OAuth to authenticate these requests:
Accessing a public endpoint by using a browser: When the user uses a browser to access a public endpoint, Snowflake provides an automatic redirect for user authentication. The user is required to sign in and, behind the scenes, the user sign-in generates an OAuth token from Snowflake. The OAuth token is then used to send a request to the service endpoint.
Accessing a public endpoint programmatically: Your application can use key-pair authentication to authenticate requests to the public endpoint. In your code, you generate a JSON Web Token (JWT) from the key pair, exchange the JWT with Snowflake for an OAuth token, and then use the OAuth token to authenticate requests to the public endpoint of a service.
Tutorial 1 provides step-by-step instructions for you to test public endpoint access.
Key-pair authentication, as shown in Tutorial 1, is the recommended way to authenticate requests when accessing public endpoints. The following code can be used as an alternative to key-pair authentication; however, there is no guarantee that it will work with future versions of the Snowflake Connector for Python. This Python code uses the connector to first generate a session token that represents your identity, and then uses the session token to log in to the public endpoint.
import snowflake.connector
import requests

ctx = snowflake.connector.connect(
    user="<username>",            # username
    password="<password>",        # insert password here
    account="<orgname>-<acct-name>",
    session_parameters={
        'PYTHON_CONNECTOR_QUERY_RESULT_FORMAT': 'json'
    })

# Obtain a session token.
token_data = ctx._rest._token_request('ISSUE')
token_extract = token_data['data']['sessionToken']

# Create a request to the ingress endpoint with authz.
token = f'\"{token_extract}\"'
headers = {'Authorization': f'Snowflake Token={token}'}

# Set this to the ingress endpoint URL for your service
url = 'http://<ingress_url>'

# Validate the connection.
response = requests.get(f'{url}', headers=headers)
print(response.text)

# Insert your code to interact with the application here

In the code:
If you don’t know your account information (<orgname>-<acctname>), see the Tutorial common setup.
You can get the ingress_url of the public endpoint exposed by the service by using SHOW ENDPOINTS.
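For example, a sketch that lists the endpoints for a service from the tutorials:

SHOW ENDPOINTS IN SERVICE echo_service;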
User-specific headers in ingress requests¶
When a request for an ingress endpoint arrives, Snowflake automatically passes the following header along with the HTTP request to the container.
Sf-Context-Current-User: <user_name>
Your container code can optionally read the header to learn who the caller is and apply context-specific customization for different users. In addition, Snowflake can optionally include the Sf-Context-Current-User-Email header. To include this header, contact Snowflake Support.
Service-to-service communications¶
Services can communicate with each other using the DNS name that Snowflake automatically assigns to each service. For an example, see Tutorial 3. Note that if a service endpoint is created only to allow service-to-service communications, the TCP protocol should be used.
The DNS name format is:
<service-name>.<schema-name>.<db-name>.snowflakecomputing.internal
Use SHOW SERVICES (or DESCRIBE SERVICE) to get the DNS name of a service.
The preceding DNS name is a full name. Services created in the same schema can communicate using just the <service-name>. Services that are in the same database but in different schemas must provide the schema name, such as <service-name>.<schema-name>.
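To look up a service’s DNS name, a sketch; the dns_name column is assumed to appear in the DESCRIBE SERVICE output:

DESCRIBE SERVICE echo_service;  -- check the dns_name column in the output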
Snowflake allows network communications between services created by the same role and blocks network communications between services created by different roles. If you want to prevent your services from communicating with each other (for reasons such as security), use different Snowflake roles to create those services.
DNS names have the following limitations:
Your database, schema, or service names must be valid DNS labels. (See also https://www.ietf.org/rfc/rfc1035.html#section-2.3.1). Otherwise, creating a service will fail.
Snowflake replaces an underscore (_) in the names (database, schema, and service name) with a dash (-) in the DNS name.
After creating a service, do not change the database or the schema name, because Snowflake will not update the DNS name of the service.
A DNS name is only for internal communications within Snowflake between services running in the same account. It is not accessible from the internet.
Privileges¶
Privilege | Usage
---|---
USAGE | To communicate with a service, you need the USAGE privilege on the service endpoint. Required for creating a service function, using public endpoints, and connecting from another service.
MONITOR | To monitor a service and get its runtime status.
OPERATE | To suspend or resume a service.
OWNERSHIP | Full control over the service. Only a single role can hold this privilege on a specific object at a time.
ALL [ PRIVILEGES ] | Grants all privileges, except OWNERSHIP, on the service.
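For example, a sketch of granting service-level privileges; the service and role names are hypothetical:

GRANT MONITOR ON SERVICE echo_service TO ROLE monitor_role;
GRANT OPERATE ON SERVICE echo_service TO ROLE operator_role;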
Guidelines and limitations¶
For more information, see Guidelines and limitations.