A task can execute any one of the following types of SQL code:

• A single SQL statement

• A call to a stored procedure

Tasks can be combined with table streams for continuous ELT workflows to process recently changed table rows. Streams ensure exactly-once semantics for new or changed data in a table.

Tasks can also be used independently to generate periodic reports by inserting or merging rows into a report table, or to perform other periodic work.


## Compute Resources¶

Tasks require compute resources to execute SQL code. Either of the following compute models can be chosen for individual tasks:

• Snowflake-managed (i.e. serverless compute model)

• User-managed (i.e. virtual warehouse)

Note

Supported as a preview feature.

The serverless compute model for tasks enables you to rely on compute resources managed by Snowflake instead of user-managed virtual warehouses. The compute resources are automatically resized and scaled up or down by Snowflake as required for each workload. Snowflake determines the ideal size of the compute resources for a given run based on a dynamic analysis of statistics for the most recent previous runs of the same task.

Billing for runs of serverless tasks differs somewhat from the standard credit consumption model for tasks that rely on warehouses for compute resources. For information, see Billing for Task Runs.

Note

Serverless tasks cannot invoke the following object types:

• UDFs (user-defined functions) that contain Java code.

• Stored procedures written in Scala (using Snowpark), or which call UDFs that contain Java code.

If you prefer, you can alternatively manage the compute resources for individual tasks by specifying an existing virtual warehouse when creating the task. This option requires that you choose a warehouse that is sized appropriately for the SQL actions that are executed by the task.

#### Choosing a Warehouse Size¶

If you choose to use existing warehouses to supply the compute resources for individual tasks, we recommend that you follow the best practices described in Warehouse Considerations. We suggest that you analyze the average run time for a given task or tree of tasks using a specific warehouse based on warehouse size and clustering, as well as whether or not the warehouse is shared by multiple processes or is dedicated to running this single task (or tree of tasks).

Query the TASK_HISTORY Account Usage view (in the SNOWFLAKE shared database). The average difference between the scheduled and completed times for a task is the expected average run time for the task, including any period in which the task was queued. A task is queued when other processes are currently using all of the compute resources in the warehouse.
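As a sketch, the average run time can be computed directly from that view. The view and column names below follow the SNOWFLAKE.ACCOUNT_USAGE schema; the seven-day window and the filter on succeeded runs are illustrative choices:

```sql
-- Average time from scheduled start to completion (including any queuing)
-- for each task over the past 7 days.
SELECT name,
       AVG(DATEDIFF('second', scheduled_time, completed_time)) AS avg_run_seconds
  FROM snowflake.account_usage.task_history
 WHERE scheduled_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
   AND state = 'SUCCEEDED'
 GROUP BY name;
```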

Unless the SQL statements defined for the task can be optimized (either by rewriting the statements or using stored procedures), this difference is the expected average run time for the task (or tree of tasks). Choose the right size for the warehouse based on your analysis to ensure the task (or tree of tasks) finishes running within this window.

The following diagram shows a window of 1 minute in which a single task queued for 20 seconds and then ran for 40 seconds.

The following diagram shows a tree of tasks that requires 5 minutes on average to complete for each run. The diagram shows the window for 2 runs of the tree of tasks to complete. This window is calculated from the time the root task is scheduled to start until the last child task in the tree has completed running. In this example, the tree of tasks is shared with other, concurrent operations that queue while each of the 3 tasks in the tree is running. These concurrent operations consume all available resources when each task in the tree finishes running and before the next task starts running. As a result, the window for each task includes some amount of queuing while it waits for other operations to finish and relinquish compute resources.

Note that even if this tree of tasks ran on a dedicated warehouse, a brief lag would be expected after a predecessor task finishes running and any child task is executed; however, no queueing for shared resources with other operations would occur. The warehouse size you choose should be large enough to accommodate multiple child tasks that are triggered simultaneously by predecessor tasks.

Snowflake ensures only one instance of a task with a schedule (i.e. a standalone task or the root task in a tree of tasks) is executed at a given time. If a task is still running when the next scheduled execution time occurs, then that scheduled time is skipped.

Snowflake automatically resizes and scales the compute resources for serverless tasks. For tasks that rely on a warehouse to provide compute resources, choose an appropriate warehouse size for a given task to complete its workload within the defined schedule. For information, see Choosing a Warehouse Size (in this topic).

### Task Scheduling and Daylight Saving Time¶

The cron expression in a task definition supports specifying a time zone. A scheduled task runs according to the specified cron expression in the local time for a given time zone. Special care should be taken with regard to scheduling tasks for time zones that recognize daylight saving time. Tasks scheduled during specific times on days when the transition from standard time to daylight saving time (or the reverse) occurs can have unexpected behaviors.

For example:

• During the autumn change from daylight saving time to standard time, a task scheduled to start at 1 AM in the America/Los_Angeles time zone (i.e. 0 1 * * * America/Los_Angeles) would run twice: once at 1 AM and then again when 1:59:59 AM shifts to 1:00:00 AM local time. That is, there are two points in time when the local time is 1 AM.

• During the spring change from standard time to daylight saving time, a task scheduled to start at 2 AM in the America/Los_Angeles time zone (i.e. 0 2 * * * America/Los_Angeles) would not run at all because the local time shifts from 1:59:59 AM to 3:00:00 AM. That is, there is no point during that day when the local time is 2 AM.

To avoid unexpected task executions due to daylight saving time, either:

• Do not schedule tasks to run at a specific time between 1 AM and 3 AM (daily, or on days of the week that include Sundays), or

• Manually adjust the cron expression for tasks scheduled during those hours twice each year to compensate for the time change due to daylight saving time.
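For example, a task scheduled outside the 1 AM to 3 AM window avoids both anomalies. The task, warehouse, and table names below are hypothetical:

```sql
-- Runs daily at 4 AM Los Angeles time; 4 AM occurs exactly once per day,
-- even on days when the DST transition happens.
CREATE TASK nightly_report_task
  WAREHOUSE = reporting_wh
  SCHEDULE = 'USING CRON 0 4 * * * America/Los_Angeles'
AS
  INSERT INTO report_table SELECT CURRENT_TIMESTAMP();
```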

Users can define a simple tree-like structure of tasks that starts with a root task and is linked together by task dependencies. The current implementation supports a single path between any two nodes; i.e. an individual task can have only a single predecessor (parent) task. This differs from a Directed Acyclic Graph (DAG) structure, in which a single node can have multiple predecessors.

A simple tree of tasks is limited to a maximum of 1000 tasks total (including the root task). An individual task in the tree is limited to a single predecessor task; however, a task can have a maximum of 100 child tasks (i.e. other tasks that identify the task as a predecessor).
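A child task is linked to its predecessor with the AFTER clause when the task is created. A minimal sketch, using hypothetical task, warehouse, and table names:

```sql
-- Root task with a schedule.
CREATE TASK root_task
  WAREHOUSE = etl_wh
  SCHEDULE = '60 MINUTE'
AS
  INSERT INTO staging_table SELECT CURRENT_TIMESTAMP();

-- Child task that runs after the root task completes.
CREATE TASK child_task
  WAREHOUSE = etl_wh
  AFTER root_task
AS
  INSERT INTO final_table SELECT * FROM staging_table;
```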

Note

A brief lag occurs after a predecessor task finishes running and any child task is executed.

All tasks in a simple tree must have the same task owner (i.e. a single role must have the OWNERSHIP privilege on all of the tasks in the tree) and be stored in the same database and schema.

When ownership of all tasks in a tree of tasks is transferred at once, through either of the following activities, the links between all tasks in the tree are retained:

• The current owner of all tasks that compose the tree of tasks is dropped (using DROP ROLE). Ownership of the objects owned by the dropped role is transferred to the role that executes the DROP ROLE command.

• Ownership of all tasks that compose the tree of tasks is explicitly transferred to another role (e.g. by executing GRANT OWNERSHIP on all tasks in a schema).
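For example, ownership of every task in a schema can be transferred in a single statement; the schema and role names here are hypothetical:

```sql
-- Transfer ownership of all tasks in the schema to another role,
-- preserving existing grants.
GRANT OWNERSHIP ON ALL TASKS IN SCHEMA mydb.myschema
  TO ROLE new_task_owner
  COPY CURRENT GRANTS;
```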

### Overlapping Tree of Task Runs¶

By default, Snowflake ensures only one instance of a particular tree of tasks is allowed to run at a time. The next run of a root task is scheduled only after all child tasks in the tree have finished running. This means that if the cumulative time required to run all tasks in the tree exceeds the explicit scheduled time set in the definition of the root task, at least one run of the task tree is skipped.

The behavior is controlled by the ALLOW_OVERLAPPING_EXECUTION parameter on the root task; the default value is FALSE.

In the following example, a run of a task tree is scheduled to start when a prior run has not completed yet. The period of overlap, or concurrency, is identified in red. The diagram also identifies the span of time when each task queued before running in the user-managed warehouse. Note that if Snowflake-managed compute resources are used, there is no queuing period:

Overlapping runs may be tolerated (or even desirable) when read/write SQL operations executed by overlapping runs of a task tree do not produce incorrect or duplicate data. However, for other trees, task owners (i.e. the role with the OWNERSHIP privilege on all tasks in the tree) should set an appropriate schedule on the root task and choose an appropriate warehouse size (or use Snowflake-managed compute resources) to ensure an instance of the task tree runs to completion before the root task is next scheduled to run.

To better align a task tree with the schedule defined in the root task:

1. If feasible, increase the scheduling time between runs of the root task.

2. Consider modifying compute-heavy tasks to use Snowflake-managed compute resources. If the task relies on user-managed compute resources, increase the size of the warehouse that runs large or complex SQL statements or stored procedures in the task tree.

3. Analyze the SQL statements or stored procedure executed by each task. Determine if code could be rewritten to leverage parallel processing.

If none of the above solutions help, consider whether it is necessary to allow concurrent runs of the tree by setting ALLOW_OVERLAPPING_EXECUTION = TRUE on the root task. This parameter can be defined when creating a task (using CREATE TASK) or later (using ALTER TASK).
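For example (hypothetical task name); note that a task generally must be suspended before its parameters can be changed:

```sql
ALTER TASK root_task SUSPEND;
ALTER TASK root_task SET ALLOW_OVERLAPPING_EXECUTION = TRUE;
ALTER TASK root_task RESUME;
```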

## Versioning of Runs¶

To modify or recreate any task in a tree of tasks, the root task must first be suspended (using ALTER TASK … SUSPEND). When the root task is suspended, all future scheduled runs of the root task are cancelled; however, if any tasks are currently running (i.e. tasks in an EXECUTING state), these tasks and any descendent tasks continue to run using the current version.

Note

If the definition of a stored procedure called by a task changes while the tree of tasks is executing, the new logic could be executed when the stored procedure is called by the task in the current run.
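The suspend, modify, resume pattern can be sketched as follows; the task, warehouse, and table names are hypothetical, and any recreated child task must be resumed before the root task:

```sql
-- Suspend the root task before modifying any task in the tree.
ALTER TASK root_task SUSPEND;

-- Recreate a child task with modified SQL.
CREATE OR REPLACE TASK child_task
  WAREHOUSE = etl_wh
  AFTER root_task
AS
  INSERT INTO final_table SELECT * FROM staging_table;

-- Resume the child task, then the root task.
ALTER TASK child_task RESUME;
ALTER TASK root_task RESUME;
```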

## Setting Session Parameters for Tasks¶

You can set session parameters for the session in which a task runs. To do so, modify an existing task and set the desired parameter values (using ALTER TASK … SET session_parameter = value [, session_parameter = value ... ]).

A task supports all session parameters. For the complete list, see Parameters.

Note that a task does not support account or user parameters.
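For example (hypothetical task name; the task is suspended first because a started task generally cannot be altered):

```sql
ALTER TASK mytask SUSPEND;

-- TIMEZONE and TIMESTAMP_OUTPUT_FORMAT are standard session parameters.
ALTER TASK mytask SET
  TIMEZONE = 'America/Los_Angeles',
  TIMESTAMP_OUTPUT_FORMAT = 'YYYY-MM-DD HH24:MI:SS';

ALTER TASK mytask RESUME;
```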

## Manually Executing Tasks¶

The EXECUTE TASK command manually triggers a single run of a task, independent of the schedule defined for the task. This SQL command is useful for testing new or modified standalone tasks and task trees before you enable them to execute SQL code in production.

Call this SQL command directly in scripts or in stored procedures. In addition, this command supports integrating tasks in external data pipelines. Any third-party services that can authenticate into your Snowflake account and authorize SQL actions can execute the EXECUTE TASK command to run tasks.
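A run can be triggered directly (hypothetical task name):

```sql
EXECUTE TASK mytask;
```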

Note

Supported as a preview feature.

## Viewing the Task History¶

The following roles (or roles with the specified privileges) can use SQL to view the task history within a specified date range:

• Task owner (i.e. role that has the OWNERSHIP privilege on a task).

• Any role that has the global MONITOR EXECUTION privilege.

SQL

Query the TASK_HISTORY table function (in the Snowflake Information Schema).
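For example, recent task runs can be listed with the table function; the one-hour window and row limit below are illustrative choices:

```sql
SELECT *
  FROM TABLE(information_schema.task_history(
    scheduled_time_range_start => DATEADD('hour', -1, CURRENT_TIMESTAMP()),
    result_limit => 10))
 ORDER BY scheduled_time DESC;
```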

## Billing for Task Runs¶

The costs associated with running a task to execute SQL code differ depending on the source of the compute resources for the task:

User-managed warehouse

Snowflake bills your account for credit usage based on warehouse usage while a task is running, similar to the warehouse usage for executing the same SQL statements in a client or the Snowflake web interface. Per-second credit billing and warehouse auto-suspend give you the flexibility to start with larger warehouse sizes and then adjust the size to match your task workloads.

Snowflake-managed resources (i.e. serverless compute model)

Snowflake bills your account based on the actual compute resource usage, in contrast with customer-managed virtual warehouses, which consume credits when active and may sit idle or be overutilized. Charges are calculated based on total usage of the resources (including cloud service usage) measured in compute-hours credit usage.

Snowflake analyzes task runs in the task history dynamically to determine the ideal size of the compute resources, and suspends these compute resources to save costs. Snowflake manages load capacity, ensuring optimal compute resources to meet demand. To recover the management costs of Snowflake-provided compute resources, we apply a 1.5x multiplier to resource consumption. Note, however, that the serverless compute model could still reduce compute costs over user-managed warehouses; in some cases significantly.

Snowflake credits charged per compute-hour:

• Snowflake-managed compute resources: 1.5

• Cloud services: 1

Billing is similar to other Snowflake features such as Automatic Clustering of tables, Database Replication and Failover/Failback, and Snowpipe.

To retrieve the current credit usage for a specific task, query the SERVERLESS_TASK_HISTORY table function. Execute the following statement as the task owner (i.e. the role that has the OWNERSHIP privilege on the task):

```sql
SET num_credits = (SELECT SUM(credits_used)
  FROM TABLE(<database_name>.information_schema.serverless_task_history(
    date_range_start => DATEADD('day', -1, CURRENT_DATE()),
    date_range_end => CURRENT_DATE(),
    task_name => '<task_name>')
  )
);
```


Where:

<database_name>

Name of the database that contains the task.

<task_name>

Name of the task.

To retrieve the current credit usage for all serverless tasks, query the SERVERLESS_TASK_HISTORY view. Execute the following statement as an account administrator (i.e. a user with the ACCOUNTADMIN role):

```sql
SELECT start_time,
       end_time,
       task_id,
       task_name,
       credits_used,
       schema_id,
       schema_name,
       database_id,
       database_name
  FROM snowflake.account_usage.serverless_task_history
 ORDER BY start_time, task_id;
```


## Understanding the System Service¶

Snowflake runs tasks with the privileges of the task owner (i.e. the role that has OWNERSHIP privilege on the task), but task runs are not associated with a user. Instead, each run is executed by a system service. Tasks are decoupled from specific users to avoid complications that can arise when users are dropped, locked due to authentication issues, or have roles removed.

Because task runs are decoupled from a user, the query history for task runs is associated with the system service. SYSTEM is not a user in the account; it is a behind-the-scenes service. As such, there are no user credentials for this service, and no individual (from Snowflake or in your account) can assume its identity. Activity for the system service is limited to your account. The same encryption protections and other security protocols are built into this service as are enforced for other operations.

To support creating and managing tasks, Snowflake provides the following set of special DDL commands:

• CREATE TASK

• ALTER TASK

• DROP TASK

• DESCRIBE TASK

• SHOW TASKS

In addition, providers can view, grant, or revoke access to the necessary database objects for ELT using the following standard access control DDL:

• GRANT <privileges>

• REVOKE <privileges>

• SHOW GRANTS

To support retrieving information about tasks, Snowflake provides the following set of SQL functions:

• TASK_HISTORY

• TASK_DEPENDENTS

### Access Control Privileges¶

#### Creating Tasks¶

Creating tasks requires a role with a minimum of the following privileges:

| Object | Privilege | Notes |
| --- | --- | --- |
| Account | EXECUTE MANAGED TASK | Required only for tasks that rely on Snowflake-managed compute resources (serverless compute model). |
| Database | USAGE | |
| Schema | USAGE, CREATE TASK | |
| Warehouse | USAGE | Required only for tasks that rely on user-managed warehouses for compute resources. |

#### Owning Tasks¶

After a task is created, the task owner (i.e. the role that has the OWNERSHIP privilege on the task) must have the following privileges:

| Object | Privilege | Notes |
| --- | --- | --- |
| Account | EXECUTE TASK | Required to run any tasks the role owns. Revoking the EXECUTE TASK privilege on a role prevents all subsequent task runs from starting under that role. |
| Account | EXECUTE MANAGED TASK | Required only for tasks that rely on Snowflake-managed compute resources. |
| Database | USAGE | |
| Schema | USAGE | |
| Task | OWNERSHIP | |
| Warehouse | USAGE | |

In addition, the role must have the permissions required to run the SQL statement executed by the task.

In addition to the task owner, a role that has the OPERATE privilege on the task can suspend or resume the task. This role must have the USAGE privilege on the database and schema that contain the task. No other privileges are required.
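For example, suspend and resume rights can be delegated without transferring ownership; the object and role names below are hypothetical:

```sql
GRANT USAGE ON DATABASE mydb TO ROLE task_operator;
GRANT USAGE ON SCHEMA mydb.myschema TO ROLE task_operator;
GRANT OPERATE ON TASK mydb.myschema.mytask TO ROLE task_operator;
```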

When a task is resumed, Snowflake verifies that the task owner role has the privileges listed in Owning Tasks (in this topic).

For ease of use, we recommend creating a custom role (e.g. taskadmin) and assigning the EXECUTE TASK privilege to this role. Any role that can grant privileges (e.g. SECURITYADMIN or any role with the MANAGE GRANTS privilege) can then grant this custom role to any task owner role to allow altering their own tasks. To remove the ability for the task owner role to execute the task, it is only necessary to revoke this custom role from the task owner role. Note that if you choose not to create this custom role, an account administrator must revoke the EXECUTE TASK privilege from the task owner role.

For example, create a custom role named taskadmin and grant that role the EXECUTE TASK privilege. Then assign the taskadmin role to a task owner role named myrole:

```sql
USE ROLE securityadmin;

CREATE ROLE taskadmin;

-- set the active role to ACCOUNTADMIN before granting the account-level privileges to the new role
USE ROLE accountadmin;

GRANT EXECUTE TASK ON ACCOUNT TO ROLE taskadmin;

-- set the active role to SECURITYADMIN to show that this role can grant a role to another role
USE ROLE securityadmin;

GRANT ROLE taskadmin TO ROLE myrole;
```


For more information on creating custom roles and role hierarchies, see Configuring Access Control.

### Dropping a Task Owner Role¶

When the owner role of a given task (i.e. the role with the OWNERSHIP privilege on the task) is deleted, the task is “re-possessed” by the role that dropped the owner role. This ensures that ownership moves to a role that is closer to the root of the role hierarchy. When a task is re-possessed, it is automatically paused, i.e., all executions currently in flight complete processing, but new executions will not be scheduled until the task is resumed explicitly by the new owner. The rationale for this is to prevent a user with access to a particular role from leaving behind tasks that suddenly execute with higher permissions when the role is removed.

If the role that a running task is executing under is dropped while the task is running, the task completes processing under the dropped role.

## Workflow¶

This section provides a high-level overview of the task setup workflow.

1. Complete the steps in Creating a Task Administrator Role (in this topic) to create a role that can be used to execute the commands in the following steps.