Create dynamic tables¶
This topic outlines the key concepts for creating dynamic tables.
Before you begin, ensure that you have the privileges required to create dynamic tables and that change tracking is enabled on all objects used by the dynamic table query.
Some limitations might apply to creating dynamic tables. For a complete list, see Dynamic table limitations.
Enable change tracking¶
When creating a dynamic table with incremental refresh mode, if change tracking is not already enabled on the tables that it queries, Snowflake automatically attempts to enable change tracking on them. To support incremental refreshes, change tracking must be enabled with a non-zero Time Travel retention period on all underlying objects used by a dynamic table.
As base objects change, so does the dynamic table. If you recreate a base object, you must re-enable change tracking.
Note
Snowflake doesn’t automatically attempt to enable change tracking on dynamic tables created with full refresh mode.
To enable change tracking on a specific database object, use ALTER TABLE, ALTER VIEW, and similar commands on that object. The user creating the dynamic table must have the OWNERSHIP privilege to enable change tracking on all underlying objects.
To check if change tracking is enabled, use SHOW VIEWS, SHOW TABLES, and similar commands on the underlying objects, and inspect the change_tracking column.
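For example, assuming a base table named staging_table (the table name here is illustrative), you might enable and then verify change tracking as follows:

```sql
-- Enable change tracking on the base table.
ALTER TABLE staging_table SET CHANGE_TRACKING = TRUE;

-- Verify: the change_tracking column in the output should show ON.
SHOW TABLES LIKE 'staging_table';
```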
Supported base objects¶
Dynamic tables support the following base objects:
Tables
Snowflake-managed Apache Iceberg™ tables
Externally managed Apache Iceberg™ tables
Example: Create a simple dynamic table¶
Suppose that you want to create a dynamic table that contains the product_id and product_name columns from a table named staging_table, and you decide:
You want the data in your dynamic table to be at most 20 minutes behind the data in staging_table.
You want to use the warehouse mywh for the compute resources needed for the refresh.
You want the refresh mode to be chosen automatically. Snowflake recommends using the automatic refresh mode only during development. For more information, see Best practices for choosing dynamic table refresh modes.
You want the dynamic table to refresh synchronously at creation.
To create this dynamic table, you would execute the following CREATE DYNAMIC TABLE SQL statement:
CREATE OR REPLACE DYNAMIC TABLE my_dynamic_table
TARGET_LAG = '20 minutes'
WAREHOUSE = mywh
REFRESH_MODE = auto
INITIALIZE = on_create
AS
SELECT product_id, product_name FROM staging_table;
For a complete list of parameters and variant syntax, see the CREATE DYNAMIC TABLE reference.
Create dynamic tables that read from Snowflake-managed or externally managed Apache Iceberg™ tables¶
Creating a dynamic table from an Iceberg table is similar to creating one from a regular table. Execute the CREATE DYNAMIC TABLE SQL statement as you would for a regular table, using either a Snowflake-managed table or a table managed by an external catalog as the base object.
Dynamic tables that read from a Snowflake-managed Iceberg table as the base table are useful if you want your pipelines to operate on data stored in Snowflake-managed Iceberg tables. Note that Snowflake-managed Iceberg tables are read-write for Snowflake but read-only for external engines; external engines can't write to them.
Dynamic tables that read from Iceberg tables managed by external (non-Snowflake) catalogs, such as AWS Glue and written by engines like Apache Spark, are useful for processing data from external data lakes. You can create dynamic tables on top of externally managed data, continuously processing it in Snowflake without duplicating or ingesting the data.
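As a sketch, the following statement creates a dynamic table over a hypothetical Iceberg base table named my_iceberg_table (the table and column names are illustrative); the syntax is the same as for a regular base table:

```sql
CREATE DYNAMIC TABLE my_iceberg_pipeline
TARGET_LAG = '20 minutes'
WAREHOUSE = mywh
AS
SELECT order_id, order_total FROM my_iceberg_table;
```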
Limitations and considerations for using Iceberg tables¶
All limitations for regular dynamic tables and dynamic Iceberg tables still apply.
Additionally:
All limitations for Iceberg base tables apply. For more information, see Considerations and limitations.
You can create a dynamic table that reads from Snowflake native tables, Snowflake-managed Iceberg tables, and externally managed Iceberg tables.
Dynamic tables track changes at the file level for externally managed Iceberg base tables, unlike other base tables that track changes at the row level. Frequent copy-on-write operations (for example, updates or deletes) on externally managed Iceberg tables may impact the performance of incremental refreshes.
Create dynamic tables with immutability constraints¶
Immutability constraints give you more control over how and when your dynamic tables update. The constraints enable parts of the table to remain static, rather than the whole table always reflecting the latest query results.
By marking specific portions of a dynamic table as immutable, you can perform the following tasks:
Prevent propagation of updates or deletions to existing data.
Restrict inserts, updates, and deletes for rows that meet a condition.
Limit future modifications while still enabling incremental updates to other parts of the table.
The rest of the table (that is, the rows that don’t match the immutability condition) remains mutable and can be updated during a refresh.
To enforce immutability constraints, specify the IMMUTABLE WHERE parameter when you run the CREATE DYNAMIC TABLE or ALTER DYNAMIC TABLE command. To see whether a row is immutable, use the METADATA$IS_IMMUTABLE column. For example:
SELECT *, METADATA$IS_IMMUTABLE FROM my_dynamic_table;
The IMMUTABLE WHERE predicate is ignored during the initial refresh but applies to all subsequent refreshes. In full refresh mode, it limits recomputation to only the rows that don’t match the condition. Streams and dynamic tables in incremental refresh mode can read from these full refresh tables.
Examples of using immutability constraints¶
The following example creates a dynamic table and sets an immutability constraint on it by using the IMMUTABLE WHERE parameter.
CREATE DYNAMIC TABLE my_dynamic_table (id1 INT)
TARGET_LAG = '20 minutes'
WAREHOUSE = mywh
IMMUTABLE WHERE ( <expr> )
AS
SELECT id AS id1 FROM staging_table;
Columns referenced in the IMMUTABLE WHERE condition must be columns in the dynamic table, not columns from the base table. For example, the IMMUTABLE WHERE expression in the example above can only use id1, not id.
You can set only a single IMMUTABLE WHERE condition on a dynamic table. To replace an existing predicate, use the ALTER DYNAMIC TABLE command as shown in the following example:
-- Set or replace an existing predicate:
ALTER DYNAMIC TABLE my_dynamic_table SET IMMUTABLE WHERE ( <expr> );
-- Remove an existing predicate:
ALTER DYNAMIC TABLE my_dynamic_table UNSET IMMUTABLE;
To view the immutability constraints on your dynamic tables, run the SHOW DYNAMIC TABLES command. The immutable_where column shows the IMMUTABLE WHERE constraint set on the table, or NULL if none is set.
For information about compute cost, see Compute cost for immutability constraints.
Example: Use IMMUTABLE WHERE to prevent recomputation of old data if a dimension table changes¶
In the following example, updating a row in the dimension table causes facts from the last day that join with it to be reprocessed, rather than all historical facts:
CREATE DYNAMIC TABLE joined_data
TARGET_LAG = '1 minute'
WAREHOUSE = mywh
IMMUTABLE WHERE (timestamp_col < CURRENT_TIMESTAMP() - INTERVAL '1 day')
AS
SELECT F.primary_key primary_key, F.timestamp_col timestamp_col, D.value dim_value
FROM fact_table F
LEFT OUTER JOIN dimension_table D USING (primary_key);
Example: Unlimited retention dynamic table and limited retention base table¶
The following example creates a staging table with a limited window of recently added data and a dynamic table that stores all parsed and filtered data.
CREATE TABLE staging_data (raw TEXT, ts TIMESTAMP);
CREATE DYNAMIC TABLE parsed_data
TARGET_LAG = '1 minute'
WAREHOUSE = mywh
IMMUTABLE WHERE (ts < CURRENT_TIMESTAMP() - INTERVAL '7 days')
AS
SELECT parse_json(raw):event_id::string event_id, parse_json(raw):name::string name, parse_json(raw):region::string region, ts
FROM staging_data WHERE region = 'US';
-- Delete old staging data by using a task.
-- Note: task schedules are specified in minutes (or with USING CRON);
-- '1440 MINUTE' runs the task once a day.
CREATE TASK delete_old_staging_data
WAREHOUSE = mywh
SCHEDULE = '1440 MINUTE'
AS
DELETE FROM staging_data WHERE ts < CURRENT_TIMESTAMP() - INTERVAL '30 days';
Example: Full refresh with IMMUTABLE WHERE¶
Suppose a dynamic table must be in full refresh mode because of limitations on query constructs, such as the use of a Python UDTF. Although this normally prevents incremental processing, defining an immutable region with the IMMUTABLE WHERE clause lets a downstream dynamic table remain incremental, so it still benefits from performance optimizations even though the upstream table must be fully refreshed.
The following example shows a dynamic table that uses a Python UDTF, making it non-incrementalizable:
-- Python UDTFs require RUNTIME_VERSION and a HANDLER class defined in the
-- function body; the handler name below is a placeholder.
CREATE FUNCTION my_udtf(x VARCHAR)
RETURNS TABLE (output VARCHAR)
LANGUAGE PYTHON
RUNTIME_VERSION = '3.10'
HANDLER = 'MyUdtfHandler'
AS $$ ... $$;
CREATE DYNAMIC TABLE udtf_dt
TARGET_LAG = '1 hour'
WAREHOUSE = mywh
REFRESH_MODE = FULL
IMMUTABLE WHERE (ts < current_timestamp() - interval '1 day')
AS
SELECT ts, data, output, join_key FROM input_table, TABLE(my_udtf(data));
CREATE DYNAMIC TABLE incremental_join_dt
TARGET_LAG = '1 hour'
WAREHOUSE = mywh
REFRESH_MODE = INCREMENTAL
IMMUTABLE WHERE (ts < current_timestamp() - interval '1 day')
AS
SELECT * FROM udtf_dt JOIN dim_table USING (join_key);
Limitations and considerations when setting immutability constraints¶
All limitations of regular dynamic tables still apply.
Additionally:
There can only be a single IMMUTABLE WHERE predicate on a dynamic table. Setting another one with an ALTER DYNAMIC TABLE … SET IMMUTABLE WHERE command replaces any existing predicate.
IMMUTABLE WHERE constraints can’t contain the following items:
Subqueries.
Nondeterministic functions, except for timestamp functions like CURRENT_TIMESTAMP() or CURRENT_DATE(). If you use timestamp functions, you must use them so that the immutable region grows over time. For example, TIMESTAMP_COL < CURRENT_TIMESTAMP() is allowed, but TIMESTAMP_COL > CURRENT_TIMESTAMP() isn’t.
User-defined or external functions.
Metadata columns; for example, those starting with METADATA$.
Columns that are the result of aggregates, window functions, or nondeterministic functions, or columns that are passed through a window function operator. In the following dynamic tables, only col3 can be used in IMMUTABLE WHERE predicates:
CREATE DYNAMIC TABLE aggregates
TARGET_LAG = '1 minute'
WAREHOUSE = mywh
AS
SELECT col3, SUM(col2) AS col2 FROM input_table GROUP BY col3;

CREATE DYNAMIC TABLE window_fns
TARGET_LAG = '1 minute'
WAREHOUSE = mywh
AS
SELECT col3, SUM(col4) OVER (PARTITION BY col3 ORDER BY col4) AS col2, col5 FROM input_table;
IMMUTABLE WHERE constraints are copied during cloning and replication with no limitations.
Create dynamic tables by using backfill¶
Backfilling is a zero-copy, low-cost operation that makes source data immediately available in a dynamic table. You can create a dynamic table with initial data backfilled from a regular table while still defining a custom refresh query for future updates. When using immutability constraints, only the immutable region is backfilled and remains unchanged even if it no longer matches the source. The mutable region is computed from the dynamic table’s definition and base tables as usual.
Only data defined by the IMMUTABLE WHERE immutability constraint can be backfilled because the backfill data must remain unchanged, even if it differs from the upstream source.
Examples of using backfill¶
The following examples show how to create new dynamic tables from tables with backfilled data.
Each column in the new dynamic table must exist in the backfill table with a compatible data type, and the columns must appear in the same order as in the backfill table. Table properties and privileges from the backfill table aren’t copied.
If the Time Travel parameters AT | BEFORE are specified, data from the backfill table is copied as of the specified time.
Example: Backfill from a part of the table¶
The following example backfills the immutable region of my_dynamic_table from my_backfill_table and the mutable region from the dynamic table’s definition.
In this scenario, if the dynamic table is created with incremental refresh mode, reinitialization deletes all rows that are mutable and repopulates only the mutable region. If the dynamic table is created with full refresh mode, a full refresh is triggered with the same effect.
CREATE DYNAMIC TABLE my_dynamic_table (day TIMESTAMP, totalSales NUMBER)
IMMUTABLE WHERE (day < '2025-01-01')
BACKFILL FROM my_backfill_table
TARGET_LAG = '20 minutes'
WAREHOUSE = mywh
AS SELECT DATE_TRUNC('day', ts) AS day, sum(price)
FROM my_base_table
GROUP BY day;
Example: Recover or modify data in a dynamic table by using backfill¶
You can’t directly edit a dynamic table’s data or definition. To recover or fix data, complete the following workaround steps:
Clone the dynamic table to a regular table.
Modify the cloned table as needed.
Backfill from the edited table into a new dynamic table.
In the following example, my_dynamic_table aggregates daily sales data from the sales base table:
CREATE OR REPLACE TABLE sales(item_id INT, ts TIMESTAMP, sales_price FLOAT);
INSERT INTO sales VALUES (1, '2025-05-01 01:00:00', 10.0), (1, '2025-05-01 02:00:00', 15.0), (1, '2025-05-01 03:00:00', 11.0);
INSERT INTO sales VALUES (1, '2025-05-02 00:00:00', 11.0), (1, '2025-05-02 05:00:00', 13.0);
CREATE DYNAMIC TABLE my_dynamic_table
TARGET_LAG = 'DOWNSTREAM'
WAREHOUSE = mywh
INITIALIZE = on_create
IMMUTABLE WHERE (day <= '2025-05-01')
AS
SELECT item_id, date_trunc('DAY', ts) day, count(sales_price) AS sales_count FROM sales
GROUP BY item_id, day;
SELECT item_id, to_char(day, 'YYYY-MM-DD') AS day, sales_count FROM my_dynamic_table;
+---------+------------+-------------+
| ITEM_ID | DAY        | SALES_COUNT |
|---------+------------+-------------|
|       1 | 2025-05-01 |           3 |
|       1 | 2025-05-02 |           2 |
+---------+------------+-------------+
Optionally, you can delete the old data from the base table to save storage costs. Because those rows fall within the dynamic table’s immutable region, they remain in the dynamic table:
DELETE FROM sales WHERE ts < '2025-05-02';
ALTER DYNAMIC TABLE my_dynamic_table REFRESH;
SELECT item_id, to_char(day, 'YYYY-MM-DD') AS day, sales_count FROM my_dynamic_table;
Later, you find a sales error on 2025-05-01, where sales_count should be 2. To correct this:
Clone my_dynamic_table to a regular table:
CREATE OR REPLACE TABLE my_dt_clone_table CLONE my_dynamic_table;
Update the cloned table:
UPDATE my_dt_clone_table SET sales_count = 2 WHERE day = '2025-05-01';

SELECT item_id, to_char(day, 'YYYY-MM-DD') AS day, sales_count FROM my_dt_clone_table;

+---------+------------+-------------+
| ITEM_ID | DAY        | SALES_COUNT |
|---------+------------+-------------|
|       1 | 2025-05-01 |           2 |
|       1 | 2025-05-02 |           2 |
+---------+------------+-------------+
Recreate the dynamic table by using the edited clone as the backfill source.
CREATE OR REPLACE DYNAMIC TABLE my_dynamic_table
BACKFILL FROM my_dt_clone_table
IMMUTABLE WHERE (day <= '2025-05-01')
TARGET_LAG = 'DOWNSTREAM'
WAREHOUSE = mywh
INITIALIZE = on_create
AS
SELECT item_id, date_trunc('DAY', ts) day, count(sales_price) AS sales_count FROM sales
GROUP BY item_id, day;
This approach lets you recover or correct data in a dynamic table without modifying the base table:
SELECT item_id, to_char(day, 'YYYY-MM-DD') AS day, sales_count FROM my_dynamic_table;
+---------+------------+-------------+
| ITEM_ID | DAY        | SALES_COUNT |
|---------+------------+-------------|
|       1 | 2025-05-01 |           2 |
|       1 | 2025-05-02 |           2 |
+---------+------------+-------------+
Example: Modify a dynamic table’s schema by using backfill¶
You can’t directly alter the schema of a dynamic table. To update the schema (for example, to add a column), follow these steps:
Clone the dynamic table to a regular table. The following example uses my_dynamic_table, created from sales in the previous example:
CREATE OR REPLACE TABLE my_dt_clone_table CLONE my_dynamic_table;
Modify the schema of the cloned table:
ALTER TABLE my_dt_clone_table ADD COLUMN sales_avg FLOAT;

SELECT item_id, to_char(day, 'YYYY-MM-DD') AS day, sales_count, sales_avg FROM my_dt_clone_table;
Optionally, populate the new column with data.
Recreate the dynamic table by using the edited clone as the backfill source.
CREATE OR REPLACE DYNAMIC TABLE my_dynamic_table
BACKFILL FROM my_dt_clone_table
IMMUTABLE WHERE (day <= '2025-05-01')
TARGET_LAG = 'DOWNSTREAM'
WAREHOUSE = mywh
INITIALIZE = on_create
AS
SELECT item_id, date_trunc('DAY', ts) day, count(sales_price) AS sales_count, avg(sales_price) AS sales_avg FROM sales
GROUP BY item_id, day;
This approach lets you change a dynamic table’s schema without modifying the base table:
SELECT item_id, to_char(day, 'YYYY-MM-DD') AS day, sales_count, sales_avg, metadata$is_immutable AS immutable FROM my_dynamic_table ORDER BY item_id, day;

+---------+------------+-------------+-----------+-----------+
| ITEM_ID | DAY        | SALES_COUNT | SALES_AVG | IMMUTABLE |
|---------+------------+-------------+-----------+-----------|
|       1 | 2025-05-01 |           3 |      NULL | TRUE      |
|       1 | 2025-05-02 |           2 |        12 | FALSE     |
+---------+------------+-------------+-----------+-----------+
Limitations and considerations for using backfill¶
All limitations of regular dynamic tables and the limitations of immutability constraints still apply.
Additionally, the following limitations and considerations apply:
Only regular tables are currently supported as backfill tables.
You can’t specify policies or tags in the new dynamic table because they are copied from the backfill table.
Clustering keys in the new dynamic table and backfill table must be the same.
If you create a dynamic table with BACKFILL FROM, you can’t remove its IMMUTABLE WHERE constraint later.
Best practices for creating dynamic tables¶
Chain together pipelines of dynamic tables¶
Rather than defining one large dynamic table with many nested statements, chain together pipelines of small dynamic tables.
You can set up a dynamic table to query other dynamic tables. For instance, imagine a scenario where your data pipeline extracts data from a staging table to update various dimension tables (for example, customer, product, date, and time). Additionally, your pipeline updates an aggregate sales table based on the information from these dimension tables. By configuring the dimension tables to query the staging table and the aggregate sales table to query the dimension tables, you create a cascade effect similar to a task graph.
In this setup, the refresh for the aggregate sales table executes only after the refreshes for the dimension tables have successfully completed, which ensures data consistency and meets lag targets. Through an automated refresh process, any changes in the source tables trigger refreshes in all dependent tables at the appropriate times.
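As a minimal sketch of such a pipeline (the table and column names are illustrative), a dimension table can read from the staging table, and an aggregate table can read from the dimension table:

```sql
-- The dimension table refreshes from the staging table.
CREATE DYNAMIC TABLE customer_dim
TARGET_LAG = 'DOWNSTREAM'
WAREHOUSE = mywh
AS
SELECT DISTINCT customer_id, customer_name FROM staging_sales;

-- The aggregate table refreshes only after its upstream dimension table.
CREATE DYNAMIC TABLE aggregate_sales
TARGET_LAG = '10 minutes'
WAREHOUSE = mywh
AS
SELECT c.customer_id, SUM(s.amount) AS total_sales
FROM staging_sales s
JOIN customer_dim c USING (customer_id)
GROUP BY c.customer_id;
```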
Use a “controller” dynamic table for complex task graphs¶
When you have a complex graph of dynamic tables with many roots and leaves and you want to perform operations (for example, changing lag, manual refresh, or suspension) on the full task graph with a single command, do the following:
Set the value for the TARGET_LAG of all of your dynamic tables to DOWNSTREAM.
Create a “controller” dynamic table that reads from all of the leaves in your task graph. To ensure that this controller doesn’t consume resources, select a constant with LIMIT 0:
CREATE DYNAMIC TABLE controller
TARGET_LAG = <target_lag>
WAREHOUSE = <warehouse>
AS
SELECT 1 A FROM <leaf1>, …, <leafN> LIMIT 0;
Use the controller to control the whole graph. For example:
Set a new target lag for the task graph:
ALTER DYNAMIC TABLE controller SET TARGET_LAG = <new_target_lag>;
Manually refresh the task graph:
ALTER DYNAMIC TABLE controller REFRESH;
Use transient dynamic tables to reduce storage cost¶
Transient dynamic tables maintain data reliably over time and support Time Travel within the data retention period, but their data isn’t protected by Fail-safe. By default, a permanent dynamic table’s data is retained for seven days in Fail-safe storage.
For dynamic tables with high refresh throughput, this can significantly increase storage consumption. Therefore, you should make a dynamic table transient only if its data doesn’t need the same level of data protection and recovery provided by permanent tables.
You can create a transient dynamic table or clone existing dynamic tables to transient dynamic tables using the CREATE DYNAMIC TABLE statement.
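For example, reusing the mywh warehouse and staging_table from the earlier example, you can create a transient dynamic table by adding the TRANSIENT keyword:

```sql
CREATE TRANSIENT DYNAMIC TABLE my_transient_dynamic_table
TARGET_LAG = '20 minutes'
WAREHOUSE = mywh
AS
SELECT product_id, product_name FROM staging_table;
```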
Troubleshoot dynamic table creation¶
When you create a dynamic table, the initial refresh happens either on a schedule (ON_SCHEDULE) or immediately at creation (ON_CREATE). The initial data population, or initialization, depends on when this initial refresh occurs. For example, for ON_CREATE, initialization might take longer if it triggers refreshes of upstream dynamic tables.
Initialization can take some time, depending on how much data is scanned. To view progress, do the following:
Sign in to Snowsight.
In the navigation menu, select Monitoring » Query History.
In the Filters dropdown, enter CREATE DYNAMIC TABLE in the SQL Text filter and enter your warehouse name in the Warehouse filter.
Select the query with your dynamic table under SQL text and use the Query Details and Query Profile tabs to track progress.
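As an alternative to Snowsight, you can check refresh progress in SQL with the DYNAMIC_TABLE_REFRESH_HISTORY table function. The sketch below assumes a dynamic table named my_dynamic_table; adjust the name and selected columns to your needs:

```sql
SELECT name, state, refresh_start_time, refresh_end_time
FROM TABLE(INFORMATION_SCHEMA.DYNAMIC_TABLE_REFRESH_HISTORY(NAME => 'my_dynamic_table'))
ORDER BY refresh_start_time DESC;
```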