Snowflake interactive tables and interactive warehouses¶
Overview¶
Snowflake interactive tables and interactive warehouses are specialized types of Snowflake objects that are optimized for low-latency, high-concurrency workloads. They are ideal for use cases such as real-time dashboards, data-powered APIs, and other high-concurrency serving workloads.
- Interactive warehouse
A warehouse that’s optimized for low-latency, interactive workloads. The warehouse contains a query engine that’s optimized for low-latency, high-concurrency queries.
- Interactive table
A type of Snowflake table that’s optimized for low-latency, high-concurrency workloads. Interactive tables work well with interactive warehouses and can also be used with standard Snowflake warehouses, but you get the best performance gains when you query these tables through interactive warehouses.
Use cases for interactive tables¶
- Real-time dashboards
Serving dashboard queries that power thousands of user requests with low latency and high concurrency. Especially useful for serving use cases that require some aggregation and flexibility.
- Data-powered APIs
Serving data-powered APIs that require predictable, consistent latency and involve repetitive query shapes.
- Alerting and agentic AI workloads
For observability and agentic AI workloads that can generate unpredictable query load spikes and require a low cost per query.
Getting started with interactive tables¶
To get started with interactive tables, complete the following sequence of steps:
1. Create an interactive table, using a standard warehouse. For more information, see Creating an interactive table.
2. Create an interactive warehouse. For more information, see Creating an interactive warehouse.
3. Resume the interactive warehouse. For more information, see Resuming and suspending an interactive warehouse.
4. Add the interactive table to the interactive warehouse. For more information, see Adding an interactive table to an interactive warehouse.
5. Start querying the interactive table through the interactive warehouse. For more information, see Querying an interactive table.
Working with interactive tables and interactive warehouses¶
The following procedures explain how to create and manage all the required objects to run queries using interactive tables. When you are trying this feature for the first time, perform these procedures in the following order.
Creating an interactive table¶
Table creation follows the standard CTAS (CREATE TABLE AS SELECT) syntax, with the additional INTERACTIVE keyword that defines the table type.
The CREATE INTERACTIVE TABLE command also requires a CLUSTER BY clause. Specify one or more columns in the CLUSTER BY clause to match the WHERE clauses in your most time-critical queries. The columns you specify in the CLUSTER BY clause can significantly affect the performance of queries on the interactive table. Therefore, choose the clustering columns carefully. For more information about choosing the best clustering columns, see Clustering Keys & Clustered Tables.
Note
You run the CREATE INTERACTIVE TABLE command with a standard warehouse. You only use the interactive warehouse in later steps, to query the interactive table.
The following command creates an interactive table containing the same columns and data
as a standard table. The CLUSTER BY clause refers to a column named id from the source table.
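A sketch of that statement (the source table name orders_source is illustrative):

```sql
CREATE OR REPLACE INTERACTIVE TABLE orders
  CLUSTER BY (id)
  AS SELECT * FROM orders_source;
```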
Specifying auto-refresh for an interactive table¶
To make an interactive table automatically refresh using data from some other table, specify the TARGET_LAG clause with an interval. When you specify TARGET_LAG, you must also specify the WAREHOUSE clause and the name of a standard warehouse that Snowflake will use for regular maintenance refreshes.
You can also optionally specify INITIALIZATION_WAREHOUSE to run initial refreshes on a separate warehouse. Initial refreshes often process more data than maintenance refreshes. In many cases, you can use a larger warehouse, such as 2XL, for the initial refresh and a smaller warehouse, such as S, for ongoing maintenance refreshes.
The time interval for the TARGET_LAG clause lets you specify the maximum lag in terms of some number of minutes, hours, or days:
If you don’t specify a unit, the number represents seconds. The minimum value is 60 seconds, or 1 minute.
For example, the following CREATE INTERACTIVE TABLE statement defines an interactive table that lags no more than 20 minutes behind a specified source table, uses a larger warehouse for the initial refresh, and uses a smaller warehouse for ongoing maintenance refreshes:
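A sketch of such a statement; the warehouse names (maintenance_wh, init_wh_2xl) and the source table name (orders_source) are illustrative:

```sql
CREATE OR REPLACE INTERACTIVE TABLE orders
  CLUSTER BY (id)
  TARGET_LAG = '20 minutes'
  WAREHOUSE = maintenance_wh
  INITIALIZATION_WAREHOUSE = init_wh_2xl
  AS SELECT * FROM orders_source;
```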
For more information about choosing an appropriate lag time that balances costs and freshness of data, see How Snowflake schedules refreshes. For guidance on using separate warehouses for initial and maintenance refreshes, see Adjust your warehouse configuration. Similar considerations apply to interactive tables as to dynamic tables.
You can also manually trigger a refresh for an auto-refresh interactive table by running a refresh command.
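For example, a manual refresh might look like the following. This sketch assumes the interactive-table analogue of the dynamic table refresh command; verify the exact syntax in the SQL reference:

```sql
ALTER INTERACTIVE TABLE orders REFRESH;
```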
Creating an interactive warehouse¶
After you create an interactive table, querying that table with optimal performance requires an interactive warehouse. Specify the keyword INTERACTIVE in the CREATE WAREHOUSE or CREATE OR REPLACE WAREHOUSE command.
Optionally, you can specify a TABLES clause with a comma-separated list of interactive table names. Using that clause immediately associates those interactive tables with the interactive warehouse.
The following command creates an interactive warehouse that’s associated with the interactive table named orders. In this case, you can immediately run a USE WAREHOUSE command for the interactive warehouse, and begin running queries for the interactive table:
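A sketch of that statement (the warehouse size and the exact form of the TABLES clause are assumptions; verify them against the CREATE WAREHOUSE reference):

```sql
CREATE OR REPLACE INTERACTIVE WAREHOUSE interactive_demo
  WAREHOUSE_SIZE = 'MEDIUM'
  TABLES = (orders);
```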
The following command creates an interactive warehouse with no associated interactive tables. In this case, you run ALTER WAREHOUSE commands afterward to associate interactive tables with the interactive warehouse:
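For example (the warehouse size shown is illustrative):

```sql
CREATE OR REPLACE INTERACTIVE WAREHOUSE interactive_demo
  WAREHOUSE_SIZE = 'MEDIUM';
```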
After you create an interactive warehouse, it remains in a suspended state until you resume it. You can configure auto-suspend and auto-resume for interactive warehouses. The minimum auto-suspend interval for an interactive warehouse is 24 hours (86400 seconds). For details, see Resuming and suspending an interactive warehouse.
Resuming and suspending an interactive warehouse¶
The following command resumes an interactive warehouse. You must do this after creating the warehouse, because it’s created in a suspended state:
You also do this to start running queries through the warehouse, if you manually suspended the warehouse.
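For example:

```sql
ALTER WAREHOUSE interactive_demo RESUME;
```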
Queries will be slow while the cache warms after a resume. Warming speed depends on the size of the data and the warehouse size. An XS warehouse warms at roughly 300-400 MB/s. The bigger the table, the longer cache warming takes; larger warehouses warm faster.
The following command suspends an interactive warehouse:
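For example:

```sql
ALTER WAREHOUSE interactive_demo SUSPEND;
```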
Auto-suspend and auto-resume for interactive warehouses¶
Interactive warehouses support auto-suspend and auto-resume. You can set the AUTO_SUSPEND and AUTO_RESUME properties when creating or altering an interactive warehouse.
The minimum AUTO_SUSPEND value for an interactive warehouse is 86400 seconds (24 hours). This minimum ensures that the cache stays warm long enough to provide consistent low-latency performance. AUTO_SUSPEND will be set to 86400 if you specify a value less than 86400.
The following example creates an interactive warehouse with auto-suspend after 24 hours of inactivity, and auto-resume enabled:
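A sketch of that statement:

```sql
CREATE OR REPLACE INTERACTIVE WAREHOUSE interactive_demo
  AUTO_SUSPEND = 86400
  AUTO_RESUME = TRUE;
```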
You can also set these properties on an existing interactive warehouse:
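For example:

```sql
ALTER WAREHOUSE interactive_demo SET
  AUTO_SUSPEND = 86400
  AUTO_RESUME = TRUE;
```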
Note
In a production environment, you typically use interactive warehouses for workloads running many concurrent queries 24x7, or where low latency (e.g. <1 second) is crucial for queries. Suspending and resuming an interactive warehouse (whether manually or through auto-suspend) incurs cache warm-up time (this can be significant when you have large tables). Evaluate whether auto-suspend is appropriate for your workload pattern.
Adding an interactive table to an interactive warehouse¶
To get optimal query performance for an interactive table, you should use an interactive warehouse.
Before you query an interactive table from an interactive warehouse, you can add the interactive table to the interactive warehouse. Adding a table to an interactive warehouse indicates to Snowflake that the table needs to be maintained in the data cache. If you didn’t specify the interactive tables to associate with the interactive warehouse by using the TABLES clause in your CREATE INTERACTIVE WAREHOUSE command, you can do that later by using an ALTER WAREHOUSE command.
The following command associates the orders table with the interactive_demo warehouse. You can specify multiple table names, separated by commas, with the ADD TABLES clause.
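For example:

```sql
ALTER WAREHOUSE interactive_demo ADD TABLES (orders);
```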
If the interactive table is already associated with the interactive warehouse, the command succeeds but has no effect. You can associate an interactive table with multiple interactive warehouses.
This action starts the data cache warming process. Cache warming time depends on the size of the data and the warehouse size. An XS warehouse warms at roughly 300-400 MB/s. The bigger the table, the longer cache warming takes; larger warehouses warm faster.
The warming process doesn’t block the warehouse from accepting new queries. Warming priority, from highest to lowest, is:
1. User-issued queries
2. Newly added micro-partitions (through auto-refresh or other means of data ingestion)
3. Any existing data
While not strictly required, we strongly recommend adding tables you use frequently to your interactive warehouse. Doing so helps you achieve lower latency and higher concurrency.
Because cache warming depends on your queries, the best way to monitor whether the cache is warm is to review the remote read percentage in Snowsight Query Profile. For programmatic access to query operator statistics, see GET_QUERY_OPERATOR_STATS. In ideal execution scenarios, low-latency queries should have a remote read percentage of 0%.
Querying an interactive table¶
In your query session, make sure that the warehouse for your current session is an interactive warehouse:
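For example:

```sql
USE WAREHOUSE interactive_demo;
```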
After this, you can query your interactive table normally.
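For example, a typical point lookup (the column names are illustrative):

```sql
SELECT id, order_status
FROM orders
WHERE id = 1234;
```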
Note
In an interactive warehouse, you can only query interactive tables. To query other types of Snowflake tables, such as standard tables or hybrid tables, switch to a standard warehouse first.
Certain types of queries are especially suited for interactive tables. For more information, see Use cases for interactive tables.
Removing an interactive table from an interactive warehouse¶
You can detach one or more interactive tables from an interactive warehouse by running an ALTER WAREHOUSE command with the DROP TABLES clause.
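For example:

```sql
ALTER WAREHOUSE interactive_demo DROP TABLES (orders);
```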
Note
The interactive tables still exist after this operation. This ALTER WAREHOUSE clause isn’t the same as performing the SQL command DROP TABLE.
Automatically handling statement timeouts¶
To optimize for high-concurrency queries on interactive warehouses, Snowflake automatically cancels queries after 5 seconds. You can specify a fallback warehouse to allow automatic reruns of timed-out queries. By offloading timed-out queries to a secondary compute resource, you ensure that temporary resource contention or large scans don’t block the interactive user experience.
This fallback behavior is transparent to the client issuing the query; it behaves as an internal retry mechanism.
Configuring a fallback warehouse provides an automated retry mechanism for these canceled queries that exceed 5 seconds. When a fallback warehouse is added to an interactive warehouse, any query that exceeds the timeout threshold on the primary warehouse is automatically cancelled and retried on the designated fallback warehouse.
Note
The fallback warehouse is a standard warehouse and can be shared with non-interactive workloads. You should usually pick a warehouse size that’s the same as or larger than the interactive warehouse.
Timeout Threshold: The STATEMENT_TIMEOUT_IN_SECONDS parameter for queries on interactive warehouses is fixed at a default and maximum of 5 seconds. If a query doesn’t complete within this window, Snowflake automatically retries the query on the fallback warehouse.
Retry Logic: When a retry on the fallback warehouse occurs, the time spent on the failed attempt shows up as fault_handling_time in the Query Profile.
Warehouse State: The fallback warehouse must be started (or set to auto-resume) to accept the retried query. Standard credit consumption applies to the fallback warehouse once it is active.
RBAC requirements: To query with fallback support, the querying role must have USAGE on both the interactive warehouse and its fallback warehouse. To set a fallback warehouse, the administrator role must have ALTER WAREHOUSE on the interactive warehouse and USAGE on the fallback warehouse.
To set a fallback warehouse, you can use the following command:
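A sketch, assuming the property name FALLBACK_WAREHOUSE (matching the SHOW WAREHOUSES column) and a standard warehouse named standard_wh:

```sql
ALTER WAREHOUSE interactive_demo SET FALLBACK_WAREHOUSE = standard_wh;
```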
To remove a fallback warehouse, you can use the following command:
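A sketch, assuming the property name FALLBACK_WAREHOUSE:

```sql
ALTER WAREHOUSE interactive_demo UNSET FALLBACK_WAREHOUSE;
```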
To view the fallback warehouse for an interactive warehouse, you can use the following command:
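For example, you can run:

```sql
SHOW WAREHOUSES LIKE 'interactive_demo';
```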
and inspect the FALLBACK_WAREHOUSE column.
Interactive table performance considerations¶
The following sections explain how to solve performance issues that you might encounter due to the special characteristics of interactive tables and the workloads they’re best suited for.
Query best practices for interactive warehouses¶
Interactive warehouses are optimized for selective query workloads. Queries with good selectivity see substantially greater performance improvements than other query types.
Scenario 1: Narrow projection versus full-table projection¶
This query is likely to see more benefit:
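For example, a narrow projection with a selective filter (table and column names are illustrative):

```sql
SELECT order_id, order_status, total_price
FROM orders
WHERE customer_id = 42;
```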
This query is highly selective because it only requires a few columns. Snowflake can optimize by loading only the columns required for the query.
This query is likely to see limited benefit:
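For example, a full-table projection:

```sql
SELECT * FROM orders;
```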
This query processes all columns. Although the query is simple, Snowflake must process a large amount of data, which might exceed the size of the cache. Even if the content of the table fits in the cache, scanning all of it can evict data that other queries need, causing those queries to load new data into the cache at runtime and leading to lower concurrency.
Scenario 2: Targeted filters versus broad time-range filters¶
This query is likely to see more benefit:
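For example (table and column names are illustrative):

```sql
SELECT item_id, COUNT(*) AS order_count
FROM sales
WHERE item_id IN (101, 102, 103)
  AND sale_date >= DATEADD(day, -7, CURRENT_DATE())
GROUP BY item_id;
```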
The conditions in the WHERE clause make this query highly selective. The IN clause limits the results to relatively few items, and the time comparison further limits the data to a certain time period.
This query is likely to see limited benefit:
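For example, a year-long time range over the same illustrative table:

```sql
SELECT item_id, SUM(amount) AS total_amount
FROM sales
WHERE sale_date >= DATEADD(year, -1, CURRENT_DATE())
GROUP BY item_id;
```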
Asking for data for an entire year makes this query less selective. If your dataset is big, this query might process all rows in the table.
Other complexities such as large joins (for example, by joining two fact tables), or compute-intensive expressions such as regular expressions, might result in lower concurrency due to higher use of compute resources. See Choosing a size for an interactive warehouse for information about optimizing for those situations.
Data layout best practices for interactive tables¶
Interactive tables follow standard Snowflake best practices for performance. In particular,
interactive tables benefit from a well-clustered table, a table that’s sorted based on the same
column or columns that you are filtering on. For example, if your query often filters on a TIMESTAMP
column such as sale_date, then it makes sense to use that column as the clustering key when creating
the interactive table. For example, you might create the interactive table as follows:
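A sketch of that statement (the source table name sales_source is illustrative):

```sql
CREATE OR REPLACE INTERACTIVE TABLE sales
  CLUSTER BY (sale_date)
  AS SELECT * FROM sales_source;
```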
That way, SELECT queries that filter on sale_date can quickly skip all irrelevant data
and return results. For example, the following query filters on a date range by testing the
sale_date column:
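For example:

```sql
SELECT item_id, amount
FROM sales
WHERE sale_date BETWEEN '2025-06-01' AND '2025-06-07';
```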
For more details about choosing the best clustering keys, see Clustering Keys & Clustered Tables.
Snowflake Cortex Code supports automatic clustering key selection for interactive tables. To use it, start Cortex Code and prompt it to suggest a clustering key using the interactive-clustering-key-recommendation skill.
Choosing a size for an interactive warehouse¶
You should start by sizing your warehouse based on the approximate size of the working set in the interactive table. An interactive warehouse pre-warms the data cache based on the tables you’ve attached to it with an ALTER WAREHOUSE … ADD TABLES (table_1, table_2, …) command.
For example, suppose your table is 120 TB (compressed) and represents one year’s worth of data. If you mostly query the last 7 days, you should size your warehouse as MEDIUM so it can hold the last 7 days of data (120 TB / 365 days × 7 days ≈ 2.3 TB). Interactive warehouses range in size from XSMALL to 4XLARGE; see the warehouse sizing table for more information.
Once you’ve completed all your query and layout optimizations, consider scaling your warehouse to meet demand. Choose a minimum number of clusters sufficient to support the expected number of concurrent users; see Multi-cluster warehouses.
Sizing by working set matters because the interactive warehouse utilizes local storage caching. While your entire data set (table) is always accessible, accessing non-cached data does incur higher read latency on the first read.
Choose a warehouse size to fit the needs of your workloads, and experiment with your data and workload to determine the optimal size for your interactive warehouse.
Tip
For good performance, pick a cache size that’s sufficient to hold your working set. The warehouse doesn’t need to fit your entire table into the cache.
| Working Set | Warehouse Size |
|---|---|
| Less than ~350 GB | XSMALL |
| ~350 GB to ~600 GB | SMALL |
| ~600 GB to ~1.2 TB | MEDIUM |
| ~1.2 TB to ~2.5 TB | LARGE |
| ~2.5 TB to ~5.5 TB | XLARGE |
| ~5.5 TB to ~11 TB | 2XLARGE |
| ~11 TB to ~22 TB | 3XLARGE |
| ~22 TB to ~44 TB | 4XLARGE |
Performance optimization for interactive analytics¶
Interactive warehouses are optimized for short-running queries and enforce a default statement timeout of 5 seconds. You might encounter patterns where queries run slower than expected. The following sections describe common problems, how to recognize them, and what to try.
For all of these patterns, your primary diagnostic tool is the Snowsight Query Profile, especially the remote read percentage and the time breakdown between compilation, queuing, and execution. For programmatic access to query operator statistics, see GET_QUERY_OPERATOR_STATS.
Problem 1: A single query is taking too long¶
Likely causes:
- The query needs more compute resources than the warehouse provides. Examples include predicates with regular expressions or CASE expressions, and high-memory operations like COUNT(DISTINCT ...).
- The warehouse is too small to hold a useful working set in the local data cache, so the query reads heavily from remote storage. You can see this in Query Profile as a high remote read percentage.
- The query scans more data than necessary because the ORDER BY, LIMIT, or WHERE clauses aren’t aligned with the table’s clustering keys, so micro-partition pruning isn’t effective.
What to try:
- Increase the warehouse size. Start with the recommended size for your working set (see the sizing table above) and keep increasing until single-query latency is acceptable.
- Select only the columns you need, and add selective predicates in the WHERE clause so the query can effectively prune micro-partitions.
- For ORDER BY + LIMIT queries (for example, “most recent N rows”), make sure the clustering key of the interactive table matches the column used in ORDER BY. A table clustered by its timestamp column prunes efficiently for “most recent” queries; one that isn’t clustered by the sort column might read the full table. For help choosing clustering keys, see Cortex Code clustering key recommendation.
Problem 2: Queries are suddenly taking longer (high tail latency, high P95 latency)¶
In Query Profile, the slow queries show an elevated remote read percentage compared to earlier fast queries in the same session.
Likely causes:
- The warehouse was suspended between bursts and resumed for this burst. Cache warming hasn’t finished when the first queries arrive.
- The new burst touches different micro-partitions of the table than the prior workload, so the first queries read from remote storage until the new micro-partitions are cached.
- The total size of all interactive tables attached to the warehouse far exceeds the data cache capacity, so different query patterns keep evicting each other’s cached data. In this case, you see chronically high remote reads across all queries, not just the first few in a burst.
- The first one or two queries immediately after a resume can show uneven per-worker latency as different nodes finish warming at slightly different times. This is typically self-resolving after a few queries.
What to try:
- If chronically high remote reads persist across all queries (not only the first few in a burst), your attached table footprint probably exceeds the data cache capacity of the warehouse. Detach unused interactive tables with ALTER WAREHOUSE ... DROP TABLES, or use a larger warehouse size.
- For benchmarks, wait for cache warming to finish before measuring latency. Cache warm-up speed depends on warehouse size and table size: bigger tables take longer; bigger warehouses warm faster.
Problem 3: The first query of a new shape is slow, later similar queries are fast¶
In Query Profile, the slow first query spends most of its time in compilation, and later queries of the same shape compile quickly.
Likely cause:
The query metadata and compilation caches are cold for this query pattern.
What to try:
Use bind variables instead of embedding literal values directly in SQL text. Queries that differ only in their literal values (for example, different filter parameters from a dashboard) share the same query shape, so they all benefit from a single warm compilation cache entry.
Remember that steady production query volume keeps these caches warm. Latency measured at very low throughput does not reflect what you’ll see at realistic load.
Problem 4: Queries are queuing or you’re not achieving the expected concurrency¶
In Query Profile, queued queries spend a significant portion of their time queuing before execution starts.
Likely causes:
The peak number of concurrent queries exceeds the warehouse’s concurrency capacity.
What to try:
- Scale out by setting MIN_CLUSTER_COUNT and MAX_CLUSTER_COUNT to run a multi-cluster interactive warehouse. If MAX_CLUSTER_COUNT is greater than MIN_CLUSTER_COUNT, Snowflake adds clusters automatically as load increases.
- If your workload has a predictable ramp-up (for example, at the start of a business day), use task-based multi-cluster sizing to scale out the warehouse.
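A sketch of the multi-cluster settings (the cluster counts shown are illustrative):

```sql
ALTER WAREHOUSE interactive_demo SET
  MIN_CLUSTER_COUNT = 2
  MAX_CLUSTER_COUNT = 6;
```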
Using search optimization for point lookups¶
We recommend adding search optimization when you
perform point lookup queries on your interactive table. Point lookups are queries that filter on a
single column to retrieve one or a few rows of data. A good example is WHERE some_id =
some_UUID.
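A sketch of enabling search optimization for equality lookups, using the standard search optimization syntax (the column name some_id is illustrative; verify that the same syntax applies to interactive tables):

```sql
ALTER TABLE orders ADD SEARCH OPTIMIZATION ON EQUALITY(some_id);
```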
Materialized view support for interactive tables¶
You can create materialized views on interactive tables. An interactive materialized view precomputes and stores the results of a query on an interactive table, which can further improve query performance for common aggregation patterns.
To create an interactive materialized view, use the INTERACTIVE keyword in the CREATE MATERIALIZED VIEW statement:
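A sketch (the view, table, and column names are illustrative; verify the placement of the INTERACTIVE keyword in the SQL reference):

```sql
CREATE OR REPLACE INTERACTIVE MATERIALIZED VIEW daily_sales_mv AS
  SELECT sale_date, SUM(amount) AS total_amount
  FROM sales
  GROUP BY sale_date;
```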
After you create the interactive materialized view, you must add both the materialized view and the underlying base table to your interactive warehouse:
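For example, continuing with illustrative names:

```sql
ALTER WAREHOUSE interactive_demo ADD TABLES (sales, daily_sales_mv);
```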
Best practices for interactive materialized views¶
Follow these guidelines when you create materialized views using interactive tables:
Interactive materialized views work just like regular materialized views, but they must be based on interactive tables.
An interactive materialized view and the source interactive table it’s based on must be added to the same interactive warehouse.
Joins aren’t supported in materialized views, whether interactive or standard. Structure your queries to aggregate or filter from a single base table.
You can’t use an interactive table or an interactive materialized view as the source for a dynamic table.
When you’re considering candidates for interactive materialized views, choose aggregation queries that are frequently run and expensive to compute.
Region availability¶
Interactive tables and interactive warehouses are available in the following Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure regions. For more information about Snowflake regions, see Supported cloud regions.
- us-east-1 - AWS US East (N. Virginia)
- us-west-2 - AWS US West (Oregon)
- us-east-2 - AWS US East (Ohio)
- ca-central-1 - AWS Canada (Central)
- ap-northeast-1 - AWS Asia Pacific (Tokyo)
- ap-southeast-2 - AWS Asia Pacific (Sydney)
- eu-central-1 - AWS EU (Frankfurt)
- eu-west-1 - AWS EU (Ireland)
- eu-west-2 - AWS Europe (London)
- us-central1 - GCP US Central1 (Iowa)
- us-east4 - GCP US East4 (N. Virginia)
- europe-west2 - GCP Europe West2 (London)
- europe-west3 - GCP Europe West3 (Frankfurt)
- europe-west4 - GCP Europe West4 (Netherlands)
- australia-southeast2 - GCP Australia Southeast2 (Melbourne)
- Azure: all Azure regions.
Task based multi-cluster sizing¶
You can adjust the MIN_CLUSTER_COUNT parameter on a schedule by using tasks.
We recommend setting MAX_CLUSTER_COUNT so that the auto-scaling policy can accommodate the peak concurrency of your workload.
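A sketch of a scheduled scale-up task (the task name, task warehouse, schedule, and cluster count are all illustrative):

```sql
-- Raise the cluster floor before the business day begins.
CREATE OR REPLACE TASK scale_up_interactive
  WAREHOUSE = admin_wh
  SCHEDULE = 'USING CRON 0 8 * * MON-FRI America/Los_Angeles'
AS
  ALTER WAREHOUSE interactive_demo SET MIN_CLUSTER_COUNT = 4;

ALTER TASK scale_up_interactive RESUME;
```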
Dropping an interactive warehouse¶
You can run the DROP WAREHOUSE command to remove an interactive warehouse entirely. Dropping an interactive warehouse removes the associations between that warehouse and any interactive tables. However, you can still use other interactive warehouses to query those same interactive tables.
Benchmarking best practices¶
When assessing the performance of interactive tables in a test environment, follow these best practices to avoid inconsistent or misleading results:
Turn off the query result cache to make the benchmark results consistent between multiple benchmark runs. You can turn off the query result cache at the account, user, and session level by setting the USE_CACHED_RESULT session parameter. That way, the queries only use the table data cache from the interactive warehouse. When you turn result caching on in your production environment, you can expect equal or better performance than in your benchmark testing.
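For example, at the session level:

```sql
ALTER SESSION SET USE_CACHED_RESULT = FALSE;
```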
Because an interactive warehouse takes some time to warm the table data cache, wait for a while after you create or resume an interactive warehouse before testing query performance. This simulates the typical production configuration, where the warehouse remains active for long periods. Snowflake applies optimizations to the cache warming process. Therefore, it’s more efficient to let Snowflake complete this process than to warm the cache yourself by running sample queries.
When comparing performance of interactive tables against standard Snowflake tables, don’t interleave the queries between standard and interactive tables. Instead, run the full benchmark on standard tables, then run the same tests on interactive tables.
When doing comparative benchmarks with other database systems, make sure that the clustering columns in your interactive tables match the WHERE clause predicates in your queries. For more information about choosing the best clustering columns, see Clustering Keys & Clustered Tables. In particular, don’t cluster on columns with high cardinality, such as unique IDs or timestamps.
If your queries are short and simple, you can achieve higher concurrency by setting the MAX_CONCURRENCY_LEVEL parameter to a higher value for your interactive warehouse.
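For example (the concurrency level shown is illustrative):

```sql
ALTER WAREHOUSE interactive_demo SET MAX_CONCURRENCY_LEVEL = 16;
```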
Interactive tables and storage lifecycle policies¶
You can use storage lifecycle policies to archive or expire specific table rows based on conditions that you define, such as data age or other criteria.
Currently, you can’t use storage lifecycle policies for interactive tables that use auto-refresh. You can use the TARGET_LAG parameter, or a storage lifecycle policy, but not both.
Disaster recovery and replication¶
When added to a replication group, interactive tables and warehouses are replicated to the target account. Interactive table replication behaves the same as standard table replication.
Interactive warehouse replication assumes that interactive warehouses are supported in the target region; Snowflake currently doesn’t validate this during replication. When you use interactive warehouses with replication, you must ensure that the target region supports interactive warehouses.
The interactive warehouse in the target account auto-resumes when it’s queried. However, due to cache warming requirements, the performance of the warehouse isn’t guaranteed. To ensure consistent performance, keep the warehouse running in the target region.
Cost and billing considerations¶
Interactive warehouses incur compute charges when active. The minimum billable period for an interactive warehouse is one hour; billing is at one-second granularity thereafter.
Note
If you resume an interactive warehouse that was suspended (whether manually or through auto-resume), that operation results in a new minimum billable period charge. That charge applies even if you were already being billed for that period because of other recent activity in the warehouse. Therefore, avoid suspending and resuming an interactive warehouse multiple times within a short period. The 24-hour minimum auto-suspend interval helps prevent excessive suspend/resume cycles.
Interactive tables incur standard storage costs. The price for storage of interactive tables is the same as for standard tables per TB. Interactive tables may be larger than equivalent standard tables, due to differences in data encoding and additional indexes. The larger data size and indexes are factored into the storage volume.
For more information about cost and billing for interactive warehouses and interactive tables, see the Snowflake service consumption table.
Limitations of interactive warehouses and interactive tables¶
Limitations of interactive warehouses¶
Snowflake interactive warehouses are optimized for short-running queries. The query timeout for SELECT commands defaults to five seconds. After five seconds, the query is canceled. You can reduce the query timeout value, but you can’t increase it. This is by design, to prevent long-running queries from starving the interactive warehouse of resources and degrading the performance of low-latency queries. Metadata commands, such as SHOW, as well as INSERT OVERWRITE, aren’t subject to the five-second timeout interval.
If a query consistently times out, that’s a signal that it might not be suitable for use with interactive warehouses. Commonly, applying some of the performance tuning techniques can help reduce query latency. See Interactive table performance considerations for more details.
Interactive warehouses support auto-suspend and auto-resume. The minimum auto-suspend interval is 24 hours (86400 seconds). Manually suspending and resuming an interactive warehouse is supported, with a minimum one-hour billing period. Expect significant query latency when you resume an interactive warehouse, because the data cache needs to warm up again. For more information, see Resuming and suspending an interactive warehouse.
You can’t query standard Snowflake tables from an interactive warehouse. To query both standard tables and interactive tables in the same session, run USE WAREHOUSE to switch to the appropriate warehouse type.
You can add up to a maximum of 10 interactive tables to an interactive warehouse. This is a temporary limitation to prevent overloading of the system. This limit will be increased in the future. If you need to add more than 10 interactive tables, please contact Snowflake Support.
You can’t run CALL commands to call stored procedures in an interactive warehouse.
You can’t use the ->> pipe operator. That operator uses stored procedures behind the scenes.
Limitations of interactive tables¶
Interactive tables don’t support the following features:
Data manipulation language (DML) commands such as UPDATE and DELETE. The recommended workflow is to use auto-refresh interactive tables (i.e. by setting TARGET_LAG) and apply DML to the source table instead. The auto-refresh mechanism is more efficient and cost-effective than using DML on the interactive table. The only DML that you can perform is INSERT OVERWRITE.
Fail-safe. This data recovery mechanism isn’t available for interactive tables. However, you can still use Time Travel with interactive tables.
Query insights. To keep query execution latency low, insights currently aren’t collected or available for queries that run against interactive tables through SDKs. Queries run from Snowsight still produce insights, but at the cost of higher query latency.
You can’t perform the following operations:
Use an interactive table as the source for a standard (non-interactive) materialized view. To create a materialized view on an interactive table, use the INTERACTIVE keyword. See Materialized view support for interactive tables.
Modify properties of an interactive table by using ALTER TABLE clauses such as ADD COLUMN or REMOVE COLUMN. ALTER TABLE operations that you can perform include:
Renaming the table.
Modifying columns to set or unset comments.
Setting or unsetting masking policies on columns.
Adding or unsetting a masking policy, join policy, aggregation policy, or row access policy on the table.
Adding a storage lifecycle policy to the table, or dropping a storage lifecycle policy from the table.
Use streams with an interactive table.
Create a dynamic table with an interactive table as a base table.
Use the RESAMPLE clause for queries on an interactive table.
Set the Time Travel retention period using CREATE INTERACTIVE TABLE or ALTER TABLE. Interactive tables inherit the DATA_RETENTION_TIME_IN_DAYS value from their parent schema, database, or account.
Affected SQL statements¶
This feature introduces changes to the following Snowflake SQL commands:
ALTER WAREHOUSE: new ADD TABLES and DROP TABLES clauses.
CREATE INTERACTIVE TABLE: creates interactive tables with required CLUSTER BY clause.
CREATE INTERACTIVE WAREHOUSE: creates interactive warehouses with an optional TABLES clause.
CREATE MATERIALIZED VIEW: new optional INTERACTIVE keyword for creating materialized views on interactive tables.