9.21 Release notes: Jul 29, 2025-Aug 01, 2025¶
Attention: This release has been completed. For differences between the in-advance and final versions of these release notes, see the Release notes change log.
Security updates¶
GENERATE_SYNTHETIC_DATA: Consistency secret now optional in most cases¶
Previously, when you called GENERATE_SYNTHETIC_DATA with a replace column property, you needed to provide a SECRET for consistency_secret. With this change, consistency_secret is now optional. However, if you run GENERATE_SYNTHETIC_DATA in an owner’s rights stored procedure, you must still provide a value for consistency_secret.
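For illustration, a call that omits consistency_secret might look like the following minimal sketch; the table names are hypothetical, and the configuration shown is limited to the basic datasets option rather than taken from this note:
CALL SNOWFLAKE.DATA_PRIVACY.GENERATE_SYNTHETIC_DATA({
  'datasets': [
    {
      'input_table': 'my_db.my_schema.customers',            -- hypothetical source table
      'output_table': 'my_db.my_schema.customers_synthetic'  -- hypothetical output table
    }
  ]
  -- consistency_secret omitted: optional here, but still required when the call
  -- runs inside an owner's rights stored procedure
});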
SQL updates¶
Account Usage: TABLE_QUERY_PRUNING_HISTORY and COLUMN_QUERY_PRUNING_HISTORY views (General availability)¶
You can monitor data access patterns at the table and column level by querying two new Account Usage views:
- TABLE_QUERY_PRUNING_HISTORY provides a breakdown of query execution time and pruning by table, query hash, and warehouse.
- COLUMN_QUERY_PRUNING_HISTORY returns an equivalent pruning summary that is aggregated by column name.
The SEARCH_IP function supports searching for IPv6 addresses¶
You can use the SEARCH_IP function to search for IPv6 addresses in data. Previously, the function only supported searching for IPv4 addresses.
For more information, see SEARCH_IP.
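For example, a filter like the following could return rows whose text contains an address within a given IPv6 range; the table and column names here are hypothetical:
SELECT *
  FROM server_logs                                -- hypothetical log table
  WHERE SEARCH_IP(log_message, '2001:db8::/32');  -- IPv6 CIDR range to search for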
Generating YAML for a semantic view and creating a semantic view from YAML¶
To generate the YAML specification for a semantic view, you can call the SYSTEM$READ_YAML_FROM_SEMANTIC_VIEW function. For example:
SELECT SYSTEM$READ_YAML_FROM_SEMANTIC_VIEW(
  'my_db.my_schema.tpch_rev_analysis'
);
You can also create a semantic view from a YAML specification by calling the SYSTEM$CREATE_SEMANTIC_VIEW_FROM_YAML stored procedure. For example:
CALL SYSTEM$CREATE_SEMANTIC_VIEW_FROM_YAML(
  'my_db.my_schema',
  $$
  name: TPCH_REV_ANALYSIS
  description: Semantic view for revenue analysis
  ...
  $$
);
For more information, see SYSTEM$READ_YAML_FROM_SEMANTIC_VIEW and SYSTEM$CREATE_SEMANTIC_VIEW_FROM_YAML.
Data pipeline updates¶
Snowpark Connect for Spark and Snowpark Submit (Preview)¶
With Snowpark Connect for Spark, you can run Spark DataFrame, SQL, and UDF APIs directly on the Snowflake platform using the same Spark code you use today. You can develop using client tools such as Snowflake Notebooks, Jupyter Notebooks, and others. With Snowpark Submit, you can run Spark workloads in a non-interactive, asynchronous way directly on Snowflake’s infrastructure while you use familiar Spark semantics.
Snowpark Connect for Spark and Snowpark Submit are in Preview.
For more information, see Run Spark workloads on Snowflake with Snowpark Connect for Spark.
Release notes change log¶
| Announcement | Update | Date |
|---|---|---|
| Release notes | Initial publication (preview) | Jul 25, 2025 |
| Creating semantic views from YAML and reading YAML for semantic views | Added to SQL updates | Jul 29, 2025 |
| Tracing SQL statements run from handler code (General availability) | Added to Extensibility updates | Aug 01, 2025 |
| Tracing SQL statements run from handler code (General availability) | Moved to 9.22 release notes | Aug 06, 2025 |