2022_04 Bundle¶
This topic describes the following behavior changes (if any) for the month:
- Features that were deprecated.
- Bundled changes that were enabled.
- Other, non-bundled changes that were implemented.
If you have any questions about these changes, please contact Snowflake Support.
For details about the new features, enhancements, and fixes introduced this month, see July 2022.
Important
Unless otherwise noted, these changes are in the 2022_04 bundle, which was enabled by default in the 6.25 release.
SQL Changes — General¶
Cloning a Permanent Schema With Permanent Child Tables to Create a Transient Schema¶
When a transient schema is created by cloning a permanent schema, the child objects in the schema are also cloned.
The type of the child table objects in the cloned transient schema has changed as follows:
- Previously:
All permanent tables in the source schema were cloned as permanent tables in the target schema, and all transient tables were cloned as transient tables.
- Currently:
All permanent tables in the source schema are cloned as transient tables in the target schema.
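For example, a minimal sketch of this behavior (the mydb, perm_schema, and trans_clone names are hypothetical):
-- Clone a permanent schema into a new transient schema.
CREATE TRANSIENT SCHEMA mydb.trans_clone CLONE mydb.perm_schema;

-- Inspect the cloned tables. With this change, tables that were permanent in
-- perm_schema are reported as transient in trans_clone (see the kind column).
SHOW TABLES IN SCHEMA mydb.trans_clone;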
SQL Changes — Commands & Functions¶
SHOW EXTERNAL TABLES Command: New Columns in Output¶
The following two new columns have been added to the output of the SHOW EXTERNAL TABLES command:
- TABLE_FORMAT
- LAST_REFRESH_DETAILS
To help limit the impact of this change, the columns were added as the last columns in the output.
The columns were added to support future functionality.
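Queries that read the SHOW EXTERNAL TABLES output through RESULT_SCAN can reference columns by name so that appended columns do not affect them. A minimal sketch, assuming a hypothetical schema mydb.myschema and the lowercase column identifiers used in SHOW output:
SHOW EXTERNAL TABLES IN SCHEMA mydb.myschema;

-- Select columns from the previous SHOW output by name rather than by position.
SELECT "name", "table_format", "last_refresh_details"
  FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));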
SHOW SCHEMAS Command: Change to RETENTION_TIME Output for Schemas¶
The value of the RETENTION_TIME column in the output of the SHOW SCHEMAS command for schemas in a database with a DATA_RETENTION_TIME_IN_DAYS setting of 0 has changed as follows:
- Previously:
The RETENTION_TIME value was an empty string.
- Currently:
The RETENTION_TIME value is 0.
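For example, a minimal sketch that reproduces the new output (mydb is hypothetical):
-- Disable Time Travel retention at the database level.
ALTER DATABASE mydb SET DATA_RETENTION_TIME_IN_DAYS = 0;

-- The retention_time column for the schemas in mydb now shows 0
-- instead of an empty string.
SHOW SCHEMAS IN DATABASE mydb;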
SHOW WAREHOUSES Command: New Columns in Output¶
The following new columns have been added to the output of the SHOW WAREHOUSES command for accounts that have the Query Acceleration Service feature enabled:
Column Name | Description
---|---
enable_query_acceleration | Whether the query acceleration service is enabled for the warehouse.
query_acceleration_max_scale_factor | Maximum scale factor for the query acceleration service.
The new columns were added between the comment and resource_monitors columns. Queries that depend on the output of the SHOW WAREHOUSES command should use the column name rather than a hard-coded index for the column output.
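For example, a minimal sketch of referencing the SHOW WAREHOUSES output by column name through RESULT_SCAN (the selected columns are only illustrative):
SHOW WAREHOUSES;

-- Reference columns by name so that columns inserted into the output
-- do not shift the positions that the query depends on.
SELECT "name", "state", "size"
  FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));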
GET_DDL Function: Changes to the Output for Functions and Procedures¶
When you call the GET_DDL function to get the DDL statement that created a UDF, external function, or stored procedure, the name of the function or procedure is enclosed in double quotes, even if the name follows the rules for unquoted object identifiers.
This output has changed in cases where you return the fully qualified name of the function or procedure (i.e. when you call GET_DDL with TRUE as the third argument):
- Previously:
GET_DDL returned the name of the function or procedure in quotes:
+-------------------------------------------------------+
| GET_DDL('FUNCTION', 'MYFUNC(FLOAT)', TRUE)            |
|-------------------------------------------------------|
| CREATE OR REPLACE FUNCTION MYDB.MYSCHEMA."MYFUNC" ... |
+-------------------------------------------------------+
- Currently:
GET_DDL returns the name of the function or procedure without quotes:
+-------------------------------------------------------+
| GET_DDL('FUNCTION', 'MYFUNC(FLOAT)', TRUE)            |
|-------------------------------------------------------|
| CREATE OR REPLACE FUNCTION MYDB.MYSCHEMA.MYFUNC ...   |
+-------------------------------------------------------+
Note that this only affects cases in which you return the fully-qualified name of the function or procedure. If you omit the third argument to GET_DDL (or specify FALSE), GET_DDL returns the name of the function or procedure in quotes:
+-------------------------------------------------------+
| GET_DDL('FUNCTION', 'MYFUNC(FLOAT)')                  |
|-------------------------------------------------------|
| CREATE OR REPLACE FUNCTION "MYFUNC" ...               |
+-------------------------------------------------------+
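For example, a minimal sketch that shows both forms of the call (mydb.myschema.myfunc is hypothetical):
-- Create a simple SQL UDF with an unquoted name.
CREATE OR REPLACE FUNCTION mydb.myschema.myfunc(f FLOAT)
  RETURNS FLOAT
  AS 'f * 2';

-- Third argument TRUE: the fully qualified name is now returned without quotes.
SELECT GET_DDL('FUNCTION', 'mydb.myschema.myfunc(FLOAT)', TRUE);

-- Third argument omitted: the name is still returned in quotes.
SELECT GET_DDL('FUNCTION', 'mydb.myschema.myfunc(FLOAT)');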
SQL Changes — Usage Views & Information Schema Views/Table Functions¶
POLICY_REFERENCES View (Account Usage): New Columns¶
To support tag-based masking policies, the POLICY_REFERENCES view (in the ACCOUNT_USAGE schema in the shared SNOWFLAKE database) now includes the following columns:
- tag_name
- tag_database
- tag_schema
- policy_status
With these new columns, note the following:
- The columns, their data types, and descriptions match the same columns in the Information Schema POLICY_REFERENCES table function.
- For existing rows in the view, Snowflake returns NULL for the new columns.
- This update only adds new columns to the view. You can use the tag-based masking policy feature without enabling this behavior change provided that your Snowflake account is Enterprise Edition (or higher).
To help limit the impact of this change, these new columns were added as the last columns in the output.
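For example, a minimal sketch of a query that uses the new columns to list masking policies applied through tags (the filter on tag_name is only illustrative):
-- List policy assignments that were applied via a tag.
SELECT policy_name,
       ref_entity_name,
       ref_column_name,
       tag_database,
       tag_schema,
       tag_name,
       policy_status
  FROM snowflake.account_usage.policy_references
 WHERE tag_name IS NOT NULL;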
QUERY_HISTORY View (Account Usage): New Columns¶
The following new columns have been added to the Account Usage QUERY_HISTORY view:
Column Name | Data Type | Description
---|---|---
QUERY_ACCELERATION_BYTES_SCANNED | NUMBER | Number of bytes scanned by the query acceleration service. Default value is 0 if the query was not accelerated.
QUERY_ACCELERATION_PARTITIONS_SCANNED | NUMBER | Number of partitions scanned by the query acceleration service. Default value is 0 if the query was not accelerated.
QUERY_ACCELERATION_UPPER_LIMIT_SCALE_FACTOR | NUMBER | Upper limit scale factor that a query would have benefited from. Default value is 0 if the query was not accelerated.
To help limit the impact of this change, the new columns were added as the last columns in the output.
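For example, a minimal sketch of a query that uses the new columns shown above to find recent queries that were accelerated:
-- Find queries that scanned data through the query acceleration service.
SELECT query_id,
       warehouse_name,
       query_acceleration_bytes_scanned,
       query_acceleration_partitions_scanned,
       query_acceleration_upper_limit_scale_factor
  FROM snowflake.account_usage.query_history
 WHERE query_acceleration_bytes_scanned > 0
 ORDER BY start_time DESC;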
Data Pipeline Changes¶
ALTER STREAM Command: Setting APPEND_ONLY or INSERT_ONLY Parameter No Longer Allowed¶
The stream type cannot be changed after a stream is created. The type is set as follows when the stream is created:
- Set APPEND_ONLY = TRUE to create an append-only stream.
- Set INSERT_ONLY = TRUE to create an insert-only stream.
- Omit both parameters to create a standard (delta) stream.
Attempting to change the type of an existing stream using the ALTER STREAM command now returns a user error.
To change the type of an existing stream, you must recreate the stream (using CREATE OR REPLACE STREAM) and specify the desired stream type.
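For example, a minimal sketch of converting a standard stream into an append-only stream (mystream and mytable are hypothetical):
-- Returns a user error after this change:
-- ALTER STREAM mystream SET APPEND_ONLY = TRUE;

-- Recreate the stream with the desired type instead.
CREATE OR REPLACE STREAM mystream ON TABLE mytable APPEND_ONLY = TRUE;

Note that recreating a stream resets its offset, so any unconsumed change records tracked by the old stream are not carried over.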
Tasks: Error Message Changes¶
The user error messages returned when attempting invalid SQL actions related to serverless tasks (i.e. tasks that run using Snowflake-managed compute resources) have changed as follows:
Use Case 1: Using a role that is not granted the global EXECUTE MANAGED TASK privilege, execute a CREATE TASK statement and omit the WAREHOUSE parameter.
- Previous Error Text:
Missing option(s): [WAREHOUSE]
- Current Error Text:
WAREHOUSE not specified and missing serverless task privilege to create task {task name}. To create it as a user-managed task, specify a WAREHOUSE. To create it as a serverless task, execute the CREATE TASK command with a role that has been granted the 'EXECUTE MANAGED TASK' account-level privilege.
Use Case 2: Using a role that is not granted the global EXECUTE MANAGED TASK privilege, clone a serverless task (or a database or schema that contains one or more serverless tasks) using the appropriate CREATE … CLONE command.
- Previous Error Text:
Task {task name} requires a warehouse.
- Current Error Text:
WAREHOUSE not specified and missing serverless task privilege to create task {task name}. To create it as a user-managed task, specify a WAREHOUSE. To create it as a serverless task, execute the CLONE command with a role that has been granted the 'EXECUTE MANAGED TASK' account-level privilege.
Use Case 3: Using a role that is not granted the global EXECUTE MANAGED TASK privilege, unset the WAREHOUSE parameter for an existing task that runs using customer-managed compute resources (using an ALTER TASK … UNSET WAREHOUSE statement).
- Previous Error Text:
Task {task name} requires a warehouse.
- Current Error Text:
Cannot UNSET WAREHOUSE on task {task_name} because its owner role has not been granted the 'EXECUTE MANAGED TASK' account-level privilege. Grant this privilege to the role or use GRANT OWNERSHIP to change the task's owner role to one with this privilege.
Use Case 4:
1. Using a role that is granted the global EXECUTE MANAGED TASK privilege (along with the other minimum privileges), create and resume a serverless task.
2. The EXECUTE MANAGED TASK privilege is revoked from the owner role (the role that has the OWNERSHIP privilege on the task).
3. The task is not paused and starts its next scheduled run, or a user with the owner role executes the EXECUTE TASK command to attempt to start a task run.
- Previous Error Text:
Cannot execute task, USAGE privilege on the task's warehouse must be granted to the owner role
- Current Error Text:
Cannot execute task, EXECUTE MANAGED TASK privilege must be granted to the owner role
These changes are intended to help you better understand and resolve issues related to serverless tasks.
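For example, a minimal sketch of granting the required privilege and creating a serverless task by omitting the WAREHOUSE parameter (my_role and mydb.myschema.mytask are hypothetical):
-- Grant the account-level privilege required to create serverless tasks.
GRANT EXECUTE MANAGED TASK ON ACCOUNT TO ROLE my_role;

-- Creating a task without a WAREHOUSE makes it a serverless task that runs
-- on Snowflake-managed compute resources.
CREATE TASK mydb.myschema.mytask
  SCHEDULE = '5 MINUTE'
  AS
    SELECT CURRENT_TIMESTAMP();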
Data Privacy Changes¶
Classification: Updates to Data Classification Model and Revised Output¶
Data classification is now generally available (GA) in all Enterprise Edition (or higher) accounts on AWS and Azure.
For the GA of the feature, the data classification model has been updated to generate improved prediction and data pattern results. In addition, the data classification process now includes output for every table column specified in the input, including:
- Columns with data types that previously could not be classified.
- Columns that contain only NULL values.
These improvements were introduced through the behavior change process because they likely return better, but potentially different, results when re-classifying data that was classified using the previous data classification model. During the opt-out phase for the 2022_04 bundle, you can enable/disable the bundle to test the classification improvements while also minimizing their impact on your production accounts until you are familiar with the new results.
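For example, a minimal sketch of checking, enabling, and disabling the bundle during the opt-out phase (run with the ACCOUNTADMIN role):
-- Check whether the 2022_04 bundle is currently enabled in the account.
SELECT SYSTEM$BEHAVIOR_CHANGE_BUNDLE_STATUS('2022_04');

-- Enable the bundle to evaluate the updated classification results,
-- and disable it again if needed.
SELECT SYSTEM$ENABLE_BEHAVIOR_CHANGE_BUNDLE('2022_04');
SELECT SYSTEM$DISABLE_BEHAVIOR_CHANGE_BUNDLE('2022_04');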