2022_07 Bundle

This topic describes the following behavior changes (if any) for the month:

  • Features that were deprecated.

  • Bundled changes that were enabled.

  • Other, non-bundled changes that were implemented.

If you have any questions about these changes, please contact Snowflake Support.

For details about the new features, enhancements, and fixes introduced this month, see October 2022.

Important

Unless otherwise noted, these changes are in the 2022_07 bundle, which was enabled by default in the 6.35 release.

SQL Changes — Commands & Functions

Databases & Schemas: Dropping or Replacing Not Allowed if It Results in Dangling References for Policies and Tags

The behavior of the DROP and CREATE OR REPLACE operations on a database or schema that contains a masking policy and tag protecting a column in a table in a different schema or database has changed as follows:

Previously:

When the tag and policy were in one schema and the protected table column was in a different schema/database, Snowflake allowed the DROP and REPLACE operations on the schema/database that contains the tag and masking policy, leaving dangling references behind.

The behavior applied to the following four commands:

  • DROP DATABASE

  • CREATE OR REPLACE DATABASE

  • DROP SCHEMA

  • CREATE OR REPLACE SCHEMA

Currently:

If the same scenario occurs now, Snowflake does not allow the DROP and REPLACE operations on the schema/database that contains a tag and masking policy when the protected column in the table exists in a different schema/database.

As a result, the behavior of the four commands listed above has changed.

For example:

  • A tag named t1 and masking policy named p1 exist in the schema named governance.tags.

  • The p1 masking policy is assigned to the t1 tag (i.e. tag-based masking policy).

  • The t1 tag is assigned to a table named finance.accounting.customers.

Previously, Snowflake allowed the DROP SCHEMA operation on the governance.tags schema and the DROP DATABASE operation on the governance database while the t1 tag was assigned to the finance.accounting.customers table.

Now, Snowflake does not allow either operation to be performed while the t1 tag is assigned to the table. Depending on the operation attempted, Snowflake returns one of the following error messages:

  • DROP DATABASE and CREATE OR REPLACE DATABASE:

    Cannot drop or replace database because: Tag governance.tags.t1 used by schema finance.accounting in another database

  • DROP SCHEMA and CREATE OR REPLACE SCHEMA:

    Cannot drop or replace schema because: Tag governance.tags.t1 used by another schema finance.accounting
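
The example scenario can be reproduced with statements along the following lines. This is a sketch: the object names come from the example above, while the policy body and the tag value ('pii') are illustrative.

```sql
-- Tag and masking policy live in the governance.tags schema
CREATE TAG governance.tags.t1;
CREATE MASKING POLICY governance.tags.p1 AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() = 'ANALYST' THEN val ELSE '***MASKED***' END;

-- Tag-based masking: assign the policy to the tag
ALTER TAG governance.tags.t1 SET MASKING POLICY governance.tags.p1;

-- Assign the tag to a table in a different database
ALTER TABLE finance.accounting.customers SET TAG governance.tags.t1 = 'pii';

-- With the 2022_07 bundle enabled, both of these now fail with the
-- "Cannot drop or replace ..." errors shown above:
DROP SCHEMA governance.tags;
DROP DATABASE governance;
```

To drop the schema or database, first unset the tag from the table (ALTER TABLE finance.accounting.customers UNSET TAG governance.tags.t1) so that no dangling references remain.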

CREATE MATERIALIZED VIEW Command: Time Travel Clauses No Longer Allowed

One limitation of materialized views is that Time Travel is not supported. However, when running the CREATE MATERIALIZED VIEW command, it was previously possible to specify a Time Travel clause (e.g. AT) for the base table of the view.

Specifying a Time Travel clause in CREATE MATERIALIZED VIEW now results in an error.

Previously:

Specifying a Time Travel clause in CREATE MATERIALIZED VIEW did not produce an error.

For example, the following statements executed successfully without any errors:

  • Example 1:

    create or replace materialized view mv as select * from basetbl at(offset => -2);
    
  • Example 2:

    create or replace materialized view mv as select * from basetbl at(timestamp => $ts);
    
  • Example 3:

    create or replace materialized view mv as select * from basetbl at(statement => $uuid_dml);
    
Currently:

Specifying a Time Travel clause in CREATE MATERIALIZED VIEW now produces the following error:

002274 (42601): SQL compilation error: Invalid materialized view: Time travel on base table in line X at position Y.
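
If a statement like the earlier examples now fails with this error, the fix is to remove the Time Travel clause, as in this sketch:

```sql
-- Works: no Time Travel clause on the base table.
-- A materialized view is maintained to reflect the current state of
-- basetbl, so an AT(...) clause has no meaning here.
create or replace materialized view mv as select * from basetbl;
```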

SQL Changes — Usage Views & Information Schema Views/Table Functions

GRANTS_TO_ROLES View (Account Usage): Changes to the View

The following changes have been introduced to the ACCOUNT_USAGE.GRANTS_TO_ROLES view.

Previously:

The output of the view included privilege grants to roles on temporary tables.

Currently:

The output of the view does not include privilege grants to roles on temporary tables.
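
To inspect what the view still reports, you can query it directly. The following is a sketch; PRIVILEGE, GRANTED_ON, NAME, GRANTEE_NAME, and DELETED_ON are columns of this view.

```sql
-- Active privilege grants on tables, as reported by the view.
-- After this change, rows for temporary tables no longer appear.
SELECT privilege, granted_on, name, grantee_name
FROM snowflake.account_usage.grants_to_roles
WHERE granted_on = 'TABLE'
  AND deleted_on IS NULL;
```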

Data Lake Changes

CREATE EXTERNAL TABLE Command: User-specified Partitions and Automatically Refreshed Metadata

Defining the partitions in an external table as user-specified means that you choose to add and remove partitions selectively, rather than having partitions added automatically for all new files in an external storage location that match an expression. This partition type is specified by including the PARTITION_TYPE = USER_SPECIFIED parameter when you create an external table. User-specified partitioning does not support automatic refreshing of external table metadata.

When a CREATE EXTERNAL TABLE statement is executed with both the PARTITION_TYPE = USER_SPECIFIED and AUTO_REFRESH = TRUE parameters set, the behavior has changed as follows:

Previously:

The CREATE EXTERNAL TABLE statement was successful; however, any event notifications received from cloud storage for the external table (e.g. “new object” messages) produced an error.

Currently:

The CREATE EXTERNAL TABLE statement returns a user error.
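
The now-rejected combination looks like the following sketch, in which the stage name, partition column definition, and file format are illustrative:

```sql
-- Fails with a user error under the 2022_07 bundle:
-- PARTITION_TYPE = USER_SPECIFIED cannot be combined with AUTO_REFRESH = TRUE.
CREATE EXTERNAL TABLE sales_ext (
    sale_date DATE AS CAST(SPLIT_PART(METADATA$FILENAME, '/', 3) AS DATE)
  )
  PARTITION BY (sale_date)
  PARTITION_TYPE = USER_SPECIFIED
  LOCATION = @my_stage/sales/
  FILE_FORMAT = (TYPE = PARQUET)
  AUTO_REFRESH = TRUE;
```

To keep user-specified partitioning, omit AUTO_REFRESH (or set it to FALSE) and manage partitions manually with ALTER EXTERNAL TABLE … ADD PARTITION.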

GET_DDL Function: Returns TABLE_FORMAT Parameter for External Tables on Delta Lake

When the input for the GET_DDL function is an external table that references a Delta Lake table, the CREATE EXTERNAL TABLE statement returned by the function has changed as follows:

Previously:

The statement omitted the TABLE_FORMAT = DELTA parameter that identifies the external table as referencing a Delta Lake table.

Currently:

The statement includes the TABLE_FORMAT = DELTA parameter.
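
For example (a sketch; the table name is illustrative and the returned DDL is abbreviated):

```sql
SELECT GET_DDL('TABLE', 'mydb.public.my_delta_ext');

-- The returned CREATE EXTERNAL TABLE statement now includes the line:
--   TABLE_FORMAT = DELTA
```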

Extensibility Changes

Snowpark for Python: Returns Errors Earlier When Adding Invalid Packages

When adding a Python package to a Snowpark Python session, the user gets an error message if the package or its specified version is not supported by Snowflake.

This error is now raised earlier:

Previously:

The error was received only when the user tried to register a UDF or stored procedure.

Currently:

The error occurs earlier, when add_packages is used to add the Python package.

For example, calling session.add_packages('numpy==21.21.21') raises a ValueError because the package version is not valid.

Snowpark for Scala and Java: Change to Types of Members in DeleteResult, MergeResult, and UpdateResult

In the Snowpark Scala and Java APIs, the DeleteResult, MergeResult, and UpdateResult classes provide value members and getter methods that return the number of rows that have been inserted, updated, and deleted:

  • In the Snowpark Scala API, these value members are named:

    • rowsInserted

    • multiJoinedRowsUpdated

    • rowsUpdated

    • rowsDeleted

  • In the Snowpark Java API, these getter methods are named:

    • getRowsInserted

    • getMultiJoinedRowsUpdated

    • getRowsUpdated

    • getRowsDeleted

In the 1.7.0 release of the Snowpark Library for Scala and Java, the types of these value members and the return types of these getter methods have changed:

Prior to 1.7.0:
  • In the Scala API, the type is Int.

  • In the Java API, the type is int.

Starting with 1.7.0:
  • In the Scala API, the type is Long.

  • In the Java API, the type is long.