January 2021 - Other

The following deprecated features and behavior changes were introduced this month. If you have any questions about these changes, please contact Snowflake Support.

For more details about the new features, enhancements, and fixes introduced this month, see January 2021.

SQL Changes: General

Materialized Views: Definitions Require Consistent Use of Qualifiers

With this release, when you create a materialized view that has multiple references to the base table, Snowflake will no longer allow you to use a combination of qualified and unqualified identifiers to refer to the base table.

Previously

CREATE MATERIALIZED VIEW allowed you to use a combination of qualified and unqualified references to the base table.

For example, you could use an unqualified identifier (base_table) and a fully-qualified identifier (db.my_schema.base_table) in the definition of the materialized view:

create materialized view mv as
select * from base_table
where ...
union all
select * from db.my_schema.base_table
where ...

As another example, you could use an identifier qualified by just the schema name (my_schema.base_table) and a fully-qualified identifier (db.my_schema.base_table) in the definition of the materialized view:

create materialized view mv as
select * from my_schema.base_table
where ...
union all
select * from db.my_schema.base_table
where ...

Currently

The SQL statements above result in errors that indicate that the references to the base table use a combination of qualified and unqualified names:

Failure during expansion of view 'MV': SQL compilation error: Invalid materialized view definition. Source table referenced
differently: with qualified and non-qualified names: base_table, db.my_schema.base_table

To create the view, you must use qualifiers consistently when referring to the base table in the view definition. For example, you can use unqualified identifiers for all of the references:

create materialized view mv as
select * from base_table
where ...
union all
select * from base_table
where ...

As an alternative, you can use identifiers qualified by the schema name (for example, if you want the view in a cloned schema to refer to the base table in the original schema):

create materialized view mv as
select * from my_schema.base_table
where ...
union all
select * from my_schema.base_table
where ...

You can also use fully-qualified identifiers (for example, if you want the view in a cloned database to refer to the base table in the original database):

create materialized view mv as
select * from db.my_schema.base_table
where ...
union all
select * from db.my_schema.base_table
where ...

Regardless of which option you choose, you must use qualifiers consistently in any references to the base table.

This change is being made to resolve ambiguities in view definitions when cloning the schema or database.

When you clone a database, unqualified identifiers (e.g. base_table) point to the base table in the cloned database. Fully-qualified identifiers (e.g. db.my_schema.base_table) point to the base table in the original database.

Similarly, when you clone a schema, unqualified identifiers (e.g. base_table) point to the base table in the cloned schema. Identifiers qualified by the schema name (e.g. my_schema.base_table) point to the base table in the original schema.

If a view definition uses a mix of qualified and unqualified references to the base table in a cloned database, the references resolve to different base tables (e.g. the base table in the original database and the base table in the cloned database). A materialized view must only have one base table.

Stored Procedures: Splitting Transactions Across Procedures Not Allowed

As previously announced, with this release the following change is introduced:

Previously

Snowflake documents that the following operations are not allowed:

  • Starting a transaction outside a stored procedure and ending inside a stored procedure.

  • Starting a transaction inside a stored procedure and ending outside a stored procedure.

However, these prohibitions were not always enforced.

Currently

Snowflake more stringently enforces these prohibitions.

Existing stored procedures that violate these rules should be updated to comply with the rules for transactions and stored procedures.
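For example, a pattern like the following is now rejected (a minimal sketch; the table t1 and the procedure commit_proc are hypothetical, and commit_proc is assumed to execute a COMMIT internally):

```sql
-- Transaction started outside the stored procedure ...
begin transaction;
insert into t1 values (1);

-- ... but ended inside the procedure: now an error.
call commit_proc();
```

To comply, begin and end the transaction on the same side of the procedure boundary: either entirely in the calling session, or entirely inside the procedure.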

Stored Procedures: Setting AUTOCOMMIT in Procedures Not Allowed

As previously announced, with this release Snowflake introduces the following change:

Previously

Although customers should not set the AUTOCOMMIT session parameter inside a stored procedure, this restriction was not enforced.

Currently

Snowflake enforces the prohibition on setting AUTOCOMMIT inside a stored procedure.

Note that changing the AUTOCOMMIT behavior outside a stored procedure will continue to work.

Any command that sets the AUTOCOMMIT session parameter inside a stored procedure should be removed as soon as possible.

When removing AUTOCOMMIT commands from stored procedures, please take into account the current rules for transactions and stored procedures.

Snowflake also introduces a new feature, scoped transactions, that allows stored procedures to use BEGIN, COMMIT, and ROLLBACK in certain situations. You might be able to replace prohibited AUTOCOMMIT changes in some stored procedures with BEGIN and COMMIT/ROLLBACK commands that are no longer prohibited. The new rules are described in the documentation for rules for transactions and stored procedures.
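As an illustration, a procedure that previously toggled AUTOCOMMIT to group its DML into a single transaction might instead wrap that DML in an explicit scoped transaction (a hedged sketch; the procedure name and the tables t1 and t2 are hypothetical):

```sql
create or replace procedure update_both()
returns varchar
language javascript
as
$$
  // Group both statements into one scoped transaction instead of
  // toggling AUTOCOMMIT (which is no longer allowed in procedures).
  snowflake.execute({sqlText: "BEGIN"});
  try {
    snowflake.execute({sqlText: "INSERT INTO t1 VALUES (1)"});
    snowflake.execute({sqlText: "INSERT INTO t2 VALUES (2)"});
    snowflake.execute({sqlText: "COMMIT"});
    return "Committed.";
  } catch (err) {
    snowflake.execute({sqlText: "ROLLBACK"});
    return "Rolled back: " + err;
  }
$$;
```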

SQL Changes: Commands & Functions

ALTER TABLE Command: Swapping Permanent or Transient Tables with Temporary Tables Blocked

The ALTER TABLE … SWAP WITH syntax transfers all metadata and privileges from one table to another in a single transaction.

As announced previously, with this release, swapping a permanent or transient table with a temporary table (which persists only for the duration of the user session in which it was created) is no longer allowed. This behavior change prevents a naming conflict that could occur when a temporary table is swapped with a permanent or transient table and an existing permanent or transient table has the same name as the temporary table.

Swapping tables can be achieved using the ALTER TABLE … RENAME TO clause.

For example, you can swap tables a and b using three ALTER TABLE statements. In this example, table a is a temporary table; however, the RENAME TO clause works on permanent, transient, and temporary tables alike:

CREATE TEMPORARY TABLE a (col1 string);

CREATE TABLE b (col2 string);

ALTER TABLE a RENAME TO c;

ALTER TABLE b RENAME TO a;

ALTER TABLE c RENAME TO b;

DESCRIBE STREAM, SHOW STREAMS Commands: New Property

As announced previously, with this release, the following new property has been added to the output of the DESCRIBE STREAM and SHOW STREAMS commands as a new row and column, respectively:

Column Name: stale_after

Data Type: TIMESTAMP

Description: Timestamp when the stream became stale or will become stale if the stream is not consumed within the retention period for the source table. If the timestamp is in the future, the value is calculated by adding the retention period for the source table (i.e. the larger of the DATA_RETENTION_TIME_IN_DAYS or MAX_DATA_EXTENSION_TIME_IN_DAYS parameter setting) to the current offset for the stream.
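As a brief illustration, the new column can be inspected directly from SHOW STREAMS output by querying the result of the previous command with RESULT_SCAN (a sketch; assumes the current role can see at least one stream):

```sql
show streams;
select "name", "stale_after"
  from table(result_scan(last_query_id()));
```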

SQL Changes: Information Schema

Views: Change to Query Processing for Selected Views

As previously announced, with this release, processing of the following Snowflake Information Schema views has been improved to prevent query failures:

  • VIEWS

  • OBJECT_PRIVILEGES

  • TABLE_PRIVILEGES

  • REFERENTIAL_CONSTRAINTS

  • TABLE_CONSTRAINTS

  • EXTERNAL_TABLES

Previously

For schemas that contain a large number of tables (or tables with a large number of columns), queries on the views that return a large number of rows could fail. This was due to the schema being processed as a single unit.

Currently

To prevent query failure, Snowflake now divides each schema into multiple units, and each unit is processed individually.

Note that dividing the schema in this way ensures the queries complete successfully; however, this may also impact query performance.

Note

If a DDL operation, such as renaming a schema, is performed while a query is executing on a view in the schema, the query may return inconsistent/unexpected results. This is due to the state of the schema changing between the beginning and end of the query.

QUERY_HISTORY_BY_WAREHOUSE Function: Change to Match Documented Behavior

As previously announced, with this release, the behavior of the QUERY_HISTORY_BY_WAREHOUSE() table function changed as follows:

Previously

The behavior of the function did not match the documentation with respect to quoting the warehouse name. The documentation states:

Note that the warehouse name must be enclosed in single quotes. Also, if the warehouse name contains any spaces, mixed-case characters, or special characters, the name must be double-quoted within the single quotes (e.g. '"My Warehouse"' vs 'mywarehouse') [emphasis added].

The actual behavior was that double quotes within the name were treated as literal characters, not as indicators to preserve case, etc. And because the name was not treated as a delimited identifier, all characters were treated as uppercase.

For example, the following code caused the server to look for a warehouse with a name that started and ended with a double-quote character and in which all characters were uppercase:

QUERY_HISTORY_BY_WAREHOUSE(...  '"TEstWH"'))

As a result, the intended warehouse was not found.

Currently

The behavior matches the documentation. To preserve case, whitespace, etc., the warehouse name should be surrounded by double quotes inside the single quotes.
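For example, to retrieve history for a warehouse created with the mixed-case name TEstWH, the name must now be double-quoted inside the single quotes (a sketch; the warehouse name is illustrative):

```sql
select *
  from table(information_schema.query_history_by_warehouse(
    warehouse_name => '"TEstWH"'));
```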

Ecosystem Changes

JDBC: Change to How ResultSet.getDate Uses the Calendar Object for DATE Values

In JDBC, the ResultSet.getDate method supports a Calendar argument:

Date getDate(int columnIndex, Calendar cal) throws SQLException

With this release, the behavior of this method has changed as follows.

Previously

The method used the timezone of the JVM when constructing the millisecond value for the Date object.

Currently

The method uses the timezone of the Calendar object when constructing the millisecond value for the Date object.
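The practical effect can be seen with plain java.util.Calendar arithmetic, independent of any Snowflake connection (a sketch; the date and time zones are illustrative): the same calendar date maps to different epoch millisecond values depending on the Calendar's time zone, and that zone is now the one getDate respects.

```java
import java.util.Calendar;
import java.util.TimeZone;

public class CalendarDateDemo {
    // Epoch millis for midnight on 2021-01-15 as interpreted in the given zone.
    static long midnightMillis(String zoneId) {
        Calendar cal = Calendar.getInstance(TimeZone.getTimeZone(zoneId));
        cal.clear();                          // zero out the time-of-day fields
        cal.set(2021, Calendar.JANUARY, 15);  // set only the date fields
        return cal.getTimeInMillis();
    }

    public static void main(String[] args) {
        long utc = midnightMillis("UTC");
        long la  = midnightMillis("America/Los_Angeles");
        // Los Angeles is UTC-8 in January, so its midnight is 8 hours
        // later in epoch time than midnight UTC on the same date.
        System.out.println((la - utc) / 3600000L);  // prints 8
    }
}
```

Under the previous behavior, the millisecond value depended on the JVM's default zone regardless of the Calendar passed in; with this change, two callers passing Calendars in different zones can receive Date objects with different millisecond values for the same stored DATE.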