2021_10 Bundle¶
This topic describes the following behavior changes (if any) for the month:
Features that were deprecated.
Bundled changes that were enabled.
Other, non-bundled changes that were implemented.
If you have any questions about these changes, please contact Snowflake Support.
For details about the new features, enhancements, and fixes introduced this month, see February 2022.
Important
Unless otherwise noted, these changes are in the 2021_10 bundle, which was enabled by default in the 6.2 release.
Security Changes¶
DESCRIBE USER Command: New RSA_PUBLIC_KEY and RSA_PUBLIC_KEY_2 Columns in Output¶
The output of the DESCRIBE USER command includes two new columns:
RSA_PUBLIC_KEY
RSA_PUBLIC_KEY_2
These two columns facilitate obtaining public keys that are currently set for the user.
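For example, a minimal sketch of retrieving the key values for a user (the user name jsmith is hypothetical), assuming the keys appear as property entries in the DESCRIBE USER output that can be filtered with RESULT_SCAN:

desc user jsmith;

-- Filter the DESCRIBE USER output for the public key entries.
select "property", "value"
  from table(result_scan(last_query_id()))
  where "property" in ('RSA_PUBLIC_KEY', 'RSA_PUBLIC_KEY_2');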
SQL Changes — General¶
Constraints: Changes to the RELY Constraint Property and Views¶
The RELY constraint property and two constraint views (TABLE_CONSTRAINTS View in ACCOUNT_USAGE and TABLE_CONSTRAINTS View in INFORMATION_SCHEMA) have changed as follows:
- Previously
When you created a new constraint:
RELY was the default.
You could not override this by specifying NORELY in the command. If you specified NORELY, it was either ignored or an error was returned.
For existing constraints, you could not change the RELY constraint property.
The following views did not provide information about the RELY constraint property:
TABLE_CONSTRAINTS in ACCOUNT_USAGE.
TABLE_CONSTRAINTS in INFORMATION_SCHEMA.
- Currently
When you create a new constraint:
NORELY is the default.
You can override this by specifying RELY in the command.
All existing constraints now have the NORELY property, regardless of whether they were originally created with RELY or NORELY. You can change the constraint property from NORELY to RELY.
The following views include a RELY column that specifies whether or not the RELY constraint property is set:
TABLE_CONSTRAINTS in INFORMATION_SCHEMA. (Update: The RELY column was added in August 2022.)
The RELY column will be added to TABLE_CONSTRAINTS in ACCOUNT_USAGE in an upcoming release.
This change was made to support upcoming improvements to query optimization. These improvements make use of the constraints defined for the table.
With the new default NORELY setting, the query optimizer does not assume that the data in a table complies with the constraints. If you have ensured that the data in the table does comply with the constraints, you can change this to RELY (i.e. to indicate that the query optimizer should expect the data in the table to adhere to the constraints).
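For example, a minimal sketch (the table and constraint names are hypothetical) of creating a constraint with the RELY property and checking the new RELY column in the Information Schema view:

-- NORELY is now the default; specify RELY only if the data is known to comply.
create table orders (
  id number,
  constraint pk_orders primary key (id) rely
);

-- The RELY column indicates whether the property is set for the constraint.
select constraint_name, rely
  from information_schema.table_constraints
  where table_name = 'ORDERS';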
Materialized Views: Change to How Views Are Created When Cloning Databases¶
The way in which materialized views are created when you clone a database has changed as follows:
- Previously
When you cloned a database, any materialized views in the cloned database were created with fully qualified names, even if you did not specify the fully qualified name when creating the view in the original database.
For example, suppose that you created a materialized view with the unqualified name mv in a database db1:

use database db1;
create materialized view mv as ...;

Suppose that you then clone the database db1:

create database db1_clone clone db1;

The CREATE MATERIALIZED VIEW statement that created the view in the cloned database used the fully qualified name of the view.

You could view this statement by running the SHOW MATERIALIZED VIEWS command:

use database db1_clone;
show materialized views;

The column named text contains the text of the command that created this materialized view:

| text
+------ ...
| create or replace materialized view DB1_CLONE.PUBLIC.MV as ...

As demonstrated in this example, the command used the fully qualified name for the materialized view (DB1_CLONE.PUBLIC.MV).
- Currently
The CREATE MATERIALIZED VIEW statement in the cloned database does not include the name of the cloned database and schema unless those names were specified in the original CREATE MATERIALIZED VIEW statement.
For example, suppose that you create a materialized view with the unqualified name mv in a database db1:

use database db1;
create materialized view mv as ...;

Suppose that you then clone the database db1:

create database db1_clone clone db1;

When you run the command:

use database db1_clone;
show materialized views;

-- OR --

use database db1_clone;
show views;

The CREATE MATERIALIZED VIEW statement in the text column uses the unqualified name for the view because the unqualified name was used in the original CREATE MATERIALIZED VIEW statement:

| text
+------ ...
| create or replace materialized view mv as ...
This change was made to prevent potential issues when renaming a cloned database. Previously, when you renamed a cloned database, the database name embedded in the materialized view definition was not updated.
For example, suppose that you rename the cloned database db1_clone to db2:

alter database db1_clone rename to db2;

When you run the following command:

use database db2;
show materialized views;

The command in the text column used the original name of the cloned database (db1_clone), not the new name of the cloned database (db2):

| text
+------ ...
| create or replace materialized view DB1_CLONE.PUBLIC.MV as ...

As a result, querying the materialized view results in an error:

select * from mv;

SQL compilation error: Failure during expansion of view 'MV': Cannot perform SELECT.
This behavior change prevents this error from occurring.
SQL Changes — Commands & Functions¶
SHOW ORGANIZATION ACCOUNTS Command: New Columns in Output¶
To provide a better understanding of how accounts map to billing entities in an organization, the following columns have been added to the output of the SHOW ORGANIZATION ACCOUNTS command:
| Column Name | Data Type | Description |
|---|---|---|
| CONSUMPTION_BILLING_ENTITY_NAME | TEXT | The name of the consumption billing entity. |
| MARKETPLACE_CONSUMER_BILLING_ENTITY_NAME | TEXT | The name of the marketplace consumer billing entity. |
| MARKETPLACE_PROVIDER_BILLING_ENTITY_NAME | TEXT | The name of the marketplace provider billing entity. |
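For example, a minimal sketch of inspecting the new columns (the command requires an appropriate role, such as ORGADMIN); filtering the SHOW output with RESULT_SCAN is one possible approach:

show organization accounts;

-- Select the new billing entity columns from the SHOW output.
select "account_name",
       "consumption_billing_entity_name",
       "marketplace_consumer_billing_entity_name",
       "marketplace_provider_billing_entity_name"
  from table(result_scan(last_query_id()));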
GET_DDL Function: Changes for Views¶
The GET_DDL function returns a DDL statement that can be used to recreate the specified object, with variations in the query output depending on the specified object in the function argument. The behavior of the GET_DDL function for views has changed as follows:
- Previously
Snowflake returned the exact SQL statement to recreate the view. If the view was a secure view, Snowflake returned the DDL statement to create the view and an ALTER statement to set the SECURE property on the view.
- Currently
Snowflake has updated the query result for views as follows:
The query result returns lowercase SQL text for create or replace view, even if the casing in the original SQL statement used to create the view was uppercase or mixed case.

The OR REPLACE clause is always included in the CREATE VIEW statement.

SQL comments before the view body (AS) are removed.

The column list is always generated. If a masking policy is set on a column, the result specifies the masking policy for the column.

If the view has a masking policy on one or more of its columns or a row access policy, and the role executing the GET_DDL query does not have the global APPLY MASKING POLICY or APPLY ROW ACCESS POLICY privilege, the policy name is replaced with #unknown_policy. See the note below.

If the view is secure, the query result includes the SECURE property in the CREATE statement; an additional ALTER statement to set the SECURE property is no longer included in the query result.

COPY GRANTS is not included, even if it was specified in the original CREATE VIEW statement.

The CREATE VIEW statement always ends with a semicolon.
Note
For a view that includes a masking policy, row access policy, or both, the #unknown_policy text in the query result causes the CREATE VIEW statement to fail if this text is not removed prior to recreating the view. This behavior is expected. The purpose of this text is to indicate that the column or the view is protected by a policy.

If the GET_DDL query result includes the #unknown_policy text, then prior to recreating the view, consult with your internal governance administrator to determine what policies are necessary for the columns or view, edit the GET_DDL query result, and then recreate the view.
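For example, a minimal sketch of retrieving the DDL for a view (the database, schema, and view names are hypothetical):

-- Returns a lowercase create or replace view statement with the column list,
-- ending in a semicolon. Policy names may appear as #unknown_policy if the
-- role lacks the APPLY MASKING POLICY or APPLY ROW ACCESS POLICY privilege.
select get_ddl('view', 'db1.public.v1');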
INFER_SCHEMA Function: Change to NULL Columns¶
The INFER_SCHEMA function detects the column definitions in a set of staged data files that contain semi-structured data, and retrieves the metadata in a format suitable for creating Snowflake objects.
The condition that determines whether the function returns the NULL or NOT NULL constraint for a column has changed as follows:
- Previously
The INFER_SCHEMA function returned the nullability constraint for a column as indicated in the metadata for the files that include the column. When the input for the function was a single file, the nullability constraint returned for the columns was always correct. However, when a column was identified in the metadata as required but was not included in all of the input files, the function still returned a NOT NULL constraint for the column. This logic could result in errors when loading all of the files into a table created using the INFER_SCHEMA function output.
When creating a table with the column definitions derived from a set of staged data files (using the CREATE TABLE … USING TEMPLATE syntax), all columns in the table were defined as nullable.
- Currently
The INFER_SCHEMA function returns a column as nullable if the column is missing or indicated as optional from any input files. The function returns a column as non-nullable only if the column is identified as required in all input files.
The GENERATE_COLUMN_DESCRIPTION function and the CREATE TABLE … USING TEMPLATE command syntax follow the same nullability behavior as the INFER_SCHEMA function.
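For example, a minimal sketch (the stage and file format names are hypothetical) of inspecting the inferred nullability and then creating a table from the same output:

-- Inspect the inferred column definitions, including the NULLABLE flag.
select *
  from table(
    infer_schema(
      location => '@my_stage',
      file_format => 'my_parquet_format'
    )
  );

-- Create a table whose columns follow the same nullability rules.
create table my_table
  using template (
    select array_agg(object_construct(*))
      from table(
        infer_schema(
          location => '@my_stage',
          file_format => 'my_parquet_format'
        )
      )
  );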
SQL Changes — Usage Views & Information Schema¶
ACCESS_HISTORY View: Support for Write Operations¶
The behavior of the ACCESS_HISTORY View in the ACCOUNT_USAGE schema has changed as follows:
- Previously
The ACCESS_HISTORY view only supported SQL read operations.
- Currently
The ACCESS_HISTORY view supports SQL read and write operations as follows:
Additional rows have been included in the query output of the view to indicate that write operations occurred.
A new column, OBJECTS_MODIFIED, of ARRAY data type, specifies objects that were modified in the write portion of a SQL query.
If a stage was accessed, the objectDomain field specifies the value STAGE.

If a stage was accessed in the read portion of the query, the DIRECT_OBJECTS_ACCESSED and BASE_OBJECTS_ACCESSED columns have been updated as follows:

A new JSON field, stageKind, specifies the kind of stage.

The objectName and objectId fields specify the corresponding values for a user, table, and named stage.

For details on supported and unsupported write operations, see the notes below.
Note the following:
The OBJECTS_MODIFIED column returns an array in the following format:
{ "columns": [ { "columnName": <string>, "columnId": <number> }, { "columnName": <string>, "columnId": <number> } ... ], "objectId": <number>, "objectName": <string>, "objectDomain": TABLE | STAGE, "location": <string>, "stageKind": Table | User | Internal Named | External Named }
If a stage was accessed in the write portion of the query:

The objectId value is as follows:

The objectName value is as follows:

User stage: the value is the username.

Table stage: the value is the table_name.

Named stage: the value is the stage_name.
If a stage was accessed in the write portion of the query, the BASE_OBJECTS_ACCESSED and DIRECT_OBJECTS_ACCESSED columns include the following JSON fields:
{ "objectDomain": STAGE "objectName": <string>, "objectId": <number>, "stageKind": <string> }
The possible values for these fields in the two columns are the same as for the OBJECTS_MODIFIED column.
Snowflake supports the following write operations in the ACCESS_HISTORY view:
GET <internal_stage>
PUT <internal_stage>
DELETE
INSERT
INSERT INTO … FROM SELECT *
INSERT INTO TABLE … VALUES ()
MERGE INTO … FROM SELECT *
UPDATE
UPDATE TABLE … FROM SELECT * FROM …
UPDATE TABLE … WHERE …
Data loading statements:
COPY INTO TABLE FROM internalStage
COPY INTO TABLE FROM externalStage
COPY INTO TABLE FROM externalLocation
Data unloading statements:
COPY INTO internalStage FROM TABLE
COPY INTO externalStage FROM TABLE
COPY INTO externalLocation FROM TABLE
CREATE:
CREATE DATABASE … CLONE
CREATE SCHEMA … CLONE
CREATE TABLE … CLONE
CREATE TABLE … AS SELECT
Snowflake does not support the following write operations in the ACCESS_HISTORY view:
The operations to populate views, materialized views, and streams.
Data movement resulting from replication.
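For example, a minimal sketch of listing the objects modified by recent write operations by flattening the new OBJECTS_MODIFIED column (the seven-day window is arbitrary):

select ah.query_id,
       ah.query_start_time,
       om.value:"objectName"::string   as object_name,
       om.value:"objectDomain"::string as object_domain
  from snowflake.account_usage.access_history ah,
       lateral flatten(input => ah.objects_modified) om
  where ah.query_start_time >= dateadd('day', -7, current_timestamp());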
COPY_HISTORY: Consistent STATUS Column Letter Case in Output¶
The status of a data load is reported in the STATUS column in the output of the COPY_HISTORY table function in the Information Schema and the COPY_HISTORY View in ACCOUNT_USAGE. The values returned in the STATUS column have changed as follows:
- Previously
For bulk data loads (COPY INTO <table> statements), the status values were returned with the first letter of the first word in uppercase and the remaining words in all lowercase, as documented: Loaded, Load failed, etc.

For Snowpipe data loads, the status values were returned in uppercase: LOADED, LOAD FAILED, etc.

- Currently

For both bulk and Snowpipe data loads, the status values are returned with the first letter of the first word in uppercase and the remaining words in all lowercase.
This change applies consistency to the STATUS column values and brings them into alignment with the product documentation.
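For example, a minimal sketch of checking the normalized STATUS values for recent loads into a table (the table name and 24-hour window are hypothetical):

select file_name, last_load_time, status
  from table(
    information_schema.copy_history(
      table_name => 'my_table',
      start_time => dateadd('hour', -24, current_timestamp())
    )
  );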
Extensibility Changes¶
Java UDFs: Changes to Scala JAR File Inclusion Criteria¶
Java UDF behavior has changed as follows:
- Previously
For Java UDFs, Scala JAR files were included in the JVM classpath.
- Currently
For Java UDFs, Scala libraries are no longer included in the classpath. If any of your Java UDF code depends on Scala libraries, include the Scala JAR files in the imports list when you create new UDFs or replace existing UDFs. For example:
create or replace function add(x integer, y integer)
  returns integer
  language java
  imports = ('@stage/scala-library-2.12.jar')
  handler = 'TestAddFunc.add'
UDFs created using the Snowpark Scala library are not affected.
Data Loading Changes¶
COPY INTO <table> Command: MATCH_BY_COLUMN_NAME Copy Option Returns Error When Loading CSV Data¶
The MATCH_BY_COLUMN_NAME copy option enables loading semi-structured data into separate columns in the target table that match corresponding columns represented in the source data. The copy option does not support loading data from comma-separated values (CSV) files.
The behavior when attempting to load CSV-formatted data with the MATCH_BY_COLUMN_NAME copy option set to either CASE_SENSITIVE or CASE_INSENSITIVE has changed as follows:
- Previously
The COPY INTO <table> statement did not return an error when used with CSV file formats; the MATCH_BY_COLUMN_NAME setting was ignored and did not affect the data load.
- Currently
The COPY INTO <table> statement returns a user error because the option does not support CSV files.
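For example, a minimal sketch (the stage, table, and file format settings are hypothetical); the first statement now returns an error, while the semi-structured variant is unaffected:

-- Now returns a user error: MATCH_BY_COLUMN_NAME does not support CSV.
copy into my_table
  from @my_stage
  file_format = (type = csv)
  match_by_column_name = case_insensitive;

-- Still supported for semi-structured formats such as Parquet.
copy into my_table
  from @my_stage
  file_format = (type = parquet)
  match_by_column_name = case_insensitive;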
COPY INTO <location> Command: Explicit Column Casts Ignored When Unloading to Parquet Data¶
When unloading numeric table data to Parquet files, calling the CAST function in the COPY INTO <location> statement enables you to choose Snowflake data types that map to Parquet data types.
The behavior when explicitly casting numeric column data to Parquet files has changed as follows:
- Previously
When at least one numeric column was explicitly cast to a data type that did not map to a Parquet data type, the data unload operation ignored any explicit casts that did map to Parquet data types. Fixed-point number columns were unloaded as DECIMAL columns, while floating-point number columns were unloaded as DOUBLE columns.
- Currently
Data unload operations honor all explicit casts of numeric column data, regardless of whether the target Snowflake data types map to Parquet data types.
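For example, a minimal sketch (the stage, table, and column names are hypothetical) in which both explicit casts are now honored in the unloaded Parquet files:

copy into @my_stage/unload/
  from (
    select cast(c1 as smallint) as c1,  -- explicit cast of a fixed-point column
           cast(c2 as float)    as c2   -- explicit cast of a floating-point column
      from my_table
  )
  file_format = (type = parquet);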
Data Sharing Changes¶
Managed Accounts: Changes to Support New Account Name Format¶
Snowflake introduces a new URL based on an updated managed account name, which customers can choose and change. The URL has the following format:
<organization_name>-<managed_account_name>.snowflakecomputing.com
The new managed account name has two new naming rules:
Underscore (_) is the only valid delimiter in the name.
The name must be unique to the organization that the managed account is in.
Most existing managed account names already follow the new naming rules and the names will stay the same. Managed account names that did not follow these rules were automatically updated as follows:
If the managed account name contained non-underscore separators, they were converted to underscores. For example, if the managed account name was managed account-1, the new managed account name is managed_account_1.

If the managed account name was not unique to the organization, the locator name was appended to the managed account name. For example, if the managed account name was managed with a locator of RE47190, the new managed account name is managed_RE47190.
The updated managed account name is used in all managed account commands:
CREATE MANAGED ACCOUNT enforces the new naming rules.
SHOW MANAGED ACCOUNTS shows the updated managed account name in the name column.
DROP MANAGED ACCOUNT uses the updated managed account name as a parameter.
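For example, a minimal sketch of creating a managed (reader) account under the new naming rules (the account name and credentials are hypothetical):

create managed account reader_acct_1
  admin_name = reader_admin,
  admin_password = 'Sdfed43da!44',
  type = reader;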
SHOW MANAGED ACCOUNTS Command: New Account Name Field in Output¶
The output of the SHOW MANAGED ACCOUNTS command has changed as follows:
- Previously
The url column displayed the account locator URL in the following format:
<account_locator>.snowflakecomputing.com
- Currently
The url column displays the account name URL using the new URL format introduced in the organizations feature. The new URL has the following format:
<organization_name>-<managed_account_name>.snowflakecomputing.com
In addition, the output includes a new column, account_locator_url, to display the account locator URL.
Note
Depending on the region and cloud platform where your account is hosted, the account locator URL may have additional segments as follows:
<account_locator>.<region_code>.<cloud>.snowflakecomputing.com
The existing account locator URL will continue to work as before in addition to the new URL.