August 2022¶

The following new features, behavior changes, and updates (enhancements, fixes, etc.) have been introduced this month. If you have any questions about these additions, please contact Snowflake Support.

Important

Each release may include updates that require the web interface to be refreshed.

As a general practice, to ensure these updates do not impact your usage, we recommend refreshing the web interface after each Snowflake release has been deployed.

New Features¶

Support for GEOMETRY Data Type — Preview¶

With this release, we are pleased to introduce a preview of support for the new GEOMETRY data type. The GEOMETRY data type represents features in a planar (Euclidean, Cartesian) coordinate system. This feature is available to all Snowflake accounts.

This release provides functions for constructing, formatting, measuring, and computing relationships between GEOMETRY objects. Using these functions, you can construct GEOMETRY objects from data in standard formats, including WKT, WKB, and GeoJSON.
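
For example, a GEOMETRY object can be constructed from a WKT string (a minimal sketch; the coordinates are arbitrary):

SELECT TO_GEOMETRY('POINT(1820.12 890.56)');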

For more information, see Geospatial Data Types.

Preview features are intended for evaluation and testing purposes. These features are not recommended for use in production.

Snowpark Stored Procedures in Java — General Availability¶

With this release, we are pleased to announce general availability for Snowpark stored procedures written in Java on Amazon Web Services (AWS) and Microsoft Azure. Stored procedures for Java are available for preview on Google Cloud Platform (GCP).

In your stored procedure, you can use the Snowpark API for Java to host your data pipelines in Snowflake. For example, you can write stored procedures in cases where you need to execute your Snowpark code without running a client application (e.g. from a task).
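
As a minimal sketch (the procedure, class, and method names below are hypothetical), an in-line Java stored procedure that uses the Snowpark API might look like this:

CREATE OR REPLACE PROCEDURE my_java_proc(name VARCHAR)
  RETURNS VARCHAR
  LANGUAGE JAVA
  RUNTIME_VERSION = '11'
  PACKAGES = ('com.snowflake:snowpark:latest')
  HANDLER = 'MyHandler.run'
  AS
$$
import com.snowflake.snowpark_java.Session;

public class MyHandler {
  // Snowflake passes the Snowpark session as the first argument.
  public static String run(Session session, String name) {
    return "Hello, " + name;
  }
}
$$;

You could then invoke the procedure with CALL my_java_proc('World'); or schedule it from a task.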

For more information, see Writing Stored Procedures in Java.

Snowpark Stored Procedures in Scala — General Availability¶

With this release, we are pleased to announce general availability for Snowpark stored procedures written in Scala on Amazon Web Services (AWS) and Microsoft Azure. Stored procedures for Scala are available for preview on Google Cloud Platform (GCP).

In your stored procedure, you can use the Snowpark API for Scala to host your data pipelines in Snowflake. For example, you can write stored procedures in cases where you need to execute your Snowpark code without running a client application (e.g. from a task).
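
As a minimal sketch (the procedure, object, and method names below are hypothetical), an in-line Scala stored procedure that uses the Snowpark API might look like this:

CREATE OR REPLACE PROCEDURE my_scala_proc(name VARCHAR)
  RETURNS VARCHAR
  LANGUAGE SCALA
  RUNTIME_VERSION = '2.12'
  PACKAGES = ('com.snowflake:snowpark:latest')
  HANDLER = 'MyHandler.run'
  AS
$$
import com.snowflake.snowpark.Session

object MyHandler {
  // Snowflake passes the Snowpark session as the first argument.
  def run(session: Session, name: String): String = {
    "Hello, " + name
  }
}
$$;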

For more information, see Writing Stored Procedures in Scala.

Account Replication — Preview¶

With this release, we are pleased to announce an open preview of Account Replication. This feature introduces two new Snowflake objects, replication group and failover group. A replication group allows customers to specify which account objects to replicate, to which regions or cloud platforms, at customizable scheduled intervals. A failover group enables the replication and failover of the account objects in a group. The objects in a group are replicated with point-in-time consistency from a source account to one or more target accounts.

Account objects can include warehouses, users, and roles, along with databases and shares (see Replicated Objects for the full list of objects that can be included in a replication or failover group). Account objects can be organized into one or more groups.

Account replication also enables failover of your entire account to a different region or cloud platform. Each replication and failover group has its own replication schedule, allowing you to replicate different groups of objects at different intervals. Failover groups can also be failed over individually: you can choose to fail over all failover groups or only selected ones.
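
As a sketch of how a failover group might be defined (the group name, object lists, target account, and schedule below are hypothetical):

CREATE FAILOVER GROUP my_fg
  OBJECT_TYPES = USERS, ROLES, WAREHOUSES, DATABASES, SHARES
  ALLOWED_DATABASES = db1, db2
  ALLOWED_SHARES = share1
  ALLOWED_ACCOUNTS = myorg.target_account
  REPLICATION_SCHEDULE = '10 MINUTE';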

SQL Updates¶

INFER_SCHEMA Function: New Parameters¶

In this release, Snowflake provides two new arguments for the INFER_SCHEMA function:

FILES = ( 'file_name' [ , 'file_name' ... ] )

Specifies a list of one or more file names (separated by commas) in a set of staged files that contain semi-structured data.

IGNORE_CASE = TRUE | FALSE

Specifies whether column names detected from the staged files are treated as case sensitive. By default, the value is FALSE, which means that Snowflake preserves the case of alphabetic characters when retrieving column names. If you set the value to TRUE, column names are treated as case-insensitive and all column names are retrieved in uppercase.
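
The following sketch combines both new arguments (the stage name, file names, and file format name are hypothetical):

SELECT *
  FROM TABLE(
    INFER_SCHEMA(
      LOCATION => '@mystage',
      FILE_FORMAT => 'my_parquet_format',
      FILES => ('sales_1.parquet', 'sales_2.parquet'),
      IGNORE_CASE => TRUE
    )
  );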

In a future release, the output of the INFER_SCHEMA function will include a new ORDER_ID column that indicates the column order in the staged files. For more information, see the Behavior Change Log.

CREATE EXTERNAL TABLE: Derive Column Definitions from Staged Files — Preview¶

With this release, Snowflake supports creating new external tables with column definitions derived from a set of staged files containing semi-structured data. To use this capability, include the USING TEMPLATE keywords in your CREATE EXTERNAL TABLE statement. Previously, this capability was limited to regular tables.

For example:

CREATE OR REPLACE EXTERNAL TABLE myet
    USING TEMPLATE (
      SELECT ARRAY_AGG(OBJECT_CONSTRUCT(*))
        FROM TABLE(
          INFER_SCHEMA(
            LOCATION => '@filestage',
            FILE_FORMAT => 'ffparquet'
          )
        )
    )
    LOCATION = @filestage
    FILE_FORMAT = ffparquet
    AUTO_REFRESH = FALSE;

New SQL Functions¶

The following functions were introduced in recent releases:

  • ST_DIFFERENCE (Geospatial, Transformation): Given two input GEOGRAPHY objects, returns a GEOGRAPHY object that represents the points in the first object that are not in the second object (i.e. the difference between the two objects).

  • ST_INTERSECTION (Geospatial, Transformation): Given two input GEOGRAPHY objects, returns a GEOGRAPHY object that represents the set of points that are common to both input objects (i.e. the intersection of the two objects).

  • ST_SYMDIFFERENCE (Geospatial, Transformation): Given two input GEOGRAPHY objects, returns a GEOGRAPHY object that represents the set of points from both input objects that are not part of the intersection of the objects (i.e. the symmetric difference of the two objects).

  • ST_UNION (Geospatial, Transformation): Given two input GEOGRAPHY objects, returns a GEOGRAPHY object that represents the combined set of all points from both objects (i.e. the union of the two objects).

  • ST_ENDPOINT (Geospatial, Accessor): Returns the last Point in a LineString.

  • ST_POINTN (Geospatial, Accessor): Returns the Point at a specified index in a LineString.

  • ST_STARTPOINT (Geospatial, Accessor): Returns the first Point in a LineString.
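
The following sketch calls two of these functions on ad hoc GEOGRAPHY values built with TO_GEOGRAPHY (the input shapes are arbitrary illustrations):

-- Symmetric difference of two overlapping polygons
SELECT ST_SYMDIFFERENCE(
         TO_GEOGRAPHY('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))'),
         TO_GEOGRAPHY('POLYGON((1 1, 1 3, 3 3, 3 1, 1 1))')
       );

-- Second Point of a LineString
SELECT ST_POINTN(TO_GEOGRAPHY('LINESTRING(0 0, 1 1, 2 2)'), 2);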

Data Pipeline Updates¶

Tasks: Support for Specifying Multiple Predecessor Tasks in a Single SQL Statement¶

A Directed Acyclic Graph (DAG) of tasks consists of a single root task and a set of child tasks, organized by their dependencies. Any task in a DAG other than the root task can have multiple predecessor tasks. A task runs only after all of its predecessor tasks have completed successfully.

With this release, you can specify multiple predecessor tasks when creating a new task (using CREATE TASK … AFTER) or later (using ALTER TASK … ADD AFTER). The existing SQL command syntax now supports a comma-separated list of tasks.

The following example creates serverless task a4 and specifies tasks a1, a2, and a3 as predecessor tasks. The task executes a trivial query:

CREATE TASK a4
  AFTER a1, a2, a3
  AS SELECT 1;

The following example modifies existing task b4 and specifies tasks b1, b2, and b3 as predecessor tasks:

ALTER TASK b4 ADD AFTER b1, b2, b3;

Previously, users were limited to specifying a single predecessor task when creating or modifying tasks.

Organization Updates¶

ORGADMIN Role — General Availability¶

With this release, we are pleased to announce the general availability of the ORGADMIN role in Snowflake. Organization administrators with the ORGADMIN role can:

  • View all Snowflake accounts and regions under an organization

  • Create new Snowflake accounts under an organization

  • Rename Snowflake accounts in an organization

  • Access the ORGANIZATION_USAGE schema, which provides usage and billing data for all Snowflake accounts that belong to an organization

For organizations with multiple accounts, the ORGADMIN role is available in the first account that was created, unless you requested that it be added to a different account. You can verify whether the ORGADMIN role is available by checking the list of roles in the web interface.

The account administrator (i.e. a user with the ACCOUNTADMIN role) can grant the ORGADMIN role to themselves or to any other user by executing the following command:

GRANT ROLE ORGADMIN TO USER user1;

For more information, see Introduction to organizations.
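
For example (the account name and administrator details below are hypothetical), an organization administrator can list the accounts in the organization and create a new one:

USE ROLE ORGADMIN;

-- List all accounts in the organization
SHOW ORGANIZATION ACCOUNTS;

-- Create a new account in the organization
CREATE ACCOUNT my_new_account
  ADMIN_NAME = admin_user
  ADMIN_PASSWORD = 'TestPassword1'
  EMAIL = 'admin@example.com'
  EDITION = ENTERPRISE;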

ORGANIZATION_USAGE Views for Monitoring Costs — General Availability¶

With this release, we are pleased to announce the general availability of the following views in the ORGANIZATION_USAGE schema in the SNOWFLAKE shared database. You can use these views to break down costs at the object level across all the accounts in your organization (see the example query after the list):

  • Storage:

    • DATABASE_STORAGE_USAGE_HISTORY

    • STAGE_STORAGE_USAGE_HISTORY

  • Data transfer:

    • DATA_TRANSFER_HISTORY

  • Serverless features:

    • AUTOMATIC_CLUSTERING_HISTORY

    • MATERIALIZED_VIEW_REFRESH_HISTORY

    • PIPE_USAGE_HISTORY

    • REPLICATION_USAGE_HISTORY

    • SEARCH_OPTIMIZATION_HISTORY
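
For example, a sketch of a query against one of these views (the column names are assumed to mirror the corresponding ACCOUNT_USAGE view, with the account name added):

SELECT account_name,
       usage_date,
       database_name,
       average_database_bytes
  FROM snowflake.organization_usage.database_storage_usage_history
 WHERE usage_date >= DATEADD(month, -1, CURRENT_DATE())
 ORDER BY average_database_bytes DESC;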

Support for Renaming Accounts¶

With this release, an organization administrator can use the ALTER ACCOUNT command to rename an account without needing to contact Snowflake Support. The following example renames an account called original_acctname to new_acctname:

ALTER ACCOUNT original_acctname RENAME TO new_acctname;

By default, Snowflake continues to allow access to the account through the old URL, which contains the previous account name, while also supporting a new URL that contains the new account name. You can optionally drop the old URL to require access through the new URL. For details, see Managing accounts in your organization.

Data Governance Updates¶

Access History: View Shared Object Access in Consumer Accounts¶

In this release, Snowflake adds the ability for consumer accounts to see how their users access objects shared with them using Snowflake Secure Data Sharing. With this update, Snowflake records the shared object that was accessed in the base_objects_accessed column of the ACCESS_HISTORY view in Account Usage.

For example, if a provider shares a table and a view built from that table with a consumer account, and a user in the consumer account queries the shared view, Snowflake records the shared view access in the base_objects_accessed column. This record, which includes the columnName and objectName values, lets the consumer know which object was accessed in their account while also protecting the provider, because the underlying table (identified by its objectId and columnId) is not revealed to the consumer.
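
The following sketch shows how a consumer might list accesses of shared views recorded in this column (the objectDomain filter value is an assumption):

SELECT query_id,
       query_start_time,
       obj.value:"objectName"::STRING AS object_name,
       obj.value:"objectDomain"::STRING AS object_domain
  FROM snowflake.account_usage.access_history,
       LATERAL FLATTEN(input => base_objects_accessed) obj
 WHERE obj.value:"objectDomain"::STRING = 'View'
 ORDER BY query_start_time DESC;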

Masking Policies & Row Access Policies: Support Added for IS_ROLE_IN_SESSION in Mapping Table Lookups¶

With this release, Snowflake enhances the versatility of masking and row access policies by allowing the IS_ROLE_IN_SESSION context function to support mapping table lookups. Policy administrators can now write policy conditions that call mapping table lookups with this function to determine whether a user’s active primary or secondary roles in the session inherit the privileges granted to the role specified in the context function argument. With this enhancement, Snowflake encourages customers to use the IS_ROLE_IN_SESSION context function in policy conditions that require checking the enabled roles in the role hierarchy of the current role.

For more information, see Understanding Column-level Security and Understanding row access policies.
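
As a sketch of the pattern (the mapping table and policy names are hypothetical), a masking policy can look up the roles listed in a mapping table and reveal the value only when the session’s role hierarchy includes one of them:

-- Hypothetical mapping table: each row names a role that may see clear-text values.
CREATE TABLE IF NOT EXISTS role_entitlements (allowed_role STRING);

CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE
    WHEN EXISTS (
      SELECT 1
        FROM role_entitlements
       WHERE IS_ROLE_IN_SESSION(allowed_role)   -- mapping table lookup
    ) THEN val
    ELSE '*** MASKED ***'
  END;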

Masking Policies: Query a View for Policy Conditions Regarding NULL, Length, Scale, and Precision Return Values¶

In this release, Snowflake provides a new, interim view in the shared SNOWFLAKE database named:

SNOWFLAKE.BCR_ROLLOUT.BCR_2022_07_DDM_ROLLOUT

The view helps identify where the conditions of a masking policy do not match the length, scale, or precision of a column or when the policy conditions return a NULL value for a NOT NULL column. The view returns records beginning on July 15, 2022.

This view is provided to help you determine how the 2022_07 behavior change bundle will affect queries on columns that are protected by a masking policy, and to give you time to begin updating your masking policy conditions.
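
To review the affected records, you can query the view directly; for example:

SELECT *
  FROM snowflake.bcr_rollout.bcr_2022_07_ddm_rollout;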

For details, see Mitigating Masking Policy Return Value Updates.