New maximum size limits for database objects
For both existing tables and tables created after the change, the default maximum length for columns of type VARIANT, ARRAY, and OBJECT is 128 MB, and the default maximum length for columns of type GEOGRAPHY and GEOMETRY is 64 MB.
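As a minimal sketch of what this means in practice (the table name is hypothetical), columns of these types pick up the new default maximum lengths automatically, with no explicit size declared:

    CREATE OR REPLACE TABLE size_limits_demo (
        v VARIANT,    -- default maximum length is now 128 MB
        a ARRAY,      -- default maximum length is now 128 MB
        o OBJECT,     -- default maximum length is now 128 MB
        g GEOGRAPHY,  -- default maximum length is now 64 MB
        m GEOMETRY    -- default maximum length is now 64 MB
    );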
MAX
Returns the maximum value for the records within expr. NULL values are ignored unless all the records are NULL, in which case a NULL value is returned.
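For illustration, two hedged examples of the NULL-handling rule (the inline VALUES rows are made up):

    SELECT MAX(column1) FROM VALUES (1), (NULL), (3);          -- returns 3; the NULL is ignored
    SELECT MAX(column1) FROM VALUES (NULL::INT), (NULL::INT);  -- returns NULL; every record is NULL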
MIN and MAX functions only use metadata-based results for some data types
CAUSE: This behavior is by design. SOLUTION: Metadata-based folding is not applied to data types such as VARCHAR, BINARY, and FLOAT (DOUBLE).
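A hedged illustration of the distinction (the table and column names are hypothetical); whether metadata was used can be confirmed in the query profile, which shows a metadata-based result rather than a table scan:

    SELECT MAX(order_date)    FROM orders;  -- DATE: eligible for metadata-based folding
    SELECT MAX(customer_name) FROM orders;  -- VARCHAR: answered by scanning the data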
SQL data types: Changes to maximum length, output, and error messages
You can now read and process objects up to 128 MB in size.
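As a rough sketch (the table and column are hypothetical), the serialized size of a stored value can be inspected to confirm it stays under the new limit; LENGTH counts characters, so this is only an approximation of bytes:

    SELECT LENGTH(TO_JSON(v)) AS approx_serialized_size
    FROM big_objects
    ORDER BY approx_serialized_size DESC;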
A query performing an aggregation on a secure view is not resolved using metadata
ISSUE: A query involving an aggregation function such as MAX() is not resolved using metadata. Instead, the query executes and takes longer than expected.
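A minimal reproduction sketch, with all object names hypothetical: the same aggregation that can be folded to metadata against the base table runs as a full query through the secure view:

    CREATE OR REPLACE SECURE VIEW orders_sv AS SELECT * FROM orders;

    SELECT MAX(order_id) FROM orders;     -- may be resolved from metadata
    SELECT MAX(order_id) FROM orders_sv;  -- secure view: executes and scans the data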
MAX (system data metric function)
Returns the maximum value for the specified column in a table. The MAX system data metric function is optimized to calculate the maximum value for a single column and offers better performance than calling the MAX aggregate function.
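A hedged usage sketch, following the SNOWFLAKE.CORE data metric function conventions (the table and column names are hypothetical):

    -- Ad hoc call of the system DMF on a single column.
    SELECT SNOWFLAKE.CORE.MAX(SELECT amount FROM sales);

    -- Or associate it with the table so the metric is computed on the table's schedule.
    ALTER TABLE sales ADD DATA METRIC FUNCTION SNOWFLAKE.CORE.MAX ON (amount);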
Controlling network traffic with network policies
By default, Snowflake allows users to connect to the service and internal stage from any computer or device. A security administrator (or higher) can use a network policy to allow or deny access to a request based on its origin.
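For example, a network policy that admits only a corporate CIDR range might look like the following sketch (the policy name and addresses are illustrative):

    CREATE NETWORK POLICY corp_access_only
      ALLOWED_IP_LIST = ('192.0.2.0/24')
      BLOCKED_IP_LIST = ('192.0.2.100');

    -- Activating the policy account-wide requires SECURITYADMIN or higher.
    ALTER ACCOUNT SET NETWORK_POLICY = corp_access_only;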
SYSTEM$UNBLOCK_INTERNAL_STAGES_PUBLIC_ACCESS
Allows traffic from public IP addresses to access the internal stage of the current Snowflake account on Microsoft Azure. This function reverses the Azure settings on the internal stage’s Azure storage account that were made…
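The function takes no arguments and is called like any other system function, for example:

    SELECT SYSTEM$UNBLOCK_INTERNAL_STAGES_PUBLIC_ACCESS();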
JDBC Driver Failure - net.snowflake.client.jdbc.internal.io.netty.util.internal.OutOfDirectMemoryError: failed to allocate xxx byte(s) of direct memory (used: yyy, max: zzz)
The issue described here occurs when the Apache Arrow library attempts to allocate an off-heap direct buffer of 16777216 bytes (~16 MB). The allocation fails because the application process has already consumed the maximum direct memory size…
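One commonly suggested mitigation, sketched here with hedging: switch the session's result format from Arrow to JSON so the driver avoids the large off-heap Arrow buffers, or raise the JVM's direct memory ceiling with -XX:MaxDirectMemorySize:

    -- Avoid Arrow's direct-memory buffers for this session's result sets.
    ALTER SESSION SET JDBC_QUERY_RESULT_FORMAT = 'JSON';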