SQL data types: Changes to maximum length, output, and error messages¶
Attention
This behavior change is in the 2024_08 bundle.
For the current status of the bundle, refer to Bundle History.
With this behavior change, compiled SQL expressions and some error messages behave as follows:
- Before the change:
  - In compiled SQL expressions and error messages, Snowflake explicitly specified the length of the data type (for example, VARCHAR(16777216)).
  - When loading objects larger than 16 MB, an error related to parsing or processing a large string or file was returned (for example, 100069 (22P02): Error parsing JSON: document is too large, max size 16777216 bytes).
- After the change:
  - In compiled SQL expressions and error messages, Snowflake omits the length of the data type (for example, VARCHAR).
  - When loading objects larger than 16 MB, an error related to storing a large object is returned (for example, 100082 (22000): Max LOB size (16777216) exceeded, actual size of parsed column is <actual_size>).
In the past, an error occurred when you attempted to query an object larger than 16 MB (8 MB for BINARY, GEOMETRY, and GEOGRAPHY) on a stage. You can now read and process objects up to 128 MB in size.
Note
With the 9.17 release, you can now also store objects larger than 16 MB in a column. For more information, see Size limits for database objects.
The new size limit isn’t explicitly exposed in SQL query output or metadata. However, you can observe the increased limit indirectly by creating or reading larger objects without storing them. Enabling this feature introduces the following behavior changes:
- VARCHAR and BINARY types appear without length in the output of GET_DDL, SHOW, and DESCRIBE commands for column expressions, UDFs, and stored procedures. For example, VARCHAR is shown instead of VARCHAR(16777216). This change applies only to newly created objects where you haven’t explicitly specified the length in the DDL statement; it doesn’t apply to existing objects.
- Some statements that previously failed with a maximum size exceeded (or similar) error now succeed. Statements that only load or create a large value, but never store or return it, now succeed.
- Some statements that previously failed with a maximum size exceeded (or similar) error keep failing, but with a different error code or message. The new error code and message are still related to exceeding the 16 MB limit, but the error can originate from a different part of the execution plan. For example, cannot load value might change to cannot store value or cannot output value.
The first change affects all customers. The second and third changes affect customers who try to load or generate objects larger than 16 MB.
Important
We strongly advise against creating logic that depends on error messages associated with objects larger than 16 MB. Instead, you can build logic that uses the BIT_LENGTH function to check the size of the value.
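For example, a size guard can filter out oversized values before they are stored or returned (the table and column names here are illustrative):

```sql
-- Check value size with BIT_LENGTH instead of matching on error text.
-- BIT_LENGTH returns the size in bits, so 16 MB = 16777216 * 8 bits.
SELECT v
FROM my_table                          -- hypothetical table
WHERE BIT_LENGTH(v) <= 16777216 * 8;   -- keep only values that fit in 16 MB
```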
Changes in metadata¶
For the types of operations described in the following sections, there are behavior changes in the metadata of the result set.
Note
This list is not exhaustive.
Returning metadata for UDFs¶
For new user-defined functions (UDFs) that use VARCHAR or BINARY values as input or output, changes in the metadata for DDL statements related to UDFs affect the output returned when you call the GET_DDL function, run the DESCRIBE FUNCTION statement, or query the event table. The following example creates a UDF:
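For instance, a UDF of this kind might look as follows (the function name and body are illustrative):

```sql
-- A SQL UDF that accepts and returns VARCHAR without an explicit length.
CREATE OR REPLACE FUNCTION append_suffix(x VARCHAR)
  RETURNS VARCHAR
  AS $$ x || '_suffix' $$;
```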
GET_DDL¶
The metadata returned from a GET_DDL function call changes in the following way:
- Metadata before the change:
- Metadata after the change:
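A sketch of the difference, assuming a UDF named append_suffix that takes and returns VARCHAR without an explicit length (output abbreviated; only the length suffix changes):

```sql
SELECT GET_DDL('FUNCTION', 'append_suffix(VARCHAR)');
-- Before: the length is spelled out:
--   CREATE OR REPLACE FUNCTION "APPEND_SUFFIX"("X" VARCHAR(16777216))
--   RETURNS VARCHAR(16777216) ...
-- After: the length is omitted:
--   CREATE OR REPLACE FUNCTION "APPEND_SUFFIX"("X" VARCHAR)
--   RETURNS VARCHAR ...
```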
DESCRIBE FUNCTION¶
The metadata returned for a DESCRIBE FUNCTION statement changes in the following way:
- Metadata before the change:
- Metadata after the change:
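A sketch of the difference, assuming a hypothetical UDF append_suffix (rows abbreviated; only the length suffix changes):

```sql
DESCRIBE FUNCTION append_suffix(VARCHAR);
-- Before: the signature and returns rows include the length:
--   signature | (X VARCHAR(16777216))
--   returns   | VARCHAR(16777216)
-- After: the length is omitted:
--   signature | (X VARCHAR)
--   returns   | VARCHAR
```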
Event table¶
For new user-defined functions that return VARCHAR or BINARY values as output, the snow.executable.name attribute in
the RESOURCE_ATTRIBUTES column of the
event table changes as follows:
- Metadata before the change:
- Metadata after the change:
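An illustrative value of the attribute, assuming a hypothetical UDF APPEND_SUFFIX (only the return type's length suffix changes):

```sql
-- Before: the return type carries the length:
--   "snow.executable.name": "APPEND_SUFFIX(X VARCHAR):VARCHAR(16777216)"
-- After: the length is omitted:
--   "snow.executable.name": "APPEND_SUFFIX(X VARCHAR):VARCHAR"
```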
Returning metadata for tables with column expressions¶
For new tables that use VARCHAR or BINARY in column expressions, changes in the metadata for DDL statements related to these columns affect the output returned when you call the GET_DDL function.
The following example creates a table with column expression:
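For instance, a table whose column type is derived from an expression rather than declared with an explicit length (names illustrative):

```sql
-- The column type is inferred from the CAST expression, not declared.
CREATE OR REPLACE TABLE demo_expr AS
  SELECT CAST('a' AS VARCHAR) AS c1;
```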
The metadata returned from a GET_DDL function call changes in the following way:
- Metadata before the change:
- Metadata after the change:
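A sketch of the difference, assuming a table demo_expr with a VARCHAR column created without an explicit length (output abbreviated):

```sql
SELECT GET_DDL('TABLE', 'demo_expr');
-- Before: the derived length is shown:
--   create or replace TABLE DEMO_EXPR ( C1 VARCHAR(16777216) );
-- After: the length is omitted:
--   create or replace TABLE DEMO_EXPR ( C1 VARCHAR );
```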
External tables¶
The following example creates an external table:
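An external table of this kind might be created as follows (the stage name and file format are assumptions):

```sql
-- External table with a VARCHAR column expression over the staged data.
CREATE OR REPLACE EXTERNAL TABLE demo_ext (
  c1 VARCHAR AS (VALUE:c1::VARCHAR)   -- column expression without an explicit length
)
LOCATION = @my_stage                  -- hypothetical stage
FILE_FORMAT = (TYPE = JSON);
```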
The metadata returned from a GET_DDL function call changes in the following way:
- Metadata before the change:
- Metadata after the change:
Returning metadata for SYSTEM$TYPEOF¶
The metadata returned for a call to the SYSTEM$TYPEOF function changes in the following way:
- Metadata before the change:
- Metadata after the change:
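A sketch of the difference (the output shape is illustrative; only the length suffix changes):

```sql
SELECT SYSTEM$TYPEOF(CAST('a' AS VARCHAR));
-- Before: the reported type includes the maximum length:
--   VARCHAR(16777216)[LOB]
-- After: the length is omitted:
--   VARCHAR[LOB]
```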
Returning metadata for SHOW COLUMNS¶
This change affects both existing and new tables. The metadata returned for a SHOW COLUMNS statement changes in the following way:
- Metadata before the change:
- Metadata after the change:
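A sketch of the difference in the data_type column (table name and exact values are illustrative):

```sql
SHOW COLUMNS IN TABLE demo_expr;
-- Before: the data_type column reports the old maximum length:
--   {"type":"TEXT","length":16777216, ...}
-- After: the reported length reflects the increased limit, for example:
--   {"type":"TEXT","length":134217728, ...}
```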
Changes in loading and processing objects larger than 16 MB¶
There are behavior changes that affect cases when you try to load or process objects larger than 16 MB using the following types of operations:
Note
This list is not exhaustive.
Loading data by scanning files on a stage¶
When you attempt to load data larger than 16 MB by scanning files on a stage, an error message is returned.
- Loading a whole large object using COPY INTO <table_name> … FROM SELECT
- Loading a whole large object using COPY INTO <table_name> … FROM <stage_or_location>
Loading a whole large object using CREATE TABLE AS SELECT¶
A different error message appears when you try to use a CREATE TABLE AS SELECT statement to load objects that are larger than 16 MB for VARCHAR, VARIANT, OBJECT, and ARRAY (or larger than 8 MB for BINARY, GEOMETRY, or GEOGRAPHY). The error depends on the type of the source. The same message change applies when an INSERT INTO SELECT statement is used for this scenario.
Loading a whole large object from a JSON source¶
The following example tries to load a whole object larger than 16 MB from a JSON source using CREATE TABLE AS SELECT:
- Error message before the change:
- Error message after the change:
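A sketch of such a statement (the stage, file, and file format names are assumptions; the error messages follow the before/after pattern quoted at the top of this page):

```sql
-- Attempt to materialize a whole JSON document larger than 16 MB.
CREATE OR REPLACE TABLE big_json AS
  SELECT $1 AS v
  FROM @my_stage/large_file.json (FILE_FORMAT => 'my_json_format');
-- Before: 100069 (22P02): Error parsing JSON: document is too large,
--         max size 16777216 bytes
-- After:  100082 (22000): Max LOB size (16777216) exceeded, actual size
--         of parsed column is <actual_size>
```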
Loading a whole large object from an XML source¶
The following example tries to load a whole object larger than 16 MB from an XML source using CREATE TABLE AS SELECT:
- Error message before the change:
- Error message after the change:
Loading a whole large object using COPY INTO <table_name> … FROM SELECT¶
A different error message appears when you try to use a COPY INTO <table_name> … FROM SELECT statement to load objects that are larger than 16 MB for VARCHAR, VARIANT, OBJECT, and ARRAY (or larger than 8 MB for BINARY, GEOMETRY, or GEOGRAPHY). The error depends on the type of the source.
Important
If you attempt to load data that contains objects larger than 16 MB using the COPY INTO command with ON_ERROR=CONTINUE and rely on the error messages written in the error log, the changed message could break logic that depends on it.
Loading a whole large object from a JSON source¶
The following example tries to load a whole object larger than 16 MB from a JSON source using COPY INTO <table_name> … FROM SELECT:
- Error message before the change:
- Error message after the change:
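A sketch of such a statement (the table, stage, and file format names are assumptions; the error messages follow the before/after pattern quoted at the top of this page):

```sql
-- Load the whole parsed JSON document into a single column.
COPY INTO big_json
  FROM (SELECT $1 FROM @my_stage/large_file.json)
  FILE_FORMAT = (FORMAT_NAME = 'my_json_format');
-- Before: an error related to parsing a large document
--         (100069: Error parsing JSON: document is too large, ...)
-- After:  an error related to storing a large object
--         (100082: Max LOB size (16777216) exceeded, ...)
```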
Loading large nested objects from a JSON source¶
The following example tries to load JSON data when accessing large nested objects:
- Error message before the change:
- Error message after the change:
Loading a whole large object from an XML source¶
The following example tries to load a whole object larger than 16 MB from an XML source using COPY INTO <table_name> … FROM SELECT:
- Error message before the change:
- Error message after the change:
Loading a whole large object using COPY INTO <table_name> … FROM <stage_or_location> ¶
A different error message appears when you try to use a COPY INTO <table_name> … FROM <stage_or_location> statement to load objects that are larger than 16 MB for VARCHAR, VARIANT, OBJECT, and ARRAY (or larger than 8 MB for BINARY, GEOMETRY, or GEOGRAPHY). The error depends on the type of the source.
If you use the COPY command with large objects, queries might fail even when the ON_ERROR parameter is
set to CONTINUE. For more information, see the usage notes for the COPY command.
Important
If you attempt to load data that contains objects larger than 16 MB using the COPY INTO command with ON_ERROR=CONTINUE and rely on the error messages written in the error log, the changed message could break logic that depends on it.
Loading a whole large object from a JSON source¶
The following example tries to load a whole object larger than 16 MB from a JSON source using COPY INTO <table_name> … FROM <stage_or_location>:
- Error message before the change:
- Error message after the change:
Loading a whole large object from an XML source¶
The following example tries to load a whole object larger than 16 MB from an XML source using COPY INTO <table_name> … FROM <stage_or_location>:
- Error message before the change:
- Error message after the change:
Querying a whole large object from a source file¶
Because objects larger than 16 MB aren’t currently allowed in a result set, a different error message appears when you try to query objects from a source file that are larger than 16 MB for VARCHAR, VARIANT, OBJECT, and ARRAY (or larger than 8 MB for BINARY, GEOMETRY, or GEOGRAPHY). The error depends on the type of the source.
Querying a whole large object from a JSON source¶
The following example tries to query a whole object larger than 16 MB from a JSON source:
- Error message before the change:
- Error message after the change:
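A sketch of such a query (the stage, file, and file format names are assumptions):

```sql
-- Query, but don't store, a whole JSON document from a stage.
SELECT $1
FROM @my_stage/large_file.json (FILE_FORMAT => 'my_json_format');
-- Before: the query failed while parsing the large document.
-- After:  parsing succeeds, but the query fails when the value larger
--         than 16 MB would be returned in the result set.
```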
Querying a whole large object from an XML source¶
The following example tries to query a whole object larger than 16 MB from an XML source:
- Error message before the change:
- Error message after the change:
Querying a whole large object from a CSV source¶
The following example tries to query a whole object larger than 16 MB from a CSV source:
- Error message before the change:
- Error message after the change:
Querying a whole large object from a Parquet source¶
The following example tries to query a whole object larger than 16 MB from a Parquet source:
- Error message before the change:
- Error message after the change:
Including large objects in query results¶
You can now create objects larger than 16 MB in memory. However, you cannot include these objects in query results or store them in a table. When you attempt to do so, an error message is returned.
Attempting to include an object larger than 16 MB in query results¶
The following query attempts to concatenate two large strings:
- Error message before the change:
- Error message after the change:
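A sketch of such a query (the table and column names are assumptions):

```sql
-- Concatenate two large strings; the result exceeds 16 MB and
-- cannot be returned in the result set.
SELECT large_col_1 || large_col_2 AS too_big
FROM my_large_table;   -- hypothetical table with ~16 MB strings
```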
Attempting to store an object larger than 16 MB in a table¶
The following CREATE TABLE AS SELECT statement attempts to concatenate two large strings:
- Error message before the change:
- Error message after the change:
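A sketch of such a statement (the table and column names are assumptions):

```sql
-- The concatenated value exceeds 16 MB and cannot be stored in the
-- new table, so the statement fails.
CREATE OR REPLACE TABLE too_big AS
  SELECT large_col_1 || large_col_2 AS c
  FROM my_large_table;   -- hypothetical table with ~16 MB strings
```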
Creating a large object using aggregation¶
When you try to create an object larger than 16 MB and return it as output, an error message is returned.
The following example uses the ARRAY_AGG function in a query of a large object column:
- Error message before the change:
- Error message after the change:
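A sketch of such a query (the table and column names are assumptions):

```sql
-- Aggregate many large values into a single array; the aggregate
-- exceeds 16 MB and cannot be returned in the result set.
SELECT ARRAY_AGG(large_col) AS agg
FROM my_large_table;   -- hypothetical table with large VARCHAR values
```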
Ref: 1779