Snowflake supports most fundamental SQL data types (with some restrictions) across various SQL constructs, including columns, local variables, expressions, and parameters.
The INTERVAL data type is not supported in Snowflake; VARCHAR is used instead. With the --UseIntervalDatatypepreview flag, it maps to INTERVAL DAY TO SECOND. For more information, please refer to the Interval Data Types documentation.
Snowflake does not support the BIGNUMERIC data type. Use NUMERIC instead. BIGNUMERIC's precision (76.76 digits) exceeds Snowflake's maximum of 38, so values may be truncated or rounded, which can introduce significant inaccuracies.
Snowflake does not support the BIGDECIMAL data type. Use NUMERIC instead. BIGDECIMAL's precision (76.76 digits) exceeds Snowflake's maximum of 38, so values may be truncated or rounded, which can introduce significant inaccuracies.
The following is an extract of information about the usage of ANY TYPE within CREATE FUNCTION statements.
A parameter with a type equal to ANY TYPE can match more than one argument type when the function is called.
If more than one parameter has type ANY TYPE, then BigQuery doesn’t enforce any type relationship between these arguments.
The function return type cannot be ANY TYPE. It must either be omitted, in which case it is determined automatically from sql_expression, or be an explicit type.
Passing the function arguments of types that are incompatible with the function definition results in an error at call time.
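As a sketch of the rules above, consider a hypothetical BigQuery SQL UDF (the function name and expression are illustrative):

```sql
-- Both parameters accept any argument type; the return type is omitted,
-- so it is inferred from the expression x + y.
CREATE TEMP FUNCTION add_any(x ANY TYPE, y ANY TYPE)
AS (x + y);

SELECT add_any(1, 2) AS ints, add_any(1.5, 2.5) AS floats;
-- add_any('a', 1) would fail at call time, since '+' is not defined
-- between STRING and INT64.
```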
In BigQuery, an array is an ordered list of zero or more elements of non-array values. Elements in an array must share the same type. (See the BigQuery Array Type documentation.)
SELECT
   --** SSC-FDM-BQ0001 - ACCESSING ARRAYS PRODUCES NULL INSTEAD OF AN ERROR FOR POSITIVE OUT OF BOUNDS INDEXES IN SNOWFLAKE **
   col1[0] + 4 AS byIndex,
   --** SSC-FDM-BQ0001 - ACCESSING ARRAYS PRODUCES NULL INSTEAD OF AN ERROR FOR POSITIVE OUT OF BOUNDS INDEXES IN SNOWFLAKE **
   col1[0] + 4 AS byOffset,
   --** SSC-FDM-BQ0001 - ACCESSING ARRAYS PRODUCES NULL INSTEAD OF AN ERROR FOR POSITIVE OUT OF BOUNDS INDEXES IN SNOWFLAKE **
   col1[1-1] + 4 AS byOrdinal
FROM
   test.arrayTable
ORDER BY
   --** SSC-FDM-BQ0001 - ACCESSING ARRAYS PRODUCES NULL INSTEAD OF AN ERROR FOR POSITIVE OUT OF BOUNDS INDEXES IN SNOWFLAKE **
   col1[0];
SELECT
   PUBLIC.SAFE_OFFSET_UDF(col1, 0) AS byOffset,
   PUBLIC.SAFE_OFFSET_UDF(col1, -4) AS byOffsetUnderflow,
   PUBLIC.SAFE_OFFSET_UDF(col1, 500) AS byOffsetOverflow,
   PUBLIC.SAFE_OFFSET_UDF(col1, 1-1) AS byOrdinal,
   PUBLIC.SAFE_OFFSET_UDF(col1, -4-1) AS byOrdinalUnderflow,
   PUBLIC.SAFE_OFFSET_UDF(col1, 500-1) AS byOrdinalOverflow
FROM
   test.arrayTable
ORDER BY
   --** SSC-FDM-BQ0001 - ACCESSING ARRAYS PRODUCES NULL INSTEAD OF AN ERROR FOR POSITIVE OUT OF BOUNDS INDEXES IN SNOWFLAKE **
   col1[0];
INSERT INTO test.arrayTable
SELECT [4, 10];

INSERT INTO test.arrayTable (COL1)
SELECT [1, 2, 3]
UNION ALL
SELECT [4, 5, 6];

SELECT col1
FROM
   test.arrayTable
ORDER BY
   --** SSC-FDM-BQ0001 - ACCESSING ARRAYS PRODUCES NULL INSTEAD OF AN ERROR FOR POSITIVE OUT OF BOUNDS INDEXES IN SNOWFLAKE **
   col1[0],
   --** SSC-FDM-BQ0001 - ACCESSING ARRAYS PRODUCES NULL INSTEAD OF AN ERROR FOR POSITIVE OUT OF BOUNDS INDEXES IN SNOWFLAKE **
   col1[1];
MERGE INTO test.anotherArrayTable
USING test.arrayTable
ON col1[0] = col2[0]
WHEN MATCHED THEN
   UPDATE SET col2 = col1
WHEN NOT MATCHED THEN
   INSERT VALUES ([100, 100, 100])
!!!RESOLVE EWI!!! /*** SSC-EWI-0073 - PENDING FUNCTIONAL EQUIVALENCE REVIEW FOR 'MergeStatement' NODE ***/!!!;

SELECT col2
FROM
   test.anotherArrayTable
ORDER BY
   --** SSC-FDM-BQ0001 - ACCESSING ARRAYS PRODUCES NULL INSTEAD OF AN ERROR FOR POSITIVE OUT OF BOUNDS INDEXES IN SNOWFLAKE **
   col2[0];
1. Non-safe ARRAY access will not fail for positive out of bounds indexes
In BigQuery, accessing an array element by index fails for any index value that is too low (underflow) or too high (overflow) when not using SAFE_OFFSET or SAFE_ORDINAL. In Snowflake, however, errors are thrown only for underflow cases; any index that would cause an overflow error generates a NULL value instead.
When non-safe access to elements in an array is detected, SnowConvert AI generates SSC-FDM-BQ0001 to warn the user about this.
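For reference, a hypothetical BigQuery source that would trigger the SSC-FDM-BQ0001 warning might look like this (table and column names are illustrative):

```sql
-- Non-safe array access: in BigQuery these fail on out-of-bounds indexes;
-- the Snowflake translation returns NULL for positive out-of-bounds indexes.
SELECT
   col1[0] + 4 AS byIndex,
   col1[OFFSET(0)] + 4 AS byOffset,
   col1[ORDINAL(1)] + 4 AS byOrdinal
FROM test.arrayTable
ORDER BY col1[OFFSET(0)];
```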
Sequence of bytes with a maximum of L bytes allowed in the binary string. The maximum length is 8 MB (8,388,608 bytes). For more information please refer to BigQuery BYTES data type.
Note
The BYTES data type is not supported in Snowflake; it is currently transformed to BINARY.
The default output format for binary data types is 'BASE64' in BigQuery and 'HEX' in Snowflake. For this reason, when a binary column is selected, the BASE64_ENCODE function is automatically added in order to maintain BigQuery's default formatting.
If it is not added automatically and you want to view the data in BASE64 format, you can use the BASE64_ENCODE function or set the BINARY_OUTPUT_FORMAT session parameter.
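Both options can be sketched as follows (the table and column names are illustrative):

```sql
-- Option 1: encode a single column explicitly.
SELECT BASE64_ENCODE(col1) FROM test.bytesTable;

-- Option 2: change the session-wide output format instead.
ALTER SESSION SET BINARY_OUTPUT_FORMAT = 'BASE64';
SELECT col1 FROM test.bytesTable;
```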
The following cases represent the forms that can be used to format byte literals in BigQuery.
B"abc"
B'''abc'''
b"""abc"""
These literals are not supported in Snowflake; instead, the TRY_TO_BINARY function can be used to convert the input expression to a binary value. This function is a special version of TO_BINARY that performs the same operation but with error-handling support.
Note that the binary format for the conversion can be HEX, BASE64, or UTF-8. The default is the value of the BINARY_INPUT_FORMAT session parameter; if this parameter is not set, the default is HEX.
Please keep in mind that the default output format for binary data types in BigQuery is ‘BASE64’ and in Snowflake ‘HEX’. You can use the BASE64_ENCODE function or set the BINARY_OUTPUT_FORMAT format if you want to view the data in BASE64 format.
The only formats supported by Snowflake are HEX, BASE64, and UTF-8. For more information, please refer to Binary Input and Output in Snowflake.
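As a sketch, a byte literal such as B"abc" could be rewritten in Snowflake like this:

```sql
-- Explicitly convert the string using the UTF-8 binary format.
SELECT TRY_TO_BINARY('abc', 'UTF-8');
-- TRY_TO_BINARY returns NULL instead of raising an error when
-- the input cannot be converted.
```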
Binary functions used to insert data into a values clause are not supported in Snowflake.
A collection of points, linestrings, and polygons, which is represented as a point set, or a subset of the surface of the Earth. For more information please refer to BigQuery GEOGRAPHY data type.
The default output format for geography data types is WKT (Well-Known Text) in BigQuery and WKB (Well-Known Binary) in Snowflake. For this reason, when geography columns are selected, the ST_ASWKT function is automatically added. In addition, when all the columns of a table containing a geography column are selected, the GEOGRAPHY_OUTPUT_FORMAT parameter is set to WKT. This keeps the default BigQuery format.
CREATE OR REPLACE TABLE test.geographyType
(
   COL1 GEOGRAPHY);

INSERT INTO test.geographyType
VALUES (
   --** SSC-FDM-BQ0010 - THE FUNCTION 'ST_GEOGFROMTEXT' IS NOT REQUIRED IN SNOWFLAKE. **
   'POINT(-122.35 37.55)'),
   (
   --** SSC-FDM-BQ0010 - THE FUNCTION 'ST_GEOGFROMTEXT' IS NOT REQUIRED IN SNOWFLAKE. **
   'LINESTRING(-124.20 42.00, -120.01 41.99)');

SELECT
   ST_ASWKT(COL1)
FROM
   test.geographyType;

ALTER SESSION SET GEOGRAPHY_OUTPUT_FORMAT = 'WKT';

SELECT * FROM
   test.geographyType;
If it is not added automatically and you want to view the data in WKT format, you can use the ST_ASWKT function or set the GEOGRAPHY_OUTPUT_FORMAT session parameter.
Please keep in mind that the default output format for geography data types is WKT (Well-Known Text) in BigQuery and WKB (Well-Known Binary) in Snowflake. You can use the ST_ASWKT function or set the GEOGRAPHY_OUTPUT_FORMAT parameter if you want to view the data in WKT format.
Geography functions used to insert data into a values clause are not needed in Snowflake.
An INTERVAL object represents a duration or amount of time, without referring to any specific point in time. By default, it is transformed to VARCHAR because Snowflake historically did not support a stored INTERVAL type (BigQuery Language Reference INTERVAL Data Type).
Note
Preview Feature: When the --UseIntervalDatatypepreview flag is enabled, BigQuery INTERVAL columns are preserved as native Snowflake INTERVAL DAY TO SECOND types instead of being converted to VARCHAR. Interval literals and expressions are also normalized to Snowflake-compatible syntax. See the Interval Data Types translation reference for complete transformation details.
Syntax
INTERVAL int64_expression datetime_part
INTERVAL datetime_parts_string starting_datetime_part TO ending_datetime_part
SELECT
   !!!RESOLVE EWI!!! /*** SSC-EWI-0107 - INTERVAL LITERAL IS NOT SUPPORTED BY SNOWFLAKE IN THIS SCENARIO ***/!!!
   INTERVAL 1 YEAR;

SELECT
   CURRENT_DATE() + INTERVAL '1 year',
   CURRENT_DATE() + INTERVAL '1 quarter',
   CURRENT_DATE() + INTERVAL '1 month',
   CURRENT_DATE() + INTERVAL '1 week',
   CURRENT_DATE() + INTERVAL '1 day',
   CURRENT_DATE() + INTERVAL '1 hour',
   CURRENT_DATE() + INTERVAL '1 minute',
   CURRENT_DATE() + INTERVAL '1 second';
Snowflake does not support querying an interval value directly. However, when an interval is used as an operand on a given date, the translation is done using an interval constant (when possible).
The interval value is transformed to a supported Snowflake format and inserted as text inside the column. Since Snowflake does not support INTERVAL as a data type, intervals are supported only in arithmetic operations. To use the value, it must be extracted and applied as an interval constant (when possible).
Because Snowflake supports intervals only in arithmetic operations, the CURRENT_TIMESTAMP function is added to each operand so that the comparison is evaluated correctly.
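The pattern can be sketched as follows; the specific literals are illustrative, and the exact output of the converter may differ:

```sql
-- BigQuery: SELECT INTERVAL 2 DAY > INTERVAL 1 DAY;
-- Snowflake sketch: anchor both intervals to the same timestamp so the
-- comparison becomes a supported timestamp comparison.
SELECT CURRENT_TIMESTAMP() + INTERVAL '2 days' > CURRENT_TIMESTAMP() + INTERVAL '1 day';
```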
Snowflake intervals have several limitations: only arithmetic operations between DATE or TIMESTAMP values and interval constants are supported; every other scenario is unsupported.
In BigQuery, when the year-month substring of an interval literal is preceded by a sign (+ or -), the sign applies to both the year and the month. The time substring works the same way: its sign applies to the hour, minute, and second. An example of this is shown below.
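A sketch of this behavior, based on the description above (the literal values are illustrative):

```sql
-- The leading '-' on '2-3' negates both the year and the month;
-- the '-' on '10:20:30' negates the hour, minute, and second.
SELECT INTERVAL '-2-3 0 -10:20:30' YEAR TO SECOND;
-- year: -2, month: -3, day: 0, hour: -10, minute: -20, second: -30
```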
JSON literals are not supported in Snowflake; instead, the PARSE_JSON function can be used to convert the input expression to a JSON value. Note that this function cannot be used in a VALUES clause in Snowflake, so the statement is transformed into a subquery.
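A sketch of this transformation (the table and column names are illustrative):

```sql
-- PARSE_JSON cannot appear directly in a VALUES clause,
-- so the INSERT is rewritten to use a SELECT instead.
INSERT INTO test.jsonTable (col1)
SELECT PARSE_JSON('{"name": "John", "age": 30}');
```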
In BigQuery, a container of ordered fields each with a type (required) and field name (optional). See Struct Type.
In Snowflake, OBJECT_CONSTRUCT can be used to emulate the STRUCT behavior, and SnowConvert AI handles most implementation differences.
Note
Arguments that represent keys within the OBJECT_CONSTRUCT must be the original names of the target STRUCT. Any name specified within a STRUCT expression body will be replaced with the name found in the target STRUCT. Most of the data pattern examples below contain an example of a name that is replaced by the target name.
CREATE OR REPLACE TABLE test.structTypes
(
   COL1 VARIANT /*** SSC-FDM-BQ0009 - STRUCT<INT> CONVERTED TO VARIANT. SOME OF ITS USAGES MIGHT HAVE FUNCTIONAL DIFFERENCES. ***/,
   COL2 VARIANT /*** SSC-FDM-BQ0009 - STRUCT<STRING(10)> CONVERTED TO VARIANT. SOME OF ITS USAGES MIGHT HAVE FUNCTIONAL DIFFERENCES. ***/,
   COL3 VARIANT /*** SSC-FDM-BQ0009 - STRUCT<STRUCT<INT64, INT64>> CONVERTED TO VARIANT. SOME OF ITS USAGES MIGHT HAVE FUNCTIONAL DIFFERENCES. ***/,
   COL4 VARIANT /*** SSC-FDM-BQ0009 - STRUCT<> CONVERTED TO VARIANT. SOME OF ITS USAGES MIGHT HAVE FUNCTIONAL DIFFERENCES. ***/,
   COL5 VARIANT /*** SSC-FDM-BQ0009 - STRUCT<INT, INT> CONVERTED TO VARIANT. SOME OF ITS USAGES MIGHT HAVE FUNCTIONAL DIFFERENCES. ***/,
   COL7 VARIANT /*** SSC-FDM-BQ0009 - STRUCT<INT, BOOLEAN> CONVERTED TO VARIANT. SOME OF ITS USAGES MIGHT HAVE FUNCTIONAL DIFFERENCES. ***/,
   COL8 VARIANT /*** SSC-FDM-BQ0009 - STRUCT<INT, BOOLEAN> CONVERTED TO VARIANT. SOME OF ITS USAGES MIGHT HAVE FUNCTIONAL DIFFERENCES. ***/);

CREATE OR REPLACE TABLE test.tuple_sample (
   COL1 VARIANT /*** SSC-FDM-BQ0009 - STRUCT<INT, INT> CONVERTED TO VARIANT. SOME OF ITS USAGES MIGHT HAVE FUNCTIONAL DIFFERENCES. ***/);

INSERT INTO test.tuple_sample
!!!RESOLVE EWI!!! /*** SSC-EWI-BQ0012 - SNOWCONVERT AI WAS UNABLE TO GENERATE A CORRECT OBJECT_CONSTRUCT PARAMETER. MISSING SYMBOL INFORMATION. ***/!!!
VALUES ((12, 34)), ((56, 78)), ((9, 99)), ((12, 35));
BigQuery comparison operations for structs compare value to value, ignoring the key if it exists, while Snowflake comparison operations for objects compare both value and key. This may cause some comparisons to return different results.
SELECT * FROM
   test.structTypes
--** SSC-FDM-BQ0008 - WHERE CLAUSE REFERENCES A COLUMN OF STRUCT TYPE. COMPARISON OPERATIONS MAY PRODUCE DIFFERENT RESULTS IN SNOWFLAKE. **
WHERE COL1 NOT IN (COL2);

SELECT * FROM
   test.structTypes
--** SSC-FDM-BQ0008 - WHERE CLAUSE REFERENCES A COLUMN OF STRUCT TYPE. COMPARISON OPERATIONS MAY PRODUCE DIFFERENT RESULTS IN SNOWFLAKE. **
WHERE COL1 <> (COL2);

SELECT * FROM
   test.structTypes
--** SSC-FDM-BQ0008 - WHERE CLAUSE REFERENCES A COLUMN OF STRUCT TYPE. COMPARISON OPERATIONS MAY PRODUCE DIFFERENT RESULTS IN SNOWFLAKE. **
WHERE COL1 != (COL2);
A timestamp value represents an absolute point in time, independent of any time zone or convention such as daylight saving time (DST), with microsecond precision. For more information please refer to BigQuery Timestamp data type.
When a time zone is defined, the CONVERT_TIMEZONE function is needed to store the data in Coordinated Universal Time (UTC). A time zone name inside a timestamp literal is also not supported by Snowflake; in that case, the same function must be used.
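A sketch of this transformation (the table name and timestamp value are illustrative):

```sql
-- BigQuery: TIMESTAMP '2008-12-25 15:30:00 America/Los_Angeles'
-- Snowflake rejects the time zone name inside the literal, so the value
-- is normalized to UTC with CONVERT_TIMEZONE before being stored.
INSERT INTO test.timestampTable
VALUES (CONVERT_TIMEZONE('America/Los_Angeles', 'UTC', '2008-12-25 15:30:00'));
```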