Data types for Apache Iceberg™ tables¶
Snowflake supports most of the data types defined by the Apache Iceberg™ specification, and writes Iceberg data types to table files so that your Iceberg tables remain interoperable across different compute engines when you use Snowflake as the catalog.
For an overview of the Iceberg data types that Snowflake supports, see Supported data types.
Approximate types¶
If your table uses an Iceberg data type that Snowflake doesn't have an exact match for, Snowflake uses an approximate Snowflake type. This type mapping affects column values for converted tables and for Iceberg tables that use Snowflake as the catalog.

For example, consider a table with a column of Iceberg type `int`. Snowflake processes the column values using the Snowflake data type NUMBER(10,0). NUMBER(10,0) has a range of (-9,999,999,999, +9,999,999,999), but `int` has a more limited range of (-2,147,483,648, +2,147,483,647). If you try to insert a value of 3,000,000,000 into that column, Snowflake returns an out-of-range error message.
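A quick way to see why the error occurs is to compare the two ranges directly. This Python sketch (illustrative only, not Snowflake code) checks the example value against both:

```python
# Ranges from the example above: Snowflake NUMBER(10,0) vs. Iceberg int.
INT32_MIN, INT32_MAX = -2_147_483_648, 2_147_483_647      # Iceberg int
NUM10_MIN, NUM10_MAX = -9_999_999_999, 9_999_999_999      # NUMBER(10,0)

def fits_iceberg_int(v: int) -> bool:
    return INT32_MIN <= v <= INT32_MAX

def fits_number_10_0(v: int) -> bool:
    return NUM10_MIN <= v <= NUM10_MAX

v = 3_000_000_000
# v fits NUMBER(10,0) but not Iceberg int, so the insert is rejected.
print(fits_number_10_0(v), fits_iceberg_int(v))  # True False
```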
For details about approximate types, see the notes in the Supported data types table.
Supported data types¶
The tables in this section show the relationship between Iceberg data types and Snowflake data types. They use the following columns:
- Iceberg type: The data type defined in the Apache Iceberg specification. When you use Snowflake as the catalog, Snowflake writes the Iceberg type to your table data files so that your tables remain interoperable across different compute engines.
- Snowflake type: The Snowflake data type that is used to process and return table data. For example, if your schema specifies the Iceberg type `timestamp`, Snowflake processes and returns values using the Snowflake data type TIMESTAMP_NTZ(6) with microsecond precision.
- Notes: Additional usage notes, including notes for working with approximate types.
Numeric types¶
Snowflake as the catalog¶
The following table shows how Iceberg numeric data types map to Snowflake numeric data types for tables that use Snowflake as the Iceberg catalog (Snowflake-managed tables). When you create a Snowflake-managed Iceberg table, you can use Iceberg data types to define numeric columns.
| Iceberg data type | Snowflake data type | Notes |
|---|---|---|
| `int` | NUMBER(10,0) | Inserting a 10-digit number smaller than the minimum or larger than the maximum 32-bit signed integer value results in an out-of-range error. |
| `long` | NUMBER(19,0) | Inserting a 19-digit number smaller than the minimum or larger than the maximum 64-bit signed integer value results in an out-of-range error. |
| `float` | FLOAT | Synonymous with the Snowflake DOUBLE data type. Snowflake treats all floating-point numbers as double-precision 64-bit floating-point numbers, but writes Iceberg floats as 32-bit floating-point numbers in table data files. Narrowing conversions from 64 bits to 32 bits result in precision loss. You can't use `float` as a primary key (in accordance with the Apache Iceberg spec). |
| `double` | FLOAT | Synonymous with the Snowflake DOUBLE data type. Snowflake treats all floating-point numbers as double-precision 64-bit floating-point numbers. Narrowing conversions from 64 bits to 32 bits result in precision loss. You can't use `double` as a primary key (in accordance with the Apache Iceberg spec). |
| `decimal(P,S)` | NUMBER(P,S) | Specifying a precision greater than 38 isn't supported. |
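The precision loss from the 64-to-32-bit narrowing described for `float` can be reproduced outside Snowflake. This Python sketch (illustrative only, not Snowflake code) round-trips a value through a 32-bit representation using the standard `struct` module:

```python
import struct

def as_float32(x: float) -> float:
    """Round-trip a Python float (64-bit) through 32 bits, mimicking the
    narrowing that occurs when a value is written to an Iceberg float."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

print(as_float32(0.5) == 0.5)  # True: 0.5 is exactly representable in 32 bits
print(as_float32(0.1) == 0.1)  # False: 0.1 picks up rounding error at 32 bits
```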
External catalog¶
When you create an Iceberg table that uses an external Iceberg catalog, Iceberg numeric types are mapped to Snowflake numeric types according to the following table.
| Iceberg data type | Snowflake data type |
|---|---|
| `int` | NUMBER(10,0) |
| `long` | NUMBER(19,0) |
| `float` | FLOAT |
| `double` | FLOAT |
| `decimal(P,S)` | NUMBER(P,S) |
Note

You can't use `float` or `double` as primary keys (in accordance with the Apache Iceberg spec).
Other data types¶
| Iceberg data type | Snowflake data type | Notes |
|---|---|---|
| `boolean` | BOOLEAN | |
| `string` | VARCHAR | |
| `time` | TIME(6) | Microsecond precision per the Apache Iceberg table specification. |
| `timestamp` | TIMESTAMP_NTZ(6) or TIMESTAMP_LTZ(6), depending on the value of the Snowflake parameter TIMESTAMP_TYPE_MAPPING | Microsecond precision per the Apache Iceberg table specification. You can also use the Parquet physical type `INT96`. |
| `timestamptz` | TIMESTAMP_LTZ(6) | Microsecond precision per the Apache Iceberg table specification. You can also use the Parquet physical type `INT96`. |
| `binary` | BINARY | |
| `fixed(L)` | BINARY | The `fixed(L)` type isn't supported for tables that use Snowflake as the catalog. When you use an external catalog or create a table from files in object storage, Snowflake maps the `fixed(L)` type to the Snowflake BINARY data type. |
| `struct` | OBJECT (structured) | Structured type columns support a maximum of 1000 sub-columns. |
| `list` | ARRAY (structured) | Structured type columns support a maximum of 1000 sub-columns. |
| `map` | MAP (structured) | Structured type columns support a maximum of 1000 sub-columns. |
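Because Iceberg time and timestamp values carry at most microsecond precision, anything finer has to be truncated before it can land in one of these columns. A small Python sketch (illustrative only; the nanosecond value is hypothetical) of truncating a nanosecond epoch value to microseconds:

```python
from datetime import datetime, timezone

# Hypothetical epoch value in nanoseconds; Iceberg timestamps store
# microseconds, so the final three digits must be dropped.
nanos = 1_700_000_000_123_456_789

micros, dropped_ns = divmod(nanos, 1_000)      # truncate to microseconds
secs, frac_micros = divmod(micros, 1_000_000)  # split seconds / fraction

ts = datetime.fromtimestamp(secs, tz=timezone.utc).replace(microsecond=frac_micros)
print(ts.isoformat(), f"(dropped {dropped_ns} ns)")
```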
Delta data types¶
The following table shows how Delta data types map to Snowflake data types for Iceberg tables created from Delta table files.
| Delta type | Snowflake data type |
|---|---|
| BYTE | NUMBER(3,0) |
| SHORT | NUMBER(5,0) |
| INTEGER | NUMBER(10,0) |
| LONG | NUMBER(20,0) |
| FLOAT | REAL |
| DOUBLE | REAL |
| TIMESTAMP | TIMESTAMP_LTZ(6) |
| TIMESTAMP_NTZ | TIMESTAMP_NTZ(6) |
| BINARY | BINARY |
| STRING | TEXT |
| BOOLEAN | BOOLEAN |
| DECIMAL(P,S) | NUMBER(P,S) |
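The mapping above can be restated as a small lookup, which can be handy for pre-validating a Delta schema before conversion. This is a sketch; the function name is illustrative and not part of any Snowflake API:

```python
# Restates the Delta -> Snowflake type mapping from the table above.
DELTA_TO_SNOWFLAKE = {
    "BYTE": "NUMBER(3,0)",
    "SHORT": "NUMBER(5,0)",
    "INTEGER": "NUMBER(10,0)",
    "LONG": "NUMBER(20,0)",
    "FLOAT": "REAL",
    "DOUBLE": "REAL",
    "TIMESTAMP": "TIMESTAMP_LTZ(6)",
    "TIMESTAMP_NTZ": "TIMESTAMP_NTZ(6)",
    "BINARY": "BINARY",
    "STRING": "TEXT",
    "BOOLEAN": "BOOLEAN",
}

def snowflake_type_for(delta_type: str) -> str:
    """Return the Snowflake type for a Delta type. DECIMAL(P,S) is handled
    generically because it maps positionally to NUMBER(P,S)."""
    if delta_type.startswith("DECIMAL(") and delta_type.endswith(")"):
        return "NUMBER(" + delta_type[len("DECIMAL("):]
    return DELTA_TO_SNOWFLAKE[delta_type]

print(snowflake_type_for("LONG"))           # NUMBER(20,0)
print(snowflake_type_for("DECIMAL(12,2)"))  # NUMBER(12,2)
```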
The following table shows how Delta nested data types map to Snowflake data types.
| Delta nested type | Snowflake data type |
|---|---|
| STRUCT | OBJECT (structured) |
| ARRAY | ARRAY (structured) |
| MAP | MAP (structured) |
Considerations¶
Consider the following items when you work with data types for Iceberg tables:
- Converting a table with columns that use the following Iceberg data types is not supported:
  - `uuid`
  - `fixed(L)`
- For tables that use Snowflake as the catalog, creating a table that uses the Iceberg `uuid` data type is not supported.
- For all Iceberg table types:
  - Structured type columns support a maximum of 1000 sub-columns.
  - Iceberg supports microsecond precision for time and timestamp types. As a result, you can't create an Iceberg table in Snowflake that uses another precision, such as millisecond or nanosecond.
  - You can't use `float` or `double` as primary keys (in accordance with the Apache Iceberg spec).
- For tables created from Delta files, Parquet files (the data files for Delta tables) that use any of the following features or data types aren't supported:
  - Field IDs.
  - The INTERVAL data type.
  - The DECIMAL data type with precision higher than 38.
  - LIST or MAP types with one-level or two-level representation.
  - Unsigned integer types (`INT(signed = false)`).
  - The FLOAT16 data type.
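A rough pre-flight check for the Parquet limitations above can be sketched in Python. Everything here is illustrative: the schema representation and function name are assumptions, not a Snowflake or Parquet API:

```python
# Parquet features from the list above that block Delta-to-Iceberg conversion.
UNSUPPORTED_TYPES = {"INTERVAL", "FLOAT16"}

def unsupported_columns(schema):
    """schema: list of (column_name, parquet_type, precision_or_None) tuples.
    Returns the names of columns that use an unsupported type."""
    bad = []
    for name, ptype, precision in schema:
        if ptype in UNSUPPORTED_TYPES:
            bad.append(name)
        elif ptype == "DECIMAL" and precision is not None and precision > 38:
            bad.append(name)  # DECIMAL precision above 38 isn't supported
        elif ptype.startswith("INT(") and "signed = false" in ptype:
            bad.append(name)  # unsigned integer types aren't supported
    return bad

print(unsupported_columns([
    ("a", "DECIMAL", 40),
    ("b", "INT(signed = false)", None),
    ("c", "DOUBLE", None),
]))  # ['a', 'b']
```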