ALTER TABLE¶
Modifies the properties, columns, or constraints for an existing table.
- See also:
ALTER TABLE … ALTER COLUMN, CREATE TABLE, DROP TABLE, SHOW TABLES, DESCRIBE TABLE
Syntax¶
ALTER TABLE [ IF EXISTS ] <name> RENAME TO <new_table_name>
ALTER TABLE [ IF EXISTS ] <name> SWAP WITH <target_table_name>
ALTER TABLE [ IF EXISTS ] <name> { clusteringAction | tableColumnAction | constraintAction }
ALTER TABLE [ IF EXISTS ] <name> extTableColumnAction
ALTER TABLE [ IF EXISTS ] <name> searchOptimizationAction
ALTER TABLE [ IF EXISTS ] <name> SET
[ STAGE_FILE_FORMAT = ( { FORMAT_NAME = '<file_format_name>' | TYPE = { CSV | JSON | AVRO | ORC | PARQUET | XML } [ formatTypeOptions ] } ) ]
[ STAGE_COPY_OPTIONS = ( copyOptions ) ]
[ DATA_RETENTION_TIME_IN_DAYS = <integer> ]
[ MAX_DATA_EXTENSION_TIME_IN_DAYS = <integer> ]
[ CHANGE_TRACKING = { TRUE | FALSE } ]
[ DEFAULT_DDL_COLLATION = '<collation_specification>' ]
[ ENABLE_SCHEMA_EVOLUTION = { TRUE | FALSE } ]
[ COMMENT = '<string_literal>' ]
ALTER TABLE [ IF EXISTS ] <name> SET TAG <tag_name> = '<tag_value>' [ , <tag_name> = '<tag_value>' ... ]
ALTER TABLE [ IF EXISTS ] <name> UNSET TAG <tag_name> [ , <tag_name> ... ]
ALTER TABLE [ IF EXISTS ] <name> UNSET {
DATA_RETENTION_TIME_IN_DAYS |
MAX_DATA_EXTENSION_TIME_IN_DAYS |
CHANGE_TRACKING |
DEFAULT_DDL_COLLATION |
ENABLE_SCHEMA_EVOLUTION |
COMMENT
}
[ , ... ]
ALTER TABLE [ IF EXISTS ] <name>
ADD ROW ACCESS POLICY <policy_name> ON (<col_name> [ , ... ])
ALTER TABLE [ IF EXISTS ] <name>
DROP ROW ACCESS POLICY <policy_name>
ALTER TABLE [ IF EXISTS ] <name>
DROP ROW ACCESS POLICY <policy_name>,
ADD ROW ACCESS POLICY <policy_name> ON (<col_name> [ , ... ])
ALTER TABLE [ IF EXISTS ] <name> DROP ALL ROW ACCESS POLICIES
Where:
clusteringAction ::=
  {
     CLUSTER BY ( <expr> [ , <expr> , ... ] )
     /* RECLUSTER is deprecated */
   | RECLUSTER [ MAX_SIZE = <budget_in_bytes> ] [ WHERE <condition> ]
     /* { SUSPEND | RESUME } RECLUSTER is valid action */
   | { SUSPEND | RESUME } RECLUSTER
   | DROP CLUSTERING KEY
  }

tableColumnAction ::=
  {
     ADD [ COLUMN ] [ IF NOT EXISTS ] <col_name> <col_type>
        [ { DEFAULT <expr>
          | { AUTOINCREMENT | IDENTITY }
            /* AUTOINCREMENT (or IDENTITY) is supported only for columns with numeric data types (NUMBER, INT, FLOAT, etc.). */
            /* Also, if the table is not empty (i.e. if the table contains any rows), only DEFAULT can be altered. */
            [ { ( <start_num> , <step_num> ) | START <num> INCREMENT <num> } ]
            [ { ORDER | NOORDER } ]
          } ]
        [ inlineConstraint ]
        [ [ WITH ] MASKING POLICY <policy_name> [ USING ( <col1_name> , cond_col_1 , ... ) ] ]
        [ [ WITH ] TAG ( <tag_name> = '<tag_value>' [ , <tag_name> = '<tag_value>' , ... ] ) ]

   | RENAME COLUMN <col_name> TO <new_col_name>

   | { ALTER | MODIFY } [ ( ]
                            [ COLUMN ] <col1_name> DROP DEFAULT
                          , [ COLUMN ] <col1_name> SET DEFAULT <seq_name>.NEXTVAL
                          , [ COLUMN ] <col1_name> { [ SET ] NOT NULL | DROP NOT NULL }
                          , [ COLUMN ] <col1_name> [ [ SET DATA ] TYPE ] <type>
                          , [ COLUMN ] <col1_name> COMMENT '<string>'
                          , [ COLUMN ] <col1_name> UNSET COMMENT
                          [ , [ COLUMN ] <col2_name> ... ]
                          [ , ... ]
                        [ ) ]

   | { ALTER | MODIFY } COLUMN <col1_name> SET MASKING POLICY <policy_name>
                        [ USING ( <col1_name> , cond_col_1 , ... ) ] [ FORCE ]
   | { ALTER | MODIFY } COLUMN <col1_name> UNSET MASKING POLICY

   | { ALTER | MODIFY } [ COLUMN ] <col1_name> SET TAG <tag_name> = '<tag_value>' [ , <tag_name> = '<tag_value>' ... ]
                        [ , [ COLUMN ] <col2_name> SET TAG <tag_name> = '<tag_value>' [ , <tag_name> = '<tag_value>' ... ] ... ]
   | { ALTER | MODIFY } COLUMN <col1_name> UNSET TAG <tag_name> [ , <tag_name> ... ]
                        [ , COLUMN <col2_name> UNSET TAG <tag_name> [ , <tag_name> ... ] ... ]

   | DROP [ COLUMN ] [ IF EXISTS ] <col1_name> [ , <col2_name> ... ]
  }

inlineConstraint ::=
  [ NOT NULL ]
  [ CONSTRAINT <constraint_name> ]
  { UNIQUE
  | PRIMARY KEY
  | [ FOREIGN KEY ] REFERENCES <ref_table_name> [ ( <ref_col_name> ) ]
  }
  [ <constraint_properties> ]

For detailed syntax and examples for altering columns, see ALTER TABLE … ALTER COLUMN.
For detailed syntax and examples for creating/altering inline constraints, see CREATE | ALTER TABLE … CONSTRAINT.
extTableColumnAction ::=
  {
     ADD [ COLUMN ] [ IF NOT EXISTS ] <col_name> <col_type> AS ( <expr> )
   | RENAME COLUMN <col_name> TO <new_col_name>
   | DROP [ COLUMN ] [ IF EXISTS ] <col1_name> [ , <col2_name> ... ]
  }

constraintAction ::=
  {
     ADD outoflineConstraint
   | RENAME CONSTRAINT <constraint_name> TO <new_constraint_name>
   | { ALTER | MODIFY } { CONSTRAINT <constraint_name> | PRIMARY KEY | UNIQUE | FOREIGN KEY } ( <col_name> [ , ... ] )
     [ [ NOT ] ENFORCED ] [ VALIDATE | NOVALIDATE ] [ RELY | NORELY ]
   | DROP { CONSTRAINT <constraint_name> | PRIMARY KEY | UNIQUE | FOREIGN KEY } ( <col_name> [ , ... ] )
     [ CASCADE | RESTRICT ]
  }

outoflineConstraint ::=
  [ CONSTRAINT <constraint_name> ]
  { UNIQUE [ ( <col_name> [ , <col_name> , ... ] ) ]
  | PRIMARY KEY [ ( <col_name> [ , <col_name> , ... ] ) ]
  | [ FOREIGN KEY ] [ ( <col_name> [ , <col_name> , ... ] ) ]
    REFERENCES <ref_table_name> [ ( <ref_col_name> [ , <ref_col_name> , ... ] ) ]
  }
  [ <constraint_properties> ]

For detailed syntax and examples for creating/altering out-of-line constraints, see CREATE | ALTER TABLE … CONSTRAINT.
searchOptimizationAction ::=
  {
     ADD SEARCH OPTIMIZATION [ ON <search_method_with_target> [ , <search_method_with_target> ... ] ]
   | DROP SEARCH OPTIMIZATION [ ON { <search_method_with_target> | <column_name> | <expression_id> } [ , ... ] ]
  }

For details, see Search Optimization Actions (searchOptimizationAction).
formatTypeOptions ::=
-- If TYPE = CSV
     COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE
     RECORD_DELIMITER = '<character>' | NONE
     FIELD_DELIMITER = '<character>' | NONE
     FILE_EXTENSION = '<string>'
     PARSE_HEADER = TRUE | FALSE
     SKIP_HEADER = <integer>
     SKIP_BLANK_LINES = TRUE | FALSE
     DATE_FORMAT = '<string>' | AUTO
     TIME_FORMAT = '<string>' | AUTO
     TIMESTAMP_FORMAT = '<string>' | AUTO
     BINARY_FORMAT = HEX | BASE64 | UTF8
     ESCAPE = '<character>' | NONE
     ESCAPE_UNENCLOSED_FIELD = '<character>' | NONE
     TRIM_SPACE = TRUE | FALSE
     FIELD_OPTIONALLY_ENCLOSED_BY = '<character>' | NONE
     NULL_IF = ( '<string>' [ , '<string>' ... ] )
     ERROR_ON_COLUMN_COUNT_MISMATCH = TRUE | FALSE
     REPLACE_INVALID_CHARACTERS = TRUE | FALSE
     EMPTY_FIELD_AS_NULL = TRUE | FALSE
     SKIP_BYTE_ORDER_MARK = TRUE | FALSE
     ENCODING = '<string>' | UTF8
-- If TYPE = JSON
     COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE
     DATE_FORMAT = '<string>' | AUTO
     TIME_FORMAT = '<string>' | AUTO
     TIMESTAMP_FORMAT = '<string>' | AUTO
     BINARY_FORMAT = HEX | BASE64 | UTF8
     TRIM_SPACE = TRUE | FALSE
     NULL_IF = ( '<string>' [ , '<string>' ... ] )
     FILE_EXTENSION = '<string>'
     ENABLE_OCTAL = TRUE | FALSE
     ALLOW_DUPLICATE = TRUE | FALSE
     STRIP_OUTER_ARRAY = TRUE | FALSE
     STRIP_NULL_VALUES = TRUE | FALSE
     REPLACE_INVALID_CHARACTERS = TRUE | FALSE
     IGNORE_UTF8_ERRORS = TRUE | FALSE
     SKIP_BYTE_ORDER_MARK = TRUE | FALSE
-- If TYPE = AVRO
     COMPRESSION = AUTO | GZIP | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE
     TRIM_SPACE = TRUE | FALSE
     REPLACE_INVALID_CHARACTERS = TRUE | FALSE
     NULL_IF = ( '<string>' [ , '<string>' ... ] )
-- If TYPE = ORC
     TRIM_SPACE = TRUE | FALSE
     REPLACE_INVALID_CHARACTERS = TRUE | FALSE
     NULL_IF = ( '<string>' [ , '<string>' ... ] )
-- If TYPE = PARQUET
     COMPRESSION = AUTO | LZO | SNAPPY | NONE
     SNAPPY_COMPRESSION = TRUE | FALSE
     BINARY_AS_TEXT = TRUE | FALSE
     USE_LOGICAL_TYPE = TRUE | FALSE
     TRIM_SPACE = TRUE | FALSE
     REPLACE_INVALID_CHARACTERS = TRUE | FALSE
     NULL_IF = ( '<string>' [ , '<string>' ... ] )
-- If TYPE = XML
     COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE
     IGNORE_UTF8_ERRORS = TRUE | FALSE
     PRESERVE_SPACE = TRUE | FALSE
     STRIP_OUTER_ELEMENT = TRUE | FALSE
     DISABLE_SNOWFLAKE_DATA = TRUE | FALSE
     DISABLE_AUTO_CONVERT = TRUE | FALSE
     REPLACE_INVALID_CHARACTERS = TRUE | FALSE
     SKIP_BYTE_ORDER_MARK = TRUE | FALSE

copyOptions ::=
     ON_ERROR = { CONTINUE | SKIP_FILE | SKIP_FILE_<num> | 'SKIP_FILE_<num>%' | ABORT_STATEMENT }
     SIZE_LIMIT = <num>
     PURGE = TRUE | FALSE
     RETURN_FAILED_ONLY = TRUE | FALSE
     MATCH_BY_COLUMN_NAME = CASE_SENSITIVE | CASE_INSENSITIVE | NONE
     ENFORCE_LENGTH = TRUE | FALSE
     TRUNCATECOLUMNS = TRUE | FALSE
     FORCE = TRUE | FALSE
Parameters¶
name
Identifier for the table to alter. If the identifier contains spaces or special characters, the entire string must be enclosed in double quotes. Identifiers enclosed in double quotes are also case-sensitive.
RENAME TO new_table_name
Renames the specified table with a new identifier that is not currently used by any other tables in the schema.
For more details about table identifiers, see Identifier requirements.
You can move the object to a different database and/or schema while optionally renaming the object. To do so, specify a qualified new_table_name value that includes the new database and/or schema name in the form db_name.schema_name.object_name or schema_name.object_name, respectively.

Note

The destination database and/or schema must already exist. In addition, an object with the same name cannot already exist in the new location; otherwise, the statement returns an error.

Moving an object to a managed access schema is prohibited unless the object owner (i.e. the role that has the OWNERSHIP privilege on the object) also owns the target schema.

When an object (table, column, etc.) is renamed, other objects that reference it must be updated with the new name.
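For example, the following statements rename a table in place and then move it to a different schema (the table and schema names are illustrative):

ALTER TABLE t1 RENAME TO t2;
ALTER TABLE db1.schema1.t2 RENAME TO db1.schema2.t2;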
SWAP WITH target_table_name
Swaps two tables in a single transaction (effectively renaming each table to the other's name).

Note that swapping a permanent or transient table with a temporary table, which persists only for the duration of the user session in which it was created, is not allowed. This restriction prevents a naming conflict that could occur when a temporary table is swapped with a permanent or transient table, and an existing permanent or transient table has the same name as the temporary table. To swap a permanent or transient table with a temporary table, use three ALTER TABLE ... RENAME TO statements: Rename table a to c, b to a, and then c to b, as shown below.
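A minimal sketch of this workaround, assuming a permanent table a and a temporary table b:

ALTER TABLE a RENAME TO c;
ALTER TABLE b RENAME TO a;
ALTER TABLE c RENAME TO b;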
Note
To rename a table or swap two tables, the role used to perform the operation must have OWNERSHIP privileges on the table(s). In addition, renaming a table requires the CREATE TABLE privilege on the schema for the table.
SET ...
Specifies one or more properties/parameters to set for the table (separated by blank spaces, commas, or new lines):
STAGE_FILE_FORMAT = ( FORMAT_NAME = 'file_format_name' )
or STAGE_FILE_FORMAT = ( TYPE = CSV | JSON | AVRO | ORC | PARQUET | XML [ ... ] )
Modifies the default file format for the table (for data loading and unloading), which can be either:
FORMAT_NAME = file_format_name
Specifies an existing file format object to use for loading/unloading data. The specified file format object determines the format type (CSV, JSON, etc.) and other format options for data files.
Note that no additional format options are specified in the string. Instead, the named file format object defines the other file format options used for loading/unloading data. For more information, see CREATE FILE FORMAT.
TYPE = CSV | JSON | AVRO | ORC | PARQUET | XML [ ... ]
Specifies the type of files to load/unload. Additional format-specific options can be included in the string. For more details, see Format Type Options (in this topic).
Note
FORMAT_NAME and TYPE are mutually exclusive; you can only specify one or the other for a table.

STAGE_COPY_OPTIONS = ( ... )
Modifies the copy options to use when loading data from files into the table. For more details, see Copy Options (in this topic).
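For example, the following statement sets a copy option for a hypothetical table t1:

ALTER TABLE t1 SET STAGE_COPY_OPTIONS = ( ON_ERROR = SKIP_FILE );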
DATA_RETENTION_TIME_IN_DAYS = integer
Object-level parameter that modifies the retention period for the table for Time Travel. For more details, see Understanding & Using Time Travel and Working with Temporary and Transient Tables.
For a detailed description of this parameter, as well as more information about object parameters, see Parameters.
Values:

Standard Edition: 0 or 1

Enterprise Edition:

0 to 90 for permanent tables

0 or 1 for temporary and transient tables

Note

A value of 0 effectively disables Time Travel for the table.

MAX_DATA_EXTENSION_TIME_IN_DAYS = integer
Object parameter that specifies the maximum number of days for which Snowflake can extend the data retention period for the table to prevent streams on the table from becoming stale.
For a detailed description of this parameter, see MAX_DATA_EXTENSION_TIME_IN_DAYS.
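For example, the following statement sets both retention parameters for a hypothetical table t1 (the values are illustrative):

ALTER TABLE t1 SET DATA_RETENTION_TIME_IN_DAYS = 1 MAX_DATA_EXTENSION_TIME_IN_DAYS = 14;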
CHANGE_TRACKING = TRUE | FALSE
Specifies to enable or disable change tracking on the table.
TRUE enables change tracking on the table. This option adds several hidden columns to the source table and begins storing change tracking metadata in the columns. These columns consume a small amount of storage.

The change tracking metadata can be queried using the CHANGES clause for SELECT statements, or by creating and querying one or more streams on the table.

FALSE disables change tracking on the table. Associated hidden columns are dropped from the table.
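For example, to enable change tracking on a hypothetical table t1:

ALTER TABLE t1 SET CHANGE_TRACKING = TRUE;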
DEFAULT_DDL_COLLATION = 'collation_specification'
Specifies a default collation specification for any new columns added to the table.
Setting the parameter does not change the collation specification for any existing columns.
For more details about the parameter, see DEFAULT_DDL_COLLATION.
TAG tag_name = 'tag_value' [ , tag_name = 'tag_value' , ... ]
Specifies the tag name and the tag string value.
The tag value is always a string, and the maximum number of characters for the tag value is 256.
For details about specifying tags in a statement, refer to Tag quotas for objects and columns.
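For example, the following statement sets a tag on a hypothetical table t1 (the tag cost_center is assumed to already exist):

ALTER TABLE t1 SET TAG cost_center = 'sales';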
ENABLE_SCHEMA_EVOLUTION = { TRUE | FALSE }
Enables or disables automatic changes to the table schema from data loaded into the table from source files, including:
Added columns.
By default, schema evolution is limited to a maximum of 10 added columns per load operation. To request more than 10 added columns per load operation, contact Snowflake Support.
The NOT NULL constraint can be dropped from any number of columns missing in new data files.
Setting it to TRUE enables automatic table schema evolution. The default FALSE disables automatic table schema evolution.

Note

Loading data from files evolves the table columns when all of the following are true:

The COPY INTO <table> statement includes the MATCH_BY_COLUMN_NAME option.

The role used to load the data has the EVOLVE SCHEMA or OWNERSHIP privilege on the table.

Additionally, for schema evolution with CSV, when used with MATCH_BY_COLUMN_NAME and PARSE_HEADER, ERROR_ON_COLUMN_COUNT_MISMATCH must be set to false.

COMMENT = 'string_literal'
Adds a comment or overwrites the existing comment for the table.
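For example, the following statement sets two properties in a single command (the table name and comment text are illustrative):

ALTER TABLE t1 SET ENABLE_SCHEMA_EVOLUTION = TRUE COMMENT = 'staging table for raw loads';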
UNSET ...
Specifies one or more properties/parameters to unset for the table, which resets them back to their defaults:
DATA_RETENTION_TIME_IN_DAYS
MAX_DATA_EXTENSION_TIME_IN_DAYS
CHANGE_TRACKING
DEFAULT_DDL_COLLATION
ENABLE_SCHEMA_EVOLUTION
TAG tag_name [ , tag_name ... ]
COMMENT
Note

You cannot use UNSET to reset the file format and copy options. To reset these options, you must use SET.
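For example, to reset two parameters for a hypothetical table t1 back to their defaults:

ALTER TABLE t1 UNSET DATA_RETENTION_TIME_IN_DAYS, CHANGE_TRACKING;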
policy_name
Identifier for the masking policy or row access policy; must be unique for your schema.
ADD ROW ACCESS POLICY policy_name ON (col_name [ , ... ])
Adds a row access policy to a table.
At least one column name must be specified. Additional columns can be specified with a comma separating each column name. Use this expression to add a row access policy to both a table and an external table.
DROP ROW ACCESS POLICY policy_name
Drops a row access policy from a table.
Use this expression to drop a policy from both a table and an external table.
DROP ALL ROW ACCESS POLICIES
Drops all row access policy associations from a table.
This expression is helpful when a row access policy is dropped from a schema before dropping the policy from a table. Use this expression to drop row access policy associations from both a table and an external table.
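For example, assuming a row access policy named rap_t1 whose signature takes a single column, the following statements add and then drop the policy on a hypothetical table t1:

ALTER TABLE t1 ADD ROW ACCESS POLICY rap_t1 ON (region);
ALTER TABLE t1 DROP ROW ACCESS POLICY rap_t1;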
Clustering Actions (clusteringAction)¶
CLUSTER BY ( expr [ , expr , ... ] )
Specifies (or modifies) one or more table columns or column expressions as the clustering key for the table. These are the columns/expressions for which clustering is maintained by Automatic Clustering.
Important
Clustering keys are not intended or recommended for all tables; they typically benefit very large (i.e. multi-terabyte) tables.
Before you specify a clustering key for a table, please see Understanding Snowflake Table Structures.
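For example, to define a two-column clustering key on a hypothetical table t1:

ALTER TABLE t1 CLUSTER BY (c1, c2);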
RECLUSTER ...
Deprecated
Performs manual, incremental reclustering of a table that has a clustering key defined:
MAX_SIZE = budget_in_bytes
Deprecated — use a larger warehouse to achieve more effective manual reclustering
Specifies the upper-limit on the amount of data (in bytes) in the table to recluster.
WHERE condition
Specifies a condition or range on which to recluster data in the table.
Note
Only roles with the OWNERSHIP or INSERT privilege on a table can recluster the table.
SUSPEND | RESUME RECLUSTER
Enables or disables Automatic Clustering for the table.
DROP CLUSTERING KEY
Drops the clustering key for the table.
For more information about clustering keys and reclustering, see Understanding Snowflake Table Structures.
Table Column Actions (tableColumnAction)¶
ADD [ COLUMN ] [ IF NOT EXISTS ] col_name col_data_type
  [ DEFAULT | AUTOINCREMENT ... ]
  [ inlineConstraint ]
  [ [ WITH ] MASKING POLICY policy_name ]
  [ [ WITH ] TAG ( tag_name = 'tag_value' [ , tag_name = 'tag_value' , ... ] ) ] [ , ... ]
Adds a new column. You can choose to specify a default value, inline constraint, masking policy, and/or one or more tags.
This operation can be performed on multiple columns in the same command.
If you are not sure if the column already exists, you can specify IF NOT EXISTS when adding the column. If the column already exists, ADD COLUMN has no effect on the existing column and does not result in an error.
Note
You cannot specify IF NOT EXISTS if you are also specifying any of the following for the new column:
DEFAULT, AUTOINCREMENT, or IDENTITY
UNIQUE, PRIMARY KEY, or FOREIGN KEY
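For example, the following statements add columns to a hypothetical table t1; note that the second statement uses IF NOT EXISTS and therefore cannot also specify a default:

ALTER TABLE t1 ADD COLUMN c5 NUMBER DEFAULT 0;
ALTER TABLE t1 ADD COLUMN IF NOT EXISTS c6 VARCHAR;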
RENAME COLUMN col_name TO new_col_name
Renames the specified column to a new name that is not currently used for any other columns in the table.
You cannot rename a column that is part of a clustering key.
When an object (table, column, etc.) is renamed, other objects that reference it must be updated with the new name.
ALTER | MODIFY [ COLUMN ] ...
Modifies one or more properties for a column. This operation can be performed on multiple columns in the same command.
For detailed syntax and examples for altering columns, see ALTER TABLE … ALTER COLUMN.
USING ( col_name , cond_col_1 ... )
Specifies the arguments to pass into the conditional masking policy SQL expression.
The first column in the list specifies the column for the policy conditions to mask or tokenize the data and must match the column to which the masking policy is set.
The additional columns specify the columns to evaluate to determine whether to mask or tokenize the data in each row of the query result when a query is made on the first column.
If the USING clause is omitted, Snowflake treats the conditional masking policy as a normal masking policy.
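A sketch of conditional masking, assuming a masking policy ssn_mask whose signature accepts the masked column followed by one conditional column (all names here are hypothetical):

ALTER TABLE t1 MODIFY COLUMN ssn SET MASKING POLICY ssn_mask USING (ssn, visibility);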
FORCE
Replaces a masking policy that is currently set on a column with a different masking policy in a single statement.
Note that using the FORCE keyword requires the data type of the policy in the ALTER TABLE statement (i.e. STRING) to match the data type of the masking policy currently set on the column (i.e. STRING).

If a masking policy is not currently set on the column, specifying this keyword has no effect.
For details, see: Replace a masking policy on a column.
DROP COLUMN [ IF EXISTS ] col_name [ CASCADE | RESTRICT ]
Removes the specified column from the table.
If you are not sure if the column already exists, you can specify IF EXISTS when dropping the column. If the column does not exist, DROP COLUMN has no effect and does not result in an error.
Dropping a column is a metadata-only operation. It does not immediately re-write the micro-partition(s) and therefore does not immediately free up the space used by the column. Typically, the space within an individual micro-partition is freed the next time that the micro-partition is re-written, which is typically when a write is done either due to DML (INSERT, UPDATE, DELETE) or re-clustering.
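For example, to drop a column from a hypothetical table t1 without erroring if the column is already gone:

ALTER TABLE t1 DROP COLUMN IF EXISTS c5;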
External Table Column Actions (extTableColumnAction)¶
For all other external table modifications, see ALTER EXTERNAL TABLE.
ADD [ COLUMN ] [ IF NOT EXISTS ] <col_name> <col_type> AS ( <expr> ) [, ...]
Adds a new column to the external table.
If you are not sure if the column already exists, you can specify IF NOT EXISTS when adding the column. If the column already exists, ADD COLUMN has no effect on the existing column and does not result in an error.
This operation can be performed on multiple columns in the same command.
col_name
String that specifies the column identifier (i.e. name). All the requirements for table identifiers also apply to column identifiers.
For more details, see Identifier requirements.
col_type
String (constant) that specifies the data type for the column. The data type must match the result of expr for the column.

For details about the data types that can be specified for table columns, see Data Types.
expr
String that specifies the expression for the column. When queried, the column returns results derived from this expression.
External table columns are virtual columns, which are defined using an explicit expression. Add virtual columns as expressions using the VALUE column and/or the METADATA$FILENAME pseudocolumn:
- VALUE
A VARIANT type column that represents a single row in the external file.
- CSV
The VALUE column structures each row as an object with elements identified by column position (i.e. {c1: <column_1_value>, c2: <column_2_value>, c3: <column_3_value> ...}).

For example, add a VARCHAR column named mycol that references the first column in the staged CSV files:

mycol varchar as (value:c1::varchar)
- Semi-structured data
Enclose element names and values in double-quotes. Traverse the path in the VALUE column using dot notation.
For example, suppose the following represents a single row of semi-structured data in a staged file:
{ "a":"1", "b": { "c":"2", "d":"3" } }
Add a VARCHAR column named mycol that references the nested repeating c element in the staged file:

mycol varchar as (value:"b"."c"::varchar)
- METADATA$FILENAME
A pseudocolumn that identifies the name of each staged data file included in the external table, including its path in the stage.
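Putting the pieces together, a complete statement might look like the following (the external table name is hypothetical):

ALTER TABLE exttable ADD COLUMN mycol VARCHAR AS (value:c1::varchar);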
RENAME COLUMN col_name TO new_col_name
Renames the specified column to a new name that is not currently used for any other columns in the external table.
DROP COLUMN [ IF EXISTS ] col_name
Removes the specified column from the external table.
If you are not sure if the column already exists, you can specify IF EXISTS when dropping the column. If the column does not exist, DROP COLUMN has no effect and does not result in an error.
Constraint Actions (constraintAction)¶
ADD CONSTRAINT
Adds an out-of-line integrity constraint to one or more columns in the table. To add an inline constraint (for a column), see Column Actions (in this topic).
RENAME CONSTRAINT constraint_name TO new_constraint_name
Renames the specified constraint.
ALTER | MODIFY CONSTRAINT ...
Alters the properties for the specified constraint.
DROP CONSTRAINT constraint_name | PRIMARY KEY | UNIQUE | FOREIGN KEY ( col_name [ , ... ] ) [ CASCADE | RESTRICT ]
Drops the specified constraint for the specified column or set of columns.
For detailed syntax and examples for adding or altering constraints, see CREATE | ALTER TABLE … CONSTRAINT.
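For example, the following statement adds an out-of-line primary key constraint to a hypothetical table t1:

ALTER TABLE t1 ADD CONSTRAINT pk_t1 PRIMARY KEY (id);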
Search Optimization Actions (searchOptimizationAction)¶
ADD SEARCH OPTIMIZATION
Adds search optimization for the entire table or, if you specify the optional ON clause, for specific columns.
Note:
Search optimization can be expensive to maintain, especially if the data in the table changes frequently. For more information, see Search Optimization Cost Estimation and Management.
If you try to add search optimization on a materialized view, Snowflake returns an error message.
ON search_method_with_target [, search_method_with_target ... ]
Specifies that you want to configure search optimization for specific columns or VARIANT fields (rather than the entire table).
For search_method_with_target, use an expression with the following syntax:

<search_method>(<target> [, ...])
Where:
search_method specifies one of the following methods that optimizes queries for a particular type of predicate:

| Search Method | Description |
| --- | --- |
| EQUALITY | Equality and IN predicates. |
| SUBSTRING | Predicates that match substrings and regular expressions (e.g. LIKE, ILIKE, RLIKE, REGEXP_LIKE, etc.). |
| GEO | Predicates that use GEOGRAPHY types. |
target specifies the column, VARIANT field, or an asterisk (*).

Depending on the value of search_method, you can specify a column or VARIANT field of one of the following types:

EQUALITY: Columns of numerical, string, binary, and VARIANT data types, including paths to fields in VARIANTs.

To specify a VARIANT field, use dot or bracket notation (e.g. my_column:my_field_name.my_nested_field_name or my_column['my_field_name']['my_nested_field_name']). You may also use a colon-delimited path to the field (e.g. my_column:my_field_name:my_nested_field_name).

When you specify a VARIANT field, the configuration applies to all nested fields under that field. For example, if you specify ON EQUALITY(src:a.b):

This configuration can improve queries on src:a.b and on any nested fields (e.g. src:a.b.c, src:a.b.c.d, etc.).

This configuration does not affect queries that do not use the src:a.b prefix (e.g. src:a, src:z, etc.).

SUBSTRING: Columns of string or VARIANT data types, including paths to fields in VARIANTs. Specify paths to fields as described above under EQUALITY; searches on nested fields are improved in the same way.

GEO: Columns of the GEOGRAPHY data type.

To specify all applicable columns in the table as targets, use an asterisk (*).

Note that you cannot specify both an asterisk and specific column names for a given search method. However, you can specify an asterisk in different search methods.
For example, you can specify the following expressions:
-- Allowed
ON SUBSTRING(*)
ON EQUALITY(*), SUBSTRING(*), GEO(*)
You cannot specify the following expressions:
-- Not allowed
ON EQUALITY(*, c1)
ON EQUALITY(c1, *)
ON EQUALITY(v1:path, *)
ON EQUALITY(c1), EQUALITY(*)
To specify more than one search method on a target, use a comma to separate each subsequent method and target:
ALTER TABLE t1 ADD SEARCH OPTIMIZATION ON EQUALITY(c1), EQUALITY(c2, c3);
If you run the ALTER TABLE … ADD SEARCH OPTIMIZATION ON … command multiple times on the same table, each subsequent command adds to the existing configuration for the table. For example, suppose that you run the following commands:
ALTER TABLE t1 ADD SEARCH OPTIMIZATION ON EQUALITY(c1, c2);
ALTER TABLE t1 ADD SEARCH OPTIMIZATION ON EQUALITY(c3, c4);
This adds equality predicates for the columns c1, c2, c3, and c4 to the configuration for the table. This is equivalent to running the command:
ALTER TABLE t1 ADD SEARCH OPTIMIZATION ON EQUALITY(c1, c2, c3, c4);
For examples, see Enabling Search Optimization for Specific Columns.
DROP SEARCH OPTIMIZATION
Removes search optimization for the entire table or, if you specify the optional ON clause, from specific columns.
Note:
If a table has the search optimization property, then dropping the table and undropping it preserves the search optimization property.
Removing the search optimization property from a table and then adding it back incurs the same cost as adding it the first time.
ON search_method_with_target | column_name | expression_id [, ... ]
Specifies that you want to drop the search optimization configuration for specific columns or VARIANT fields (rather than dropping search optimization for the entire table).
To identify the column configuration to drop, specify one of the following:
For search_method_with_target, specify a method for optimizing queries for one or more specific targets, which can be columns or VARIANT fields. Use the syntax described earlier.

For column_name, specify the name of the column configured for search optimization. Specifying the column name drops all expressions for that column, including expressions that use VARIANT fields in the column.

For expression_id, specify the ID for an expression listed in the output of the DESCRIBE SEARCH OPTIMIZATION command.
To specify more than one of these, use a comma between items.
You can specify any combination of search methods with targets, column names, and expression IDs.
For examples, see Dropping Search Optimization for Specific Columns.
Format Type Options (formatTypeOptions)¶
Format type options are used for loading data into and unloading data out of tables.
Depending on the file format type specified (STAGE_FILE_FORMAT = ( TYPE = ... )), you can include one or more of the following format-specific options (separated by blank spaces, commas, or new lines):
TYPE = CSV¶
COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE
- Use
Data loading, data unloading, and external tables
- Definition
When loading data, specifies the current compression algorithm for the data file. Snowflake uses this option to detect how an already-compressed data file was compressed so that the compressed data in the file can be extracted for loading.
When unloading data, compresses the data file using the specified compression algorithm.
- Values
| Supported Values | Notes |
| --- | --- |
| AUTO | When loading data, compression algorithm detected automatically, except for Brotli-compressed files, which cannot currently be detected automatically. When unloading data, files are automatically compressed using the default, which is gzip. |
| GZIP | |
| BZ2 | |
| BROTLI | Must be specified when loading/unloading Brotli-compressed files. |
| ZSTD | Zstandard v0.8 (and higher) is supported. |
| DEFLATE | Deflate-compressed files (with zlib header, RFC1950). |
| RAW_DEFLATE | Raw Deflate-compressed files (without header, RFC1951). |
| NONE | When loading data, indicates that the files have not been compressed. When unloading data, specifies that the unloaded files are not compressed. |
- Default
AUTO
RECORD_DELIMITER = 'character' | NONE
- Use
Data loading, data unloading, and external tables
- Definition
One or more singlebyte or multibyte characters that separate records in an input file (data loading) or unloaded file (data unloading). Accepts common escape sequences or the following singlebyte or multibyte characters:
- Singlebyte characters
Octal values (prefixed by \\) or hex values (prefixed by 0x or \x). For example, for records delimited by the circumflex accent (^) character, specify the octal (\\136) or hex (0x5e) value.
- Multibyte characters
Hex values (prefixed by \x). For example, for records delimited by the cent (¢) character, specify the hex (\xC2\xA2) value.
The delimiter for RECORD_DELIMITER or FIELD_DELIMITER cannot be a substring of the delimiter for the other file format option (e.g. FIELD_DELIMITER = 'aa' RECORD_DELIMITER = 'aabb').
The specified delimiter must be a valid UTF-8 character and not a random sequence of bytes. Also note that the delimiter is limited to a maximum of 20 characters.
Also accepts a value of NONE.
- Default
- Data loading
New line character. Note that “new line” is logical such that \r\n will be understood as a new line for files on a Windows platform.
- Data unloading
New line character (\n).
FIELD_DELIMITER = 'character' | NONE
- Use
Data loading, data unloading, and external tables
- Definition
One or more singlebyte or multibyte characters that separate fields in an input file (data loading) or unloaded file (data unloading). Accepts common escape sequences or the following singlebyte or multibyte characters:
- Singlebyte characters
Octal values (prefixed by \\) or hex values (prefixed by 0x or \x). For example, for records delimited by the circumflex accent (^) character, specify the octal (\\136) or hex (0x5e) value.
- Multibyte characters
Hex values (prefixed by \x). For example, for records delimited by the cent (¢) character, specify the hex (\xC2\xA2) value.
The delimiter for RECORD_DELIMITER or FIELD_DELIMITER cannot be a substring of the delimiter for the other file format option (e.g. FIELD_DELIMITER = 'aa' RECORD_DELIMITER = 'aabb').
Note
For non-ASCII characters, you must use the hex byte sequence value to get a deterministic behavior.
The specified delimiter must be a valid UTF-8 character and not a random sequence of bytes. Also note that the delimiter is limited to a maximum of 20 characters.
Also accepts a value of NONE.
- Default
comma (,)
FILE_EXTENSION = 'string' | NONE
- Use
Data unloading only
- Definition
Specifies the extension for files unloaded to a stage. Accepts any extension. The user is responsible for specifying a file extension that can be read by any desired software or services.
- Default
null, meaning the file extension is determined by the format type: .csv[compression], where compression is the extension added by the compression method, if COMPRESSION is set.
Note
If the SINGLE copy option is TRUE, then the COPY command unloads a file without a file extension by default. To specify a file extension, provide a file name and extension in the internal_location or external_location path (e.g. copy into @stage/data.csv).

PARSE_HEADER = TRUE | FALSE
- Use
Data loading only
- Definition
Boolean that specifies whether to use the first row headers in the data files to determine column names.
This file format option is applied to the following actions only:
Automatically detecting column definitions by using the INFER_SCHEMA function.
Loading CSV data into separate columns by using the INFER_SCHEMA function and MATCH_BY_COLUMN_NAME copy option.
If the option is set to TRUE, the first row headers will be used to determine column names. The default value FALSE will return column names as c*, where * is the position of the column.
Note that the SKIP_HEADER option is not supported with PARSE_HEADER = TRUE.
Default:
FALSE
SKIP_HEADER = integer
- Use
Data loading and external tables
- Definition
Number of lines at the start of the file to skip.
Note that SKIP_HEADER does not use the RECORD_DELIMITER or FIELD_DELIMITER values to determine what a header line is; rather, it simply skips the specified number of CRLF (Carriage Return, Line Feed)-delimited lines in the file. RECORD_DELIMITER and FIELD_DELIMITER are then used to determine the rows of data to load.
- Default
0
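For example, to set the table's default stage file format to CSV and skip a one-line header (the table name is illustrative):

ALTER TABLE t1 SET STAGE_FILE_FORMAT = ( TYPE = CSV SKIP_HEADER = 1 );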
SKIP_BLANK_LINES = TRUE | FALSE
- Use
Data loading and external tables
- Definition
Boolean that specifies to skip any blank lines encountered in the data files; otherwise, blank lines produce an end-of-record error (default behavior).
Default:
FALSE
DATE_FORMAT = 'string' | AUTO
- Use
Data loading and unloading
- Definition
Defines the format of date values in the data files (data loading) or table (data unloading). If a value is not specified or is AUTO, the value for the DATE_INPUT_FORMAT (data loading) or DATE_OUTPUT_FORMAT (data unloading) parameter is used.
- Default
AUTO
TIME_FORMAT = 'string' | AUTO
- Use
Data loading and unloading
- Definition
Defines the format of time values in the data files (data loading) or table (data unloading). If a value is not specified or is AUTO, the value for the TIME_INPUT_FORMAT (data loading) or TIME_OUTPUT_FORMAT (data unloading) parameter is used.
- Default
AUTO
TIMESTAMP_FORMAT = 'string' | AUTO
- Use
Data loading and unloading
- Definition
Defines the format of timestamp values in the data files (data loading) or table (data unloading). If a value is not specified or is AUTO, the value for the TIMESTAMP_INPUT_FORMAT (data loading) or TIMESTAMP_OUTPUT_FORMAT (data unloading) parameter is used.
- Default
AUTO
BINARY_FORMAT = HEX | BASE64 | UTF8
- Use
Data loading and unloading
- Definition
Defines the encoding format for binary input or output. The option can be used when loading data into or unloading data from binary columns in a table.
- Default
HEX
ESCAPE = 'character' | NONE
- Use
Data loading and unloading
- Definition
A singlebyte character string used as the escape character for enclosed or unenclosed field values. An escape character invokes an alternative interpretation on subsequent characters in a character sequence. You can use the ESCAPE character to interpret instances of the FIELD_OPTIONALLY_ENCLOSED_BY character in the data as literals.
Accepts common escape sequences, octal values, or hex values.
- Loading data
Specifies the escape character for enclosed fields only. Specify the character used to enclose fields by setting FIELD_OPTIONALLY_ENCLOSED_BY.
Note
This file format option supports singlebyte characters only. Note that UTF-8 character encoding represents high-order ASCII characters as multibyte characters. If your data file is encoded with the UTF-8 character set, you cannot specify a high-order ASCII character as the option value.
In addition, if you specify a high-order ASCII character, we recommend that you set the ENCODING = 'string' file format option as the character encoding for your data files to ensure the character is interpreted correctly.
- Unloading data
If this option is set, it overrides the escape character set for ESCAPE_UNENCLOSED_FIELD.
- Default
NONE
ESCAPE_UNENCLOSED_FIELD = 'character' | NONE
- Use
Data loading, data unloading, and external tables
- Definition
A singlebyte character string used as the escape character for unenclosed field values only. An escape character invokes an alternative interpretation on subsequent characters in a character sequence. You can use the ESCAPE character to interpret instances of the FIELD_DELIMITER or RECORD_DELIMITER characters in the data as literals. The escape character can also be used to escape instances of itself in the data.
Accepts common escape sequences, octal values, or hex values.
- Loading data
Specifies the escape character for unenclosed fields only.
Note
The default value is \\. If a row in a data file ends in the backslash (\) character, this character escapes the newline or carriage return character specified for the RECORD_DELIMITER file format option. As a result, the load operation treats this row and the next row as a single row of data. To avoid this issue, set the value to NONE.
This file format option supports singlebyte characters only. Note that UTF-8 character encoding represents high-order ASCII characters as multibyte characters. If your data file is encoded with the UTF-8 character set, you cannot specify a high-order ASCII character as the option value.
In addition, if you specify a high-order ASCII character, we recommend that you set the ENCODING = 'string' file format option as the character encoding for your data files to ensure the character is interpreted correctly.
- Unloading data
If ESCAPE is set, the escape character set for that file format option overrides this option.
- Default
backslash (\\)
TRIM_SPACE = TRUE | FALSE
- Use
Data loading and external tables
- Definition
Boolean that specifies whether to remove white space from fields.
For example, if your external database software encloses fields in quotes, but inserts a leading space, Snowflake reads the leading space rather than the opening quotation character as the beginning of the field (i.e. the quotation marks are interpreted as part of the string of field data). Set this option to TRUE to remove undesirable spaces during the data load.
As another example, if leading or trailing spaces surround quotes that enclose strings, you can remove the surrounding spaces using this option and the quote character using the FIELD_OPTIONALLY_ENCLOSED_BY option. Note that any spaces within the quotes are preserved. For example, assuming FIELD_DELIMITER = '|' and FIELD_OPTIONALLY_ENCLOSED_BY = '"':
|"Hello world"|    /* loads as */  >Hello world<
|" Hello world "|  /* loads as */  > Hello world <
| "Hello world" |  /* loads as */  >Hello world<
(the brackets in this example are not loaded; they are used to demarcate the beginning and end of the loaded strings)
- Default
FALSE
FIELD_OPTIONALLY_ENCLOSED_BY = 'character' | NONE
- Use
Data loading, data unloading, and external tables
- Definition
Character used to enclose strings. Value can be NONE, single quote character ('), or double quote character ("). To use the single quote character, use the octal or hex representation (0x27) or the double single-quoted escape ('').
When a field contains this character, escape it using the same character. For example, if the value is the double quote character and a field contains the string A "B" C, escape the double quotes as follows: A ""B"" C
- Default
NONE
NULL_IF = ( 'string1' [ , 'string2' , ... ] )
- Use
Data loading, data unloading, and external tables
- Definition
String used to convert to and from SQL NULL:
When loading data, Snowflake replaces these values in the data load source with SQL NULL. To specify more than one string, enclose the list of strings in parentheses and use commas to separate each value.
Note that Snowflake converts all instances of the value to NULL, regardless of the data type. For example, if 2 is specified as a value, all instances of 2 as either a string or number are converted.
For example:
NULL_IF = ('\\N', 'NULL', 'NUL', '')
Note that this option can include empty strings.
When unloading data, Snowflake converts SQL NULL values to the first value in the list.
- Default
\\N (i.e. NULL, which assumes the ESCAPE_UNENCLOSED_FIELD value is \\)
ERROR_ON_COLUMN_COUNT_MISMATCH = TRUE | FALSE
- Use
Data loading only
- Definition
Boolean that specifies whether to generate a parsing error if the number of delimited columns (i.e. fields) in an input file does not match the number of columns in the corresponding table.
If set to FALSE, an error is not generated and the load continues. If the file is successfully loaded:
If the input file contains records with more fields than columns in the table, the matching fields are loaded in order of occurrence in the file and the remaining fields are not loaded.
If the input file contains records with fewer fields than columns in the table, the non-matching columns in the table are loaded with NULL values.
This option assumes all the records within the input file are the same length (i.e. a file containing records of varying length returns an error regardless of the value specified for this parameter).
- Default
TRUE
Note
When transforming data during loading (i.e. using a query as the source for the COPY command), this option is ignored. There is no requirement for your data files to have the same number and ordering of columns as your target table.
REPLACE_INVALID_CHARACTERS = TRUE | FALSE
- Use
Data loading only
- Definition
Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character (�).
If set to TRUE, Snowflake replaces invalid UTF-8 characters with the Unicode replacement character.
If set to FALSE, the load operation produces an error when invalid UTF-8 character encoding is detected.
- Default
FALSE
EMPTY_FIELD_AS_NULL = TRUE | FALSE
- Use
Data loading, data unloading, and external tables
- Definition
When loading data, specifies whether to insert SQL NULL for empty fields in an input file, which are represented by two successive delimiters (e.g. ,,).
If set to FALSE, Snowflake attempts to cast an empty field to the corresponding column type. An empty string is inserted into columns of type STRING. For other column types, the COPY command produces an error.
When unloading data, this option is used in combination with FIELD_OPTIONALLY_ENCLOSED_BY. When FIELD_OPTIONALLY_ENCLOSED_BY = NONE, setting EMPTY_FIELD_AS_NULL = FALSE specifies to unload empty strings in tables to empty string values without quotes enclosing the field values.
If set to TRUE, FIELD_OPTIONALLY_ENCLOSED_BY must specify a character to enclose strings.
- Default
TRUE
SKIP_BYTE_ORDER_MARK = TRUE | FALSE
- Use
Data loading only
- Definition
Boolean that specifies whether to skip the BOM (byte order mark), if present in a data file. A BOM is a character code at the beginning of a data file that defines the byte order and encoding form.
If set to FALSE, Snowflake recognizes any BOM in data files, which could result in the BOM either causing an error or being merged into the first column in the table.
- Default
TRUE
ENCODING = 'string'
- Use
Data loading and external tables
- Definition
String (constant) that specifies the character set of the source data when loading data into a table.
| Character Set | ENCODING Value | Supported Languages | Notes |
| --- | --- | --- | --- |
| Big5 | BIG5 | Traditional Chinese | |
| EUC-JP | EUCJP | Japanese | |
| EUC-KR | EUCKR | Korean | |
| GB18030 | GB18030 | Chinese | |
| IBM420 | IBM420 | Arabic | |
| IBM424 | IBM424 | Hebrew | |
| IBM949 | IBM949 | Korean | |
| ISO-2022-CN | ISO2022CN | Simplified Chinese | |
| ISO-2022-JP | ISO2022JP | Japanese | |
| ISO-2022-KR | ISO2022KR | Korean | |
| ISO-8859-1 | ISO88591 | Danish, Dutch, English, French, German, Italian, Norwegian, Portuguese, Swedish | |
| ISO-8859-2 | ISO88592 | Czech, Hungarian, Polish, Romanian | |
| ISO-8859-5 | ISO88595 | Russian | |
| ISO-8859-6 | ISO88596 | Arabic | |
| ISO-8859-7 | ISO88597 | Greek | |
| ISO-8859-8 | ISO88598 | Hebrew | |
| ISO-8859-9 | ISO88599 | Turkish | |
| ISO-8859-15 | ISO885915 | Danish, Dutch, English, French, German, Italian, Norwegian, Portuguese, Swedish | Identical to ISO-8859-1 except for 8 characters, including the Euro currency symbol. |
| KOI8-R | KOI8R | Russian | |
| Shift_JIS | SHIFTJIS | Japanese | |
| UTF-8 | UTF8 | All languages | For loading data from delimited files (CSV, TSV, etc.), UTF-8 is the default. For loading data from all other supported file formats (JSON, Avro, etc.), as well as unloading data, UTF-8 is the only supported character set. |
| UTF-16 | UTF16 | All languages | |
| UTF-16BE | UTF16BE | All languages | |
| UTF-16LE | UTF16LE | All languages | |
| UTF-32 | UTF32 | All languages | |
| UTF-32BE | UTF32BE | All languages | |
| UTF-32LE | UTF32LE | All languages | |
| windows-949 | WINDOWS949 | Korean | |
| windows-1250 | WINDOWS1250 | Czech, Hungarian, Polish, Romanian | |
| windows-1251 | WINDOWS1251 | Russian | |
| windows-1252 | WINDOWS1252 | Danish, Dutch, English, French, German, Italian, Norwegian, Portuguese, Swedish | |
| windows-1253 | WINDOWS1253 | Greek | |
| windows-1254 | WINDOWS1254 | Turkish | |
| windows-1255 | WINDOWS1255 | Hebrew | |
| windows-1256 | WINDOWS1256 | Arabic | |
- Default
UTF8
Note
Snowflake stores all data internally in the UTF-8 character set. The data is converted into UTF-8 before it is loaded into Snowflake.
TYPE = JSON¶
COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE
- Use
Data loading and external tables
- Definition
When loading data, specifies the current compression algorithm for the data file. Snowflake uses this option to detect how an already-compressed data file was compressed so that the compressed data in the file can be extracted for loading.
When unloading data, compresses the data file using the specified compression algorithm.
- Values
| Supported Values | Notes |
| --- | --- |
| AUTO | When loading data, compression algorithm detected automatically, except for Brotli-compressed files, which cannot currently be detected automatically. When unloading data, files are automatically compressed using the default, which is gzip. |
| GZIP | |
| BZ2 | |
| BROTLI | Must be specified if loading/unloading Brotli-compressed files. |
| ZSTD | Zstandard v0.8 (and higher) is supported. |
| DEFLATE | Deflate-compressed files (with zlib header, RFC1950). |
| RAW_DEFLATE | Raw Deflate-compressed files (without header, RFC1951). |
| NONE | When loading data, indicates that the files have not been compressed. When unloading data, specifies that the unloaded files are not compressed. |
- Default
AUTO
DATE_FORMAT = 'string' | AUTO
- Use
Data loading only
- Definition
Defines the format of date string values in the data files. If a value is not specified or is AUTO, the value for the DATE_INPUT_FORMAT parameter is used.
This file format option is applied to the following actions only:
Loading JSON data into separate columns using the MATCH_BY_COLUMN_NAME copy option.
Loading JSON data into separate columns by specifying a query in the COPY statement (i.e. COPY transformation).
- Default
AUTO
TIME_FORMAT = 'string' | AUTO
- Use
Data loading only
- Definition
Defines the format of time string values in the data files. If a value is not specified or is AUTO, the value for the TIME_INPUT_FORMAT parameter is used.
This file format option is applied to the following actions only:
Loading JSON data into separate columns using the MATCH_BY_COLUMN_NAME copy option.
Loading JSON data into separate columns by specifying a query in the COPY statement (i.e. COPY transformation).
- Default
AUTO
TIMESTAMP_FORMAT = 'string' | AUTO
- Use
Data loading only
- Definition
Defines the format of timestamp string values in the data files. If a value is not specified or is AUTO, the value for the TIMESTAMP_INPUT_FORMAT parameter is used.
This file format option is applied to the following actions only:
Loading JSON data into separate columns using the MATCH_BY_COLUMN_NAME copy option.
Loading JSON data into separate columns by specifying a query in the COPY statement (i.e. COPY transformation).
- Default
AUTO
BINARY_FORMAT = HEX | BASE64 | UTF8
- Use
Data loading only
- Definition
Defines the encoding format for binary string values in the data files. The option can be used when loading data into binary columns in a table.
This file format option is applied to the following actions only:
Loading JSON data into separate columns using the MATCH_BY_COLUMN_NAME copy option.
Loading JSON data into separate columns by specifying a query in the COPY statement (i.e. COPY transformation).
- Default
HEX
TRIM_SPACE = TRUE | FALSE
- Use
Data loading only
- Definition
Boolean that specifies whether to remove leading and trailing white space from strings.
For example, if your external database software encloses fields in quotes, but inserts a leading space, Snowflake reads the leading space rather than the opening quotation character as the beginning of the field (i.e. the quotation marks are interpreted as part of the string of field data). Set this option to TRUE to remove undesirable spaces during the data load.
This file format option is applied to the following actions only when loading JSON data into separate columns using the MATCH_BY_COLUMN_NAME copy option.
- Default
FALSE
NULL_IF = ( 'string1' [ , 'string2' , ... ] )
- Use
Data loading only
- Definition
String used to convert to and from SQL NULL. Snowflake replaces these strings in the data load source with SQL NULL. To specify more than one string, enclose the list of strings in parentheses and use commas to separate each value.
This file format option is applied to the following actions only when loading JSON data into separate columns using the MATCH_BY_COLUMN_NAME copy option.
Note that Snowflake converts all instances of the value to NULL, regardless of the data type. For example, if 2 is specified as a value, all instances of 2 as either a string or number are converted.
For example:
NULL_IF = ('\\N', 'NULL', 'NUL', '')
Note that this option can include empty strings.
- Default
\\N (i.e. NULL, which assumes the ESCAPE_UNENCLOSED_FIELD value is \\)
FILE_EXTENSION = 'string' | NONE
- Use
Data unloading only
- Definition
Specifies the extension for files unloaded to a stage. Accepts any extension. The user is responsible for specifying a file extension that can be read by any desired software or services.
- Default
null, meaning the file extension is determined by the format type: .json[compression], where compression is the extension added by the compression method, if COMPRESSION is set.
ENABLE_OCTAL = TRUE | FALSE
- Use
Data loading only
- Definition
Boolean that enables parsing of octal numbers.
- Default
FALSE
ALLOW_DUPLICATE = TRUE | FALSE
- Use
Data loading and external tables
- Definition
Boolean that specifies to allow duplicate object field names (only the last one will be preserved).
- Default
FALSE
STRIP_OUTER_ARRAY = TRUE | FALSE
- Use
Data loading and external tables
- Definition
Boolean that instructs the JSON parser to remove outer brackets (i.e. [ ]).
- Default
FALSE
STRIP_NULL_VALUES = TRUE | FALSE
- Use
Data loading and external tables
- Definition
Boolean that instructs the JSON parser to remove object fields or array elements containing null values. For example, when set to TRUE:

| Before | After |
| --- | --- |
| [null] | [] |
| [null,null,3] | [,,3] |
| {"a":null,"b":null,"c":123} | {"c":123} |
| {"a":[1,null,2],"b":{"x":null,"y":88}} | {"a":[1,,2],"b":{"y":88}} |
- Default
FALSE
REPLACE_INVALID_CHARACTERS = TRUE | FALSE
- Use
Data loading and external table
- Definition
Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character (�). This option performs a one-to-one character replacement.
- Values
If set to TRUE, Snowflake replaces invalid UTF-8 characters with the Unicode replacement character.
If set to FALSE, the load operation produces an error when invalid UTF-8 character encoding is detected.
- Default
FALSE
IGNORE_UTF8_ERRORS = TRUE | FALSE
- Use
Data loading and external table
- Definition
Boolean that specifies whether UTF-8 encoding errors produce error conditions. It is an alternative syntax for REPLACE_INVALID_CHARACTERS.
- Values
If set to TRUE, any invalid UTF-8 sequences are silently replaced with the Unicode character U+FFFD (i.e. “replacement character”).
If set to FALSE, the load operation produces an error when invalid UTF-8 character encoding is detected.
- Default
FALSE
SKIP_BYTE_ORDER_MARK = TRUE | FALSE
- Use
Data loading only
- Definition
Boolean that specifies whether to skip the BOM (byte order mark), if present in a data file. A BOM is a character code at the beginning of a data file that defines the byte order and encoding form.
If set to FALSE, Snowflake recognizes any BOM in data files, which could result in the BOM either causing an error or being merged into the first column in the table.
- Default
TRUE
TYPE = AVRO¶
COMPRESSION = AUTO | GZIP | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE
- Use
Data loading only
- Definition
When loading data, specifies the current compression algorithm for the data file. Snowflake uses this option to detect how an already-compressed data file was compressed so that the compressed data in the file can be extracted for loading.
When unloading data, compresses the data file using the specified compression algorithm.
- Values
| Supported Values | Notes |
| --- | --- |
| AUTO | When loading data, compression algorithm detected automatically, except for Brotli-compressed files, which cannot currently be detected automatically. When unloading data, files are automatically compressed using the default, which is gzip. |
| GZIP | |
| BROTLI | Must be specified if loading/unloading Brotli-compressed files. |
| ZSTD | Zstandard v0.8 (and higher) is supported. |
| DEFLATE | Deflate-compressed files (with zlib header, RFC1950). |
| RAW_DEFLATE | Raw Deflate-compressed files (without header, RFC1951). |
| NONE | When loading data, indicates that the files have not been compressed. When unloading data, specifies that the unloaded files are not compressed. |

- Default
AUTO

Note
We recommend that you use the default AUTO option because it will determine both the file and codec compression. Specifying a compression option refers to the compression of files, not the compression of blocks (codecs).
TRIM_SPACE = TRUE | FALSE
- Use
Data loading only
- Definition
Boolean that specifies whether to remove leading and trailing white space from strings.
For example, if your external database software encloses fields in quotes, but inserts a leading space, Snowflake reads the leading space rather than the opening quotation character as the beginning of the field (i.e. the quotation marks are interpreted as part of the string of field data). Set this option to TRUE to remove undesirable spaces during the data load.
This file format option is applied to the following actions only when loading Avro data into separate columns using the MATCH_BY_COLUMN_NAME copy option.
- Default
FALSE
REPLACE_INVALID_CHARACTERS = TRUE | FALSE
- Use
Data loading and external table
- Definition
Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character (�). This option performs a one-to-one character replacement.
- Values
If set to TRUE, Snowflake replaces invalid UTF-8 characters with the Unicode replacement character.
If set to FALSE, the load operation produces an error when invalid UTF-8 character encoding is detected.
- Default
FALSE
NULL_IF = ( 'string1' [ , 'string2' , ... ] )
- Use
Data loading only
- Definition
String used to convert to and from SQL NULL. Snowflake replaces these strings in the data load source with SQL NULL. To specify more than one string, enclose the list of strings in parentheses and use commas to separate each value.
This file format option is applied only when loading Avro data into separate columns using the MATCH_BY_COLUMN_NAME copy option.
Note that Snowflake converts all instances of the value to NULL, regardless of the data type. For example, if 2 is specified as a value, all instances of 2 as either a string or number are converted.
For example:
NULL_IF = ('\\N', 'NULL', 'NUL', '')
Note that this option can include empty strings.
- Default
\\N (i.e. NULL, which assumes the ESCAPE_UNENCLOSED_FIELD value is \\)
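Putting the Avro options above together, a minimal sketch of updating a table's stage file format (my_table is a hypothetical table name):
ALTER TABLE my_table SET STAGE_FILE_FORMAT = (
  TYPE = AVRO
  COMPRESSION = AUTO             -- let Snowflake detect the file compression (recommended)
  TRIM_SPACE = TRUE              -- strip leading/trailing spaces on MATCH_BY_COLUMN_NAME loads
  NULL_IF = ('\\N', 'NULL', '')  -- convert these strings to SQL NULL
);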
TYPE = ORC¶
TRIM_SPACE = TRUE | FALSE
- Use
Data loading and external tables
- Definition
Boolean that specifies whether to remove leading and trailing white space from strings.
For example, if your external database software encloses fields in quotes, but inserts a leading space, Snowflake reads the leading space rather than the opening quotation character as the beginning of the field (i.e. the quotation marks are interpreted as part of the string of field data). Set this option to TRUE to remove undesirable spaces during the data load.
This file format option is applied only when loading ORC data into separate columns using the MATCH_BY_COLUMN_NAME copy option.
- Default
FALSE
REPLACE_INVALID_CHARACTERS = TRUE | FALSE
- Use
Data loading and external tables
- Definition
Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character (�). This option performs a one-to-one character replacement.
- Values
If set to TRUE, Snowflake replaces invalid UTF-8 characters with the Unicode replacement character.
If set to FALSE, the load operation produces an error when invalid UTF-8 character encoding is detected.
- Default
FALSE
NULL_IF = ( 'string1' [ , 'string2' , ... ] )
- Use
Data loading and external tables
- Definition
String used to convert to and from SQL NULL. Snowflake replaces these strings in the data load source with SQL NULL. To specify more than one string, enclose the list of strings in parentheses and use commas to separate each value.
This file format option is applied only when loading ORC data into separate columns using the MATCH_BY_COLUMN_NAME copy option.
Note that Snowflake converts all instances of the value to NULL, regardless of the data type. For example, if 2 is specified as a value, all instances of 2 as either a string or number are converted.
For example:
NULL_IF = ('\\N', 'NULL', 'NUL', '')
Note that this option can include empty strings.
- Default
\\N (i.e. NULL, which assumes the ESCAPE_UNENCLOSED_FIELD value is \\)
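A corresponding sketch for ORC stage files (my_table is a hypothetical table name):
ALTER TABLE my_table SET STAGE_FILE_FORMAT = (
  TYPE = ORC
  TRIM_SPACE = TRUE                  -- strip leading/trailing spaces on MATCH_BY_COLUMN_NAME loads
  REPLACE_INVALID_CHARACTERS = TRUE  -- replace invalid UTF-8 with U+FFFD instead of raising an error
);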
TYPE = PARQUET¶
COMPRESSION = AUTO | LZO | SNAPPY | NONE
- Use
Data loading, data unloading, and external tables
- Definition
When loading data, specifies the current compression algorithm for columns in the Parquet files.
When unloading data, compresses the data file using the specified compression algorithm.
- Values
AUTO: When loading data, the compression algorithm is detected automatically. Supports the following compression algorithms: Brotli, gzip, Lempel-Ziv-Oberhumer (LZO), LZ4, Snappy, and Zstandard v0.8 (and higher). When unloading data, unloaded files are compressed using the Snappy compression algorithm by default.
LZO: When unloading data, files are compressed using the Snappy algorithm by default. If unloading data to LZO-compressed files, specify this value.
SNAPPY: When unloading data, files are compressed using the Snappy algorithm by default. You can optionally specify this value.
NONE: When loading data, indicates that the files have not been compressed. When unloading data, specifies that the unloaded files are not compressed.
- Default
AUTO
SNAPPY_COMPRESSION = TRUE | FALSE
- Use
Data unloading only
- Definition
Boolean that specifies whether unloaded file(s) are compressed using the SNAPPY algorithm.
Note
Deprecated. Use COMPRESSION = SNAPPY instead.
- Limitations
Only supported for data unloading operations.
- Default
TRUE
BINARY_AS_TEXT = TRUE | FALSE
- Use
Data loading and external tables
- Definition
Boolean that specifies whether to interpret columns with no defined logical data type as UTF-8 text. When set to FALSE, Snowflake interprets these columns as binary data.
- Default
TRUE
Note
Snowflake recommends that you set BINARY_AS_TEXT to FALSE to avoid any potential conversion issues.
TRIM_SPACE = TRUE | FALSE
- Use
Data loading only
- Definition
Boolean that specifies whether to remove leading and trailing white space from strings.
For example, if your external database software encloses fields in quotes, but inserts a leading space, Snowflake reads the leading space rather than the opening quotation character as the beginning of the field (i.e. the quotation marks are interpreted as part of the string of field data). Set this option to TRUE to remove undesirable spaces during the data load.
This file format option is applied only when loading Parquet data into separate columns using the MATCH_BY_COLUMN_NAME copy option.
- Default
FALSE
USE_LOGICAL_TYPE = TRUE | FALSE
- Use
Data loading, querying data in staged files, and schema detection.
- Definition
Boolean that specifies whether to use Parquet logical types. With this file format option, Snowflake can interpret Parquet logical types during data loading. For more information, see Parquet Logical Type Definitions. To enable Parquet logical types, set USE_LOGICAL_TYPE to TRUE when you create a new file format option.
- Limitations
Not supported for data unloading.
REPLACE_INVALID_CHARACTERS = TRUE | FALSE
- Use
Data loading and external tables
- Definition
Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character (�). This option performs a one-to-one character replacement.
- Values
If set to TRUE, Snowflake replaces invalid UTF-8 characters with the Unicode replacement character.
If set to FALSE, the load operation produces an error when invalid UTF-8 character encoding is detected.
- Default
FALSE
NULL_IF = ( 'string1' [ , 'string2' , ... ] )
- Use
Data loading only
- Definition
String used to convert to and from SQL NULL. Snowflake replaces these strings in the data load source with SQL NULL. To specify more than one string, enclose the list of strings in parentheses and use commas to separate each value.
This file format option is applied only when loading Parquet data into separate columns using the MATCH_BY_COLUMN_NAME copy option.
Note that Snowflake converts all instances of the value to NULL, regardless of the data type. For example, if 2 is specified as a value, all instances of 2 as either a string or number are converted.
For example:
NULL_IF = ('\\N', 'NULL', 'NUL', '')
Note that this option can include empty strings.
- Default
\\N (i.e. NULL, which assumes the ESCAPE_UNENCLOSED_FIELD value is \\)
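A sketch combining the Parquet options above (my_table is a hypothetical table name):
ALTER TABLE my_table SET STAGE_FILE_FORMAT = (
  TYPE = PARQUET
  USE_LOGICAL_TYPE = TRUE  -- interpret Parquet logical types during loading
  BINARY_AS_TEXT = FALSE   -- per the note above, avoids potential conversion issues
);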
TYPE = XML¶
COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE
- Use
Data loading only
- Definition
When loading data, specifies the current compression algorithm for the data file. Snowflake uses this option to detect how an already-compressed data file was compressed so that the compressed data in the file can be extracted for loading.
When unloading data, compresses the data file using the specified compression algorithm.
- Values
AUTO: When loading data, the compression algorithm is detected automatically, except for Brotli-compressed files, which cannot currently be detected automatically. When unloading data, files are automatically compressed using the default, which is gzip.
GZIP
BZ2
BROTLI: Must be specified if loading/unloading Brotli-compressed files.
ZSTD: Zstandard v0.8 (and higher) is supported.
DEFLATE: Deflate-compressed files (with zlib header, RFC 1950).
RAW_DEFLATE: Raw Deflate-compressed files (without header, RFC 1951).
NONE: When loading data, indicates that the files have not been compressed. When unloading data, specifies that the unloaded files are not compressed.
- Default
AUTO
IGNORE_UTF8_ERRORS = TRUE | FALSE
- Use
Data loading and external tables
- Definition
Boolean that specifies whether UTF-8 encoding errors produce error conditions. This is an alternative syntax for REPLACE_INVALID_CHARACTERS.
- Values
If set to TRUE, any invalid UTF-8 sequences are silently replaced with the Unicode character U+FFFD (i.e. the "replacement character").
If set to FALSE, the load operation produces an error when invalid UTF-8 character encoding is detected.
- Default
FALSE
PRESERVE_SPACE = TRUE | FALSE
- Use
Data loading only
- Definition
Boolean that specifies whether the XML parser preserves leading and trailing spaces in element content.
- Default
FALSE
STRIP_OUTER_ELEMENT = TRUE | FALSE
- Use
Data loading only
- Definition
Boolean that specifies whether the XML parser strips out the outer XML element, exposing 2nd level elements as separate documents.
- Default
FALSE
DISABLE_SNOWFLAKE_DATA = TRUE | FALSE
- Use
Data loading only
- Definition
Boolean that specifies whether the XML parser disables recognition of Snowflake semi-structured data tags.
- Default
FALSE
DISABLE_AUTO_CONVERT = TRUE | FALSE
- Use
Data loading only
- Definition
Boolean that specifies whether the XML parser disables automatic conversion of numeric and Boolean values from text to native representation.
- Default
FALSE
REPLACE_INVALID_CHARACTERS = TRUE | FALSE
- Use
Data loading and external tables
- Definition
Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character (�). This option performs a one-to-one character replacement.
- Values
If set to TRUE, Snowflake replaces invalid UTF-8 characters with the Unicode replacement character.
If set to FALSE, the load operation produces an error when invalid UTF-8 character encoding is detected.
- Default
FALSE
SKIP_BYTE_ORDER_MARK = TRUE | FALSE
- Use
Data loading only
- Definition
Boolean that specifies whether to skip any BOM (byte order mark) present in an input file. A BOM is a character code at the beginning of a data file that defines the byte order and encoding form.
If set to FALSE, Snowflake recognizes any BOM in data files, which could result in the BOM either causing an error or being merged into the first column in the table.
- Default
TRUE
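A sketch combining several of the XML options above (my_table is a hypothetical table name):
ALTER TABLE my_table SET STAGE_FILE_FORMAT = (
  TYPE = XML
  STRIP_OUTER_ELEMENT = TRUE  -- expose 2nd-level elements as separate documents
  PRESERVE_SPACE = FALSE      -- do not preserve leading/trailing spaces in element content
);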
Copy Options (copyOptions)¶
Copy options are used for loading data into and unloading data out of tables.
You can specify one or more of the following copy options (separated by blank spaces, commas, or new lines):
ON_ERROR = CONTINUE | SKIP_FILE | SKIP_FILE_num | 'SKIP_FILE_num%' | ABORT_STATEMENT
- Use
Data loading only
- Definition
String (constant) that specifies the error handling for the load operation.
Important
Carefully consider the ON_ERROR copy option value. The default value is appropriate in common scenarios, but is not always the best option.
- Values
CONTINUE
Continue to load the file if errors are found. The COPY statement returns an error message for a maximum of one error found per data file.
Note that the difference between the ROWS_PARSED and ROWS_LOADED column values represents the number of rows that include detected errors. However, each of these rows could include multiple errors. To view all errors in the data files, use the VALIDATION_MODE parameter or query the VALIDATE function.
SKIP_FILE
Skip a file when an error is found.
Note that the SKIP_FILE action buffers an entire file whether errors are found or not. For this reason, SKIP_FILE is slower than either CONTINUE or ABORT_STATEMENT. Skipping large files due to a small number of errors could result in delays and wasted credits. When loading large numbers of records from files that have no logical delineation (e.g. the files were generated automatically at rough intervals), consider specifying CONTINUE instead.
Additional patterns:
SKIP_FILE_num (e.g. SKIP_FILE_10)
Skip a file when the number of error rows found in the file is equal to or exceeds the specified number.
'SKIP_FILE_num%' (e.g. 'SKIP_FILE_10%')
Skip a file when the percentage of error rows found in the file exceeds the specified percentage.
ABORT_STATEMENT
Abort the load operation if any error is found in a data file.
Note that the load operation is not aborted if the data file cannot be found (e.g. because it does not exist or cannot be accessed), except when data files explicitly specified in the FILES parameter cannot be found.
- Default
- Bulk loading using COPY
ABORT_STATEMENT
- Snowpipe
SKIP_FILE
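For example, to skip any file containing 10 or more error rows, you could set the option on a table's stage copy options. A minimal sketch (my_table is a hypothetical table name):
ALTER TABLE my_table SET STAGE_COPY_OPTIONS = (ON_ERROR = SKIP_FILE_10);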
SIZE_LIMIT = num
- Use
Data loading only
- Definition
Number (> 0) that specifies the maximum size (in bytes) of data to be loaded for a given COPY statement. When the threshold is exceeded, the COPY operation discontinues loading files. This option is commonly used to load a common group of files using multiple COPY statements. For each statement, the data load continues until the specified SIZE_LIMIT is exceeded, before moving on to the next statement.
For example, suppose a set of files in a stage path were each 10 MB in size. If multiple COPY statements set SIZE_LIMIT to 25000000 (25 MB), each would load 3 files. That is, each COPY operation would discontinue after the SIZE_LIMIT threshold was exceeded.
Note that at least one file is loaded regardless of the value specified for SIZE_LIMIT unless there is no file to be loaded.
- Default
null (no size limit)
PURGE = TRUE | FALSE
- Use
Data loading only
- Definition
Boolean that specifies whether to remove the data files from the stage automatically after the data is loaded successfully.
If this option is set to TRUE, note that a best effort is made to remove successfully loaded data files. If the purge operation fails for any reason, no error is returned currently. We recommend that you list staged files periodically (using LIST) and manually remove successfully loaded files, if any exist.
- Default
FALSE
RETURN_FAILED_ONLY = TRUE | FALSE
- Use
Data loading only
- Definition
Boolean that specifies whether to return only files that have failed to load in the statement result.
- Default
FALSE
MATCH_BY_COLUMN_NAME = CASE_SENSITIVE | CASE_INSENSITIVE | NONE
- Use
Data loading only
- Definition
String that specifies whether to load semi-structured data into columns in the target table that match corresponding columns represented in the data.
This copy option is supported for the following data formats:
JSON
Avro
ORC
Parquet
For a column to match, the following criteria must be true:
The column represented in the data must have the exact same name as the column in the table. The copy option supports case sensitivity for column names. Column order does not matter.
The column in the table must have a data type that is compatible with the values in the column represented in the data. For example, string, number, and Boolean values can all be loaded into a variant column.
- Values
CASE_SENSITIVE | CASE_INSENSITIVE
Load semi-structured data into columns in the target table that match corresponding columns represented in the data. Column names are either case-sensitive (CASE_SENSITIVE) or case-insensitive (CASE_INSENSITIVE).
The COPY operation verifies that at least one column in the target table matches a column represented in the data files. If a match is found, the values in the data files are loaded into the column or columns. If no match is found, a set of NULL values for each record in the files is loaded into the table.
Note
If additional non-matching columns are present in the data files, the values in these columns are not loaded.
If additional non-matching columns are present in the target table, the COPY operation inserts NULL values into these columns. These columns must support NULL values.
The COPY statement does not allow specifying a query to further transform the data during the load (i.e. COPY transformation).
NONE
The COPY operation loads the semi-structured data into a variant column or, if a query is included in the COPY statement, transforms the data.
Note
The following limitations currently apply:
MATCH_BY_COLUMN_NAME cannot be used with the VALIDATION_MODE parameter in a COPY statement to validate the staged data rather than load it into the target table.
Parquet data only. When MATCH_BY_COLUMN_NAME is set to CASE_SENSITIVE or CASE_INSENSITIVE, an empty column value (e.g. "col1": "") produces an error.
- Default
NONE
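For example, to match loaded columns by name without regard to case, a minimal sketch (my_table is a hypothetical table name):
ALTER TABLE my_table SET STAGE_COPY_OPTIONS = (MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE);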
ENFORCE_LENGTH = TRUE | FALSE
- Use
Data loading only
- Definition
Alternative syntax for TRUNCATECOLUMNS with reverse logic (for compatibility with other systems).
Boolean that specifies whether to truncate text strings that exceed the target column length:
If TRUE, the COPY statement produces an error if a loaded string exceeds the target column length.
If FALSE, strings are automatically truncated to the target column length.
This copy option supports CSV data, as well as string values in semi-structured data when loaded into separate columns in relational tables.
Note
If the length of the target string column is set to the maximum (e.g. VARCHAR(16777216)), an incoming string cannot exceed this length; otherwise, the COPY command produces an error.
This parameter is functionally equivalent to TRUNCATECOLUMNS, but has the opposite behavior. It is provided for compatibility with other databases. It is only necessary to include one of these two parameters in a COPY statement to produce the desired output.
- Default
TRUE
TRUNCATECOLUMNS = TRUE | FALSE
- Use
Data loading only
- Definition
Alternative syntax for ENFORCE_LENGTH with reverse logic (for compatibility with other systems).
Boolean that specifies whether to truncate text strings that exceed the target column length:
If TRUE, strings are automatically truncated to the target column length.
If FALSE, the COPY statement produces an error if a loaded string exceeds the target column length.
This copy option supports CSV data, as well as string values in semi-structured data when loaded into separate columns in relational tables.
Note
If the length of the target string column is set to the maximum (e.g. VARCHAR(16777216)), an incoming string cannot exceed this length; otherwise, the COPY command produces an error.
This parameter is functionally equivalent to ENFORCE_LENGTH, but has the opposite behavior. It is provided for compatibility with other databases. It is only necessary to include one of these two parameters in a COPY statement to produce the desired output.
- Default
FALSE
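Because these two options are mirror images, the following statements request the same truncation behavior (a sketch; my_table is a hypothetical table name):
ALTER TABLE my_table SET STAGE_COPY_OPTIONS = (TRUNCATECOLUMNS = TRUE);
ALTER TABLE my_table SET STAGE_COPY_OPTIONS = (ENFORCE_LENGTH = FALSE);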
FORCE = TRUE | FALSE
- Use
Data loading only
- Definition
Boolean that specifies whether to load all files, regardless of whether they have been loaded previously and have not changed since they were loaded. Note that this option reloads files, potentially duplicating data in a table.
- Default
FALSE
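Several copy options can be combined in one statement; as noted above, they can be separated by spaces, commas, or new lines. A sketch (my_table is a hypothetical table name):
ALTER TABLE my_table SET STAGE_COPY_OPTIONS = (
  SIZE_LIMIT = 25000000  -- stop loading additional files after roughly 25 MB
  PURGE = TRUE           -- best-effort removal of files after a successful load
  FORCE = FALSE          -- do not reload unchanged, previously loaded files
);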
Usage Notes¶
Changes to a table are not automatically propagated to views created on that table. For example, if you drop a column in a table, and a view is defined to include that column, the view becomes invalid; the view is not adjusted to remove the column.
Dropping a column does not necessarily free up the column’s storage space immediately.
The space in each micro-partition is not reclaimed until that micro-partition is re-written. Write operations (insert, update, delete, etc.) on one or more rows in that micro-partition cause the micro-partition to be re-written. If you want to force space to be reclaimed, you can forcibly update every row in the table, or you can use CREATE TABLE AS ... to create a new table that contains only the columns of the old table that you want to keep (see the sketch below), and then drop the old table. Either of these operations can be expensive if the remaining volume of data in the table is large.
If the table is protected by the Time Travel feature, the space used by the Time Travel storage is not reclaimed until the Time Travel retention period expires.
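A minimal sketch of that CREATE TABLE AS approach (table and column names are hypothetical):
CREATE OR REPLACE TABLE t1_keep AS SELECT col_a, col_b FROM t1;  -- copy only the columns to keep
DROP TABLE t1;
ALTER TABLE t1_keep RENAME TO t1;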
If a new column with a default value is added to a table with existing rows, all of the existing rows are populated with the default value.
Adding a new column with a default value containing a function is not currently supported. The following error is returned:
Invalid column default expression (expr)
To alter a table, you must be using a role that has ownership privilege on the table.
To add clustering to a table, you must also have USAGE or OWNERSHIP privileges on the schema and database that contain the table.
For masking policies:
The USING clause and the FORCE keyword are both optional; neither is required to set a masking policy on a column. The USING clause and the FORCE keyword can be used separately or together (see the sketch below).
A single masking policy that uses conditional columns can be applied to multiple tables, provided that the column structure of the table matches the columns specified in the policy.
When modifying one or more table columns with a masking policy or the table itself with a row access policy, use the POLICY_CONTEXT function to simulate a query on the column(s) protected by a masking policy and the table protected by a row access policy.
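A minimal sketch of setting and unsetting a masking policy with the optional USING clause and FORCE keyword (policy, table, and column names are hypothetical):
ALTER TABLE t1 MODIFY COLUMN ssn SET MASKING POLICY ssn_mask USING (ssn, visibility_col) FORCE;
ALTER TABLE t1 MODIFY COLUMN ssn UNSET MASKING POLICY;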
For row access policies:
Snowflake supports adding and dropping row access policies in a single SQL statement.
For example, to replace a row access policy that is already set on a table with a different policy, drop the row access policy first and then add the new row access policy.
For a given resource (i.e. table or view), to ADD or DROP a row access policy you must have either the APPLY ROW ACCESS POLICY privilege on the schema, or the OWNERSHIP privilege on the resource and the APPLY privilege on the row access policy resource.
A table or view can only be protected by one row access policy at a time. Adding a policy fails if the policy body refers to a table or view column that is protected by a row access policy or to a column protected by a masking policy.
Similarly, adding a masking policy to a table column fails if the masking policy body refers to a table that is protected by a row access policy or another masking policy.
Row access policies cannot be applied to system views or table functions.
Similar to other DROP <object> operations, Snowflake returns an error if attempting to drop a row access policy from a resource that does not have a row access policy added to it.
If an object has both a row access policy and one or more masking policies, the row access policy is evaluated first.
If you create a foreign key, then the columns in the REFERENCES clause must be listed in the same order as the columns were listed for the primary key. For example:
create table parent ... constraint primary_key_1 primary key (c_1, c_2) ...
create table child ... constraint foreign_key_1 foreign key (...) REFERENCES parent (c_1, c_2) ...
In both cases, the order of the columns is c_1, c_2. If the order of the columns in the foreign key had been different (e.g. c_2, c_1), then the attempt to create the foreign key would have failed.
Regarding metadata:
Attention
Customers should ensure that no personal data (other than for a User object), sensitive data, export-controlled data, or other regulated data is entered as metadata when using the Snowflake service. For more information, see Metadata Fields in Snowflake.
ALTER TABLE … CHANGE_TRACKING = TRUE
When a table is altered to enable change tracking, the table is locked for the duration of the operation. Locks can cause latency with some associated DDL/DML operations. For more information, refer to Resource Locking.
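For example (a minimal sketch; my_table is a hypothetical table name; note the locking behavior described above):
ALTER TABLE my_table SET CHANGE_TRACKING = TRUE;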
Examples¶
The following sections provide examples of using the ALTER TABLE command:
Renaming a Table¶
The following creates a table named t1:
CREATE OR REPLACE TABLE t1(a1 number);
SHOW TABLES LIKE 't1';
+-------------------------------+------+---------------+-------------+-------+---------+------------+------+-------+--------+----------------+-----------------+-------------+-------------------------+-----------------+----------+--------+
| created_on | name | database_name | schema_name | kind | comment | cluster_by | rows | bytes | owner | retention_time | change_tracking | is_external | enable_schema_evolution | owner_role_type | is_event | budget |
|-------------------------------+------+---------------+-------------+-------+---------+------------+------+-------+--------+----------------+-----------------+-------------+-------------------------+-----------------+----------+--------|
| 2023-10-19 10:37:04.858 -0700 | T1 | TESTDB | MY_SCHEMA | TABLE | | | 0 | 0 | PUBLIC | 1 | OFF | N | N | ROLE | N | NULL |
+-------------------------------+------+---------------+-------------+-------+---------+------------+------+-------+--------+----------------+-----------------+-------------+-------------------------+-----------------+----------+--------+
The following statement changes the name of the table to tt1:
ALTER TABLE t1 RENAME TO tt1;
SHOW TABLES LIKE 'tt1';
+-------------------------------+------+---------------+-------------+-------+---------+------------+------+-------+--------+----------------+-----------------+-------------+-------------------------+-----------------+----------+--------+
| created_on | name | database_name | schema_name | kind | comment | cluster_by | rows | bytes | owner | retention_time | change_tracking | is_external | enable_schema_evolution | owner_role_type | is_event | budget |
|-------------------------------+------+---------------+-------------+-------+---------+------------+------+-------+--------+----------------+-----------------+-------------+-------------------------+-----------------+----------+--------|
| 2023-10-19 10:37:04.858 -0700 | TT1 | TESTDB | MY_SCHEMA | TABLE | | | 0 | 0 | PUBLIC | 1 | OFF | N | N | ROLE | N | NULL |
+-------------------------------+------+---------------+-------------+-------+---------+------------+------+-------+--------+----------------+-----------------+-------------+-------------------------+-----------------+----------+--------+
Swapping Tables¶
The following statements create tables named t1 and t2:
CREATE OR REPLACE TABLE t1(a1 NUMBER, a2 VARCHAR, a3 DATE);
CREATE OR REPLACE TABLE t2(b1 VARCHAR);
DESC TABLE t1;
+------+-------------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------+
| name | type | kind | null? | default | primary key | unique key | check | expression | comment | policy name |
|------+-------------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------|
| A1 | NUMBER(38,0) | COLUMN | Y | NULL | N | N | NULL | NULL | NULL | NULL |
| A2 | VARCHAR(16777216) | COLUMN | Y | NULL | N | N | NULL | NULL | NULL | NULL |
| A3 | DATE | COLUMN | Y | NULL | N | N | NULL | NULL | NULL | NULL |
+------+-------------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------+
DESC TABLE t2;
+------+-------------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------+
| name | type | kind | null? | default | primary key | unique key | check | expression | comment | policy name |
|------+-------------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------|
| B1 | VARCHAR(16777216) | COLUMN | Y | NULL | N | N | NULL | NULL | NULL | NULL |
+------+-------------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------+
The following statement swaps table t1 with table t2:
ALTER TABLE t1 SWAP WITH t2;
DESC TABLE t1;
+------+-------------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------+
| name | type | kind | null? | default | primary key | unique key | check | expression | comment | policy name |
|------+-------------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------|
| B1 | VARCHAR(16777216) | COLUMN | Y | NULL | N | N | NULL | NULL | NULL | NULL |
+------+-------------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------+
DESC TABLE t2;
+------+-------------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------+
| name | type | kind | null? | default | primary key | unique key | check | expression | comment | policy name |
|------+-------------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------|
| A1 | NUMBER(38,0) | COLUMN | Y | NULL | N | N | NULL | NULL | NULL | NULL |
| A2 | VARCHAR(16777216) | COLUMN | Y | NULL | N | N | NULL | NULL | NULL | NULL |
| A3 | DATE | COLUMN | Y | NULL | N | N | NULL | NULL | NULL | NULL |
+------+-------------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------+
Adding Columns¶
The following creates a table named t1:
CREATE OR REPLACE TABLE t1(a1 NUMBER);
DESC TABLE t1;
+------+--------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------+
| name | type | kind | null? | default | primary key | unique key | check | expression | comment | policy name |
|------+--------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------|
| A1 | NUMBER(38,0) | COLUMN | Y | NULL | N | N | NULL | NULL | NULL | NULL |
+------+--------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------+
The following statement adds a column named a2 to this table:
ALTER TABLE t1 ADD COLUMN a2 NUMBER;
The following statement adds a column named a3 with a NOT NULL constraint:
ALTER TABLE t1 ADD COLUMN a3 NUMBER NOT NULL;
The following statement adds a column named a4 with a default value and a NOT NULL constraint:
ALTER TABLE t1 ADD COLUMN a4 NUMBER DEFAULT 0 NOT NULL;
DESC TABLE t1;
+------+--------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------+
| name | type | kind | null? | default | primary key | unique key | check | expression | comment | policy name |
|------+--------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------|
| A1 | NUMBER(38,0) | COLUMN | Y | NULL | N | N | NULL | NULL | NULL | NULL |
| A2 | NUMBER(38,0) | COLUMN | Y | NULL | N | N | NULL | NULL | NULL | NULL |
| A3 | NUMBER(38,0) | COLUMN | N | NULL | N | N | NULL | NULL | NULL | NULL |
| A4 | NUMBER(38,0) | COLUMN | N | 0 | N | N | NULL | NULL | NULL | NULL |
+------+--------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------+
The following statement uses the IF NOT EXISTS clause to add a column named a2 only if the column does not exist. There is an existing column named a2. Specifying the IF NOT EXISTS clause prevents the statement from failing with an error.
ALTER TABLE t1 ADD COLUMN IF NOT EXISTS a2 NUMBER;
As shown in the output of the DESCRIBE TABLE command, the statement above has no effect on the existing column named a2:
DESC TABLE t1;
+------+--------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------+
| name | type | kind | null? | default | primary key | unique key | check | expression | comment | policy name |
|------+--------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------|
| A1 | NUMBER(38,0) | COLUMN | Y | NULL | N | N | NULL | NULL | NULL | NULL |
| A2 | NUMBER(38,0) | COLUMN | Y | NULL | N | N | NULL | NULL | NULL | NULL |
| A3 | NUMBER(38,0) | COLUMN | N | NULL | N | N | NULL | NULL | NULL | NULL |
| A4 | NUMBER(38,0) | COLUMN | N | 0 | N | N | NULL | NULL | NULL | NULL |
+------+--------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------+
Renaming Columns¶
The following statement changes the name of the column a1 to b1:
ALTER TABLE t1 RENAME COLUMN a1 TO b1;
DESC TABLE t1;
+------+--------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------+
| name | type | kind | null? | default | primary key | unique key | check | expression | comment | policy name |
|------+--------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------|
| B1 | NUMBER(38,0) | COLUMN | Y | NULL | N | N | NULL | NULL | NULL | NULL |
| A2 | NUMBER(38,0) | COLUMN | Y | NULL | N | N | NULL | NULL | NULL | NULL |
| A3 | NUMBER(38,0) | COLUMN | N | NULL | N | N | NULL | NULL | NULL | NULL |
| A4 | NUMBER(38,0) | COLUMN | N | 0 | N | N | NULL | NULL | NULL | NULL |
+------+--------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------+
Dropping Columns¶
The following statement drops the column a2:
ALTER TABLE t1 DROP COLUMN a2;
DESC TABLE t1;
+------+--------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------+
| name | type | kind | null? | default | primary key | unique key | check | expression | comment | policy name |
|------+--------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------|
| B1 | NUMBER(38,0) | COLUMN | Y | NULL | N | N | NULL | NULL | NULL | NULL |
| A3 | NUMBER(38,0) | COLUMN | N | NULL | N | N | NULL | NULL | NULL | NULL |
| A4 | NUMBER(38,0) | COLUMN | N | 0 | N | N | NULL | NULL | NULL | NULL |
+------+--------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------+
The following statement uses the IF EXISTS clause to drop a column named a2 only if the column exists. There is no existing column named a2. Specifying the IF EXISTS clause prevents the statement from failing with an error.
ALTER TABLE t1 DROP COLUMN IF EXISTS a2;
As shown in the output of the DESCRIBE TABLE command, the statement above has no effect on the existing table:
+------+--------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------+
| name | type | kind | null? | default | primary key | unique key | check | expression | comment | policy name |
|------+--------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------|
| B1 | NUMBER(38,0) | COLUMN | Y | NULL | N | N | NULL | NULL | NULL | NULL |
| A3 | NUMBER(38,0) | COLUMN | N | NULL | N | N | NULL | NULL | NULL | NULL |
| A4 | NUMBER(38,0) | COLUMN | N | 0 | N | N | NULL | NULL | NULL | NULL |
+------+--------------+--------+-------+---------+-------------+------------+-------+------------+---------+-------------+
Adding, Renaming, and Dropping Columns in an External Table¶
The following statement creates an external table named exttable1:
CREATE EXTERNAL TABLE exttable1
LOCATION=@mystage/logs/
AUTO_REFRESH = true
FILE_FORMAT = (TYPE = PARQUET)
;
DESC EXTERNAL TABLE exttable1;
+-----------+-------------------+-----------+-------+---------+-------------+------------+-------+----------------------------------------------------------+-----------------------+
| name | type | kind | null? | default | primary key | unique key | check | expression | comment |
|-----------+-------------------+-----------+-------+---------+-------------+------------+-------+----------------------------------------------------------+-----------------------|
| VALUE | VARIANT | COLUMN | Y | NULL | N | N | NULL | NULL | The value of this row |
+-----------+-------------------+-----------+-------+---------+-------------+------------+-------+----------------------------------------------------------+-----------------------+
The following statement adds a new column named a1 to the external table:
ALTER TABLE exttable1 ADD COLUMN a1 VARCHAR AS (value:a1::VARCHAR);
DESC EXTERNAL TABLE exttable1;
+-----------+-------------------+-----------+-------+---------+-------------+------------+-------+----------------------------------------------------------+-----------------------+
| name | type | kind | null? | default | primary key | unique key | check | expression | comment |
|-----------+-------------------+-----------+-------+---------+-------------+------------+-------+----------------------------------------------------------+-----------------------|
| VALUE | VARIANT | COLUMN | Y | NULL | N | N | NULL | NULL | The value of this row |
| A1 | VARCHAR(16777216) | VIRTUAL | Y | NULL | N | N | NULL | TO_CHAR(GET(VALUE, 'a1')) | NULL |
+-----------+-------------------+-----------+-------+---------+-------------+------------+-------+----------------------------------------------------------+-----------------------+
The following statement changes the name of the a1 column to b1:
ALTER TABLE exttable1 RENAME COLUMN a1 TO b1;
DESC EXTERNAL TABLE exttable1;
+-----------+-------------------+-----------+-------+---------+-------------+------------+-------+----------------------------------------------------------+-----------------------+
| name | type | kind | null? | default | primary key | unique key | check | expression | comment |
|-----------+-------------------+-----------+-------+---------+-------------+------------+-------+----------------------------------------------------------+-----------------------|
| VALUE | VARIANT | COLUMN | Y | NULL | N | N | NULL | NULL | The value of this row |
| B1 | VARCHAR(16777216) | VIRTUAL | Y | NULL | N | N | NULL | TO_CHAR(GET(VALUE, 'a1')) | NULL |
+-----------+-------------------+-----------+-------+---------+-------------+------------+-------+----------------------------------------------------------+-----------------------+
The following statement drops the column named b1:
ALTER TABLE exttable1 DROP COLUMN b1;
DESC EXTERNAL TABLE exttable1;
+-----------+-------------------+-----------+-------+---------+-------------+------------+-------+----------------------------------------------------------+-----------------------+
| name | type | kind | null? | default | primary key | unique key | check | expression | comment |
|-----------+-------------------+-----------+-------+---------+-------------+------------+-------+----------------------------------------------------------+-----------------------|
| VALUE | VARIANT | COLUMN | Y | NULL | N | N | NULL | NULL | The value of this row |
+-----------+-------------------+-----------+-------+---------+-------------+------------+-------+----------------------------------------------------------+-----------------------+
Changing the Order of Clustering Keys¶
The following statement creates a table named t1 that clusters by the id and date columns:
CREATE OR REPLACE TABLE T1 (id NUMBER, date TIMESTAMP_NTZ, name STRING) CLUSTER BY (id, date);
SHOW TABLES LIKE 'T1';
---------------------------------+------+---------------+-------------+-------+---------+------------+------+-------+--------------+----------------+
created_on | name | database_name | schema_name | kind | comment | cluster_by | rows | bytes | owner | retention_time |
---------------------------------+------+---------------+-------------+-------+---------+------------+------+-------+--------------+----------------+
Tue, 21 Jun 2016 15:42:12 -0700 | T1 | TESTDB | TESTSCHEMA | TABLE | | (ID,DATE) | 0 | 0 | ACCOUNTADMIN | 1 |
---------------------------------+------+---------------+-------------+-------+---------+------------+------+-------+--------------+----------------+
The following statement changes the order of the clustering key:
ALTER TABLE t1 CLUSTER BY (date, id);
SHOW TABLES LIKE 'T1';
---------------------------------+------+---------------+-------------+-------+---------+------------+------+-------+--------------+----------------+
created_on | name | database_name | schema_name | kind | comment | cluster_by | rows | bytes | owner | retention_time |
---------------------------------+------+---------------+-------------+-------+---------+------------+------+-------+--------------+----------------+
Tue, 21 Jun 2016 15:42:12 -0700 | T1 | TESTDB | TESTSCHEMA | TABLE | | (DATE,ID) | 0 | 0 | ACCOUNTADMIN | 1 |
---------------------------------+------+---------------+-------------+-------+---------+------------+------+-------+--------------+----------------+
Adding and Dropping Row Access Policies¶
The following example adds a row access policy on a table while specifying a single column. After setting the policy, you can verify by checking the information schema.
ALTER TABLE t1 ADD ROW ACCESS POLICY rap_t1 ON (empl_id);
The following example adds a row access policy while specifying two columns in a single table.
ALTER TABLE t1 ADD ROW ACCESS POLICY rap_test2 ON (cost, item);
The following example drops a row access policy from a table. Verify the policies were dropped by querying the information schema.
ALTER TABLE t1 DROP ROW ACCESS POLICY rap_v1;
The following example shows how to combine adding and dropping row access policies in a single SQL statement for a table. Verify the results by checking the information schema.
ALTER TABLE t1 DROP ROW ACCESS POLICY rap_t1_version_1, ADD ROW ACCESS POLICY rap_t1_version_2 ON (empl_id);