COPY INTO <table>¶

Loads data from staged files to an existing table. The files must already be staged in one of the following locations:

• Named internal stage (or table/user stage). Files can be staged using the PUT command.

• Named external stage that references an external location (Amazon S3, Google Cloud Storage, or Microsoft Azure).

• External location (Amazon S3, Google Cloud Storage, or Microsoft Azure).
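
For example, minimal loads from each kind of internal stage (the stage and table names here, such as my_int_stage and mytable, are illustrative):

COPY INTO mytable FROM @my_int_stage;    -- named internal stage
COPY INTO mytable FROM @%mytable;        -- table stage
COPY INTO mytable FROM @~/staged_files;  -- user stage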

See also: COPY INTO <location>

Syntax¶

/* Standard data load */
COPY INTO [<namespace>.]<table_name>
     FROM { internalStage | externalStage | externalLocation }
[ FILES = ( '<file_name>' [ , '<file_name>' ] [ , ... ] ) ]
[ PATTERN = '<regex_pattern>' ]
[ FILE_FORMAT = ( { FORMAT_NAME = '[<namespace>.]<file_format_name>' |
                    TYPE = { CSV | JSON | AVRO | ORC | PARQUET | XML } [ formatTypeOptions ] } ) ]
[ copyOptions ]
[ VALIDATION_MODE = RETURN_<n>_ROWS | RETURN_ERRORS | RETURN_ALL_ERRORS ]

/* Data load with transformation */
COPY INTO [<namespace>.]<table_name> [ ( <col_name> [ , <col_name> ... ] ) ]
     FROM ( SELECT [<alias>.]$<file_col_num>[.<element>] [ , [<alias>.]$<file_col_num>[.<element>] ... ]
            FROM { internalStage | externalStage } )
[ FILES = ( '<file_name>' [ , '<file_name>' ] [ , ... ] ) ]
[ PATTERN = '<regex_pattern>' ]
[ FILE_FORMAT = ( { FORMAT_NAME = '[<namespace>.]<file_format_name>' |
                    TYPE = { CSV | JSON | AVRO | ORC | PARQUET | XML } [ formatTypeOptions ] } ) ]
[ copyOptions ]


Where:

internalStage ::=
@[<namespace>.]<int_stage_name>[/<path>]
| @[<namespace>.]%<table_name>[/<path>]
| @~[/<path>]

externalStage ::=
@[<namespace>.]<ext_stage_name>[/<path>]

externalLocation (for Amazon S3) ::=
's3://<bucket>[/<path>]'
[ { STORAGE_INTEGRATION = <integration_name> } | { CREDENTIALS = ( AWS_KEY_ID = '<string>' AWS_SECRET_KEY = '<string>' [ AWS_TOKEN = '<string>' ] ) } ]
[ ENCRYPTION = ( [ TYPE = 'AWS_CSE' ] [ MASTER_KEY = '<string>' ] |
                 [ TYPE = 'AWS_SSE_S3' ] |
                 [ TYPE = 'AWS_SSE_KMS' [ KMS_KEY_ID = '<string>' ] ] |
                 [ TYPE = NONE ] ) ]

externalLocation (for Google Cloud Storage) ::=
'gcs://<bucket>[/<path>]'
[ STORAGE_INTEGRATION = <integration_name> ]
[ ENCRYPTION = ( [ TYPE = 'GCS_SSE_KMS' ] [ KMS_KEY_ID = '<string>' ] | [ TYPE = NONE ] ) ]

externalLocation (for Microsoft Azure) ::=
'azure://<account>.blob.core.windows.net/<container>[/<path>]'
[ { STORAGE_INTEGRATION = <integration_name> } | { CREDENTIALS = ( [ AZURE_SAS_TOKEN = <string> ] ) } ]
[ ENCRYPTION = ( [ TYPE = { 'AZURE_CSE' | NONE } ] [ MASTER_KEY = '<string>' ] ) ]

formatTypeOptions ::=
-- If FILE_FORMAT = ( TYPE = CSV ... )
COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE
RECORD_DELIMITER = '<character>' | NONE
FIELD_DELIMITER = '<character>' | NONE
SKIP_HEADER = <integer>
SKIP_BLANK_LINES = TRUE | FALSE
DATE_FORMAT = '<string>' | AUTO
TIME_FORMAT = '<string>' | AUTO
TIMESTAMP_FORMAT = '<string>' | AUTO
BINARY_FORMAT = HEX | BASE64 | UTF8
ESCAPE = '<character>' | NONE
ESCAPE_UNENCLOSED_FIELD = '<character>' | NONE
TRIM_SPACE = TRUE | FALSE
FIELD_OPTIONALLY_ENCLOSED_BY = '<character>' | NONE
NULL_IF = ( '<string>' [ , '<string>' ... ] )
ERROR_ON_COLUMN_COUNT_MISMATCH = TRUE | FALSE
REPLACE_INVALID_CHARACTERS = TRUE | FALSE
VALIDATE_UTF8 = TRUE | FALSE
EMPTY_FIELD_AS_NULL = TRUE | FALSE
SKIP_BYTE_ORDER_MARK = TRUE | FALSE
ENCODING = '<string>' | UTF8
-- If FILE_FORMAT = ( TYPE = JSON ... )
COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE
DATE_FORMAT = '<string>' | AUTO
TIME_FORMAT = '<string>' | AUTO
TIMESTAMP_FORMAT = '<string>' | AUTO
BINARY_FORMAT = HEX | BASE64 | UTF8
TRIM_SPACE = TRUE | FALSE
NULL_IF = ( '<string>' [ , '<string>' ... ] )
ENABLE_OCTAL = TRUE | FALSE
ALLOW_DUPLICATE = TRUE | FALSE
STRIP_OUTER_ARRAY = TRUE | FALSE
STRIP_NULL_VALUES = TRUE | FALSE
REPLACE_INVALID_CHARACTERS = TRUE | FALSE
IGNORE_UTF8_ERRORS = TRUE | FALSE
SKIP_BYTE_ORDER_MARK = TRUE | FALSE
-- If FILE_FORMAT = ( TYPE = AVRO ... )
COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE
TRIM_SPACE = TRUE | FALSE
NULL_IF = ( '<string>' [ , '<string>' ... ] )
-- If FILE_FORMAT = ( TYPE = ORC ... )
TRIM_SPACE = TRUE | FALSE
NULL_IF = ( '<string>' [ , '<string>' ... ] )
-- If FILE_FORMAT = ( TYPE = PARQUET ... )
COMPRESSION = AUTO | SNAPPY | NONE
BINARY_AS_TEXT = TRUE | FALSE
TRIM_SPACE = TRUE | FALSE
NULL_IF = ( '<string>' [ , '<string>' ... ] )
-- If FILE_FORMAT = ( TYPE = XML ... )
COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE
IGNORE_UTF8_ERRORS = TRUE | FALSE
PRESERVE_SPACE = TRUE | FALSE
STRIP_OUTER_ELEMENT = TRUE | FALSE
DISABLE_SNOWFLAKE_DATA = TRUE | FALSE
DISABLE_AUTO_CONVERT = TRUE | FALSE
SKIP_BYTE_ORDER_MARK = TRUE | FALSE
TRIM_SPACE = TRUE | FALSE
NULL_IF = ( '<string>' [ , '<string>' ... ] )

copyOptions ::=
ON_ERROR = { CONTINUE | SKIP_FILE | SKIP_FILE_<num> | SKIP_FILE_<num>% | ABORT_STATEMENT }
SIZE_LIMIT = <num>
PURGE = TRUE | FALSE
RETURN_FAILED_ONLY = TRUE | FALSE
MATCH_BY_COLUMN_NAME = CASE_SENSITIVE | CASE_INSENSITIVE | NONE
ENFORCE_LENGTH = TRUE | FALSE
TRUNCATECOLUMNS = TRUE | FALSE
FORCE = TRUE | FALSE
LOAD_UNCERTAIN_FILES = TRUE | FALSE


Required Parameters¶

[namespace.]table_name

Specifies the name of the table into which data is loaded.

Namespace optionally specifies the database and/or schema for the table, in the form of database_name.schema_name or schema_name. It is optional if a database and schema are currently in use within the user session; otherwise, it is required.

FROM ...

Specifies the internal or external location where the files containing data to be loaded are staged:

• @[namespace.]int_stage_name[/path]: Files are in the specified named internal stage.

• @[namespace.]ext_stage_name[/path]: Files are in the specified named external stage.

• @[namespace.]%table_name[/path]: Files are in the stage for the specified table.

• @~[/path]: Files are in the stage for the current user.

• 's3://bucket[/path]': Files are in the specified external location (S3 bucket). Additional parameters might be required. For details, see Additional Cloud Provider Parameters (in this topic). Note that S3 buckets in non-public AWS regions, such as government regions, are not supported.

• 'gcs://bucket[/path]': Files are in the specified external location (Google Cloud Storage bucket). Additional parameters might be required. For details, see Additional Cloud Provider Parameters (in this topic).

• 'azure://account.blob.core.windows.net/container[/path]': Files are in the specified external location (Azure container). Additional parameters might be required. For details, see Additional Cloud Provider Parameters (in this topic).

Where:

• namespace is the database and/or schema in which the internal or external stage resides, in the form of database_name.schema_name or schema_name. It is optional if a database and schema are currently in use within the user session; otherwise, it is required.

• path is an optional case-sensitive path for files in the cloud storage location (i.e. files have names that begin with a common string) that limits the set of files to load. Paths are alternatively called prefixes or folders by different cloud storage services.

Relative path modifiers such as /./ and /../ are interpreted literally because “paths” are literal prefixes for a name. For example:

-- S3 bucket
COPY INTO mytable FROM 's3://mybucket/./../a.csv';

-- Azure container
COPY INTO mytable FROM 'azure://myaccount.blob.core.windows.net/mycontainer/./../a.csv';


In these COPY statements, Snowflake looks for a file literally named ./../a.csv in the external location.

Note

If the internal or external stage or path name includes special characters, including spaces, enclose the FROM ... string in single quotes.

STORAGE_INTEGRATION = integration_name or CREDENTIALS = ( cloud_specific_credentials )

For use in ad hoc COPY statements (statements that do not reference a named external stage). Specifies the security credentials for connecting to the cloud provider and accessing the private/protected storage container where the data files are staged.

Required only for loading from an external private/protected cloud storage location; not required for public buckets/containers.

Amazon S3

STORAGE_INTEGRATION = integration_name

Specifies the name of the storage integration used to delegate authentication responsibility for external cloud storage to a Snowflake identity and access management (IAM) entity. For more details, see CREATE STORAGE INTEGRATION.

Note

We highly recommend the use of storage integrations. This option avoids the need to supply cloud storage credentials using the CREDENTIALS parameter when creating stages or loading data.
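
For illustration, an ad hoc load from a protected S3 bucket that delegates authentication to a storage integration (the integration name myint and the bucket path are placeholders):

COPY INTO mytable
  FROM 's3://mybucket/data/files/'
  STORAGE_INTEGRATION = myint
  FILE_FORMAT = (TYPE = CSV);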

CREDENTIALS = ( AWS_KEY_ID = 'string' AWS_SECRET_KEY = 'string' [ AWS_TOKEN = 'string' ] )

Specifies the security credentials for connecting to AWS and accessing the private/protected S3 bucket where the files to load are staged. For more information, see Configuring Secure Access to Amazon S3.

The credentials you specify depend on whether you associated the Snowflake access permissions for the bucket with an AWS IAM (Identity & Access Management) user or role:

• IAM user: Temporary IAM credentials are required. Temporary (aka “scoped”) credentials are generated by AWS Security Token Service (STS) and consist of three components:

• AWS_KEY_ID

• AWS_SECRET_KEY

• AWS_TOKEN

All three are required to access a private/protected bucket. After a designated period of time, temporary credentials expire and can no longer be used. You must then generate a new set of valid temporary credentials.

Important

COPY commands contain complex syntax and sensitive information, such as credentials. In addition, they are executed frequently and are often stored in scripts or worksheets, which could lead to sensitive information being inadvertently exposed. The COPY command allows permanent (aka “long-term”) credentials to be used; however, for security reasons, do not use permanent credentials in COPY commands. Instead, use temporary credentials.

If you must use permanent credentials, use external stages, for which credentials are entered once and securely stored, minimizing the potential for exposure.

• IAM role: Omit the security credentials and access keys and, instead, identify the role using AWS_ROLE and specify the AWS role ARN (Amazon Resource Name).

Google Cloud Storage

STORAGE_INTEGRATION = integration_name

Specifies the name of the storage integration used to delegate authentication responsibility for external cloud storage to a Snowflake identity and access management (IAM) entity. For more details, see CREATE STORAGE INTEGRATION.

Microsoft Azure

STORAGE_INTEGRATION = integration_name

Specifies the name of the storage integration used to delegate authentication responsibility for external cloud storage to a Snowflake identity and access management (IAM) entity. For more details, see CREATE STORAGE INTEGRATION.

Note

We highly recommend the use of storage integrations. This option avoids the need to supply cloud storage credentials using the CREDENTIALS parameter when creating stages or loading data.

CREDENTIALS = ( AZURE_SAS_TOKEN = 'string' )

Specifies the SAS (shared access signature) token for connecting to Azure and accessing the private/protected container where the files containing data are staged. Credentials are generated by Azure.
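
A hedged sketch of an ad hoc load from a protected Azure container using a SAS token; the account, container, path, and token value are placeholders:

COPY INTO mytable
  FROM 'azure://myaccount.blob.core.windows.net/mycontainer/data/'
  CREDENTIALS = (AZURE_SAS_TOKEN = '<sas_token>')
  FILE_FORMAT = (TYPE = CSV);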

ENCRYPTION = ( cloud_specific_encryption )

For use in ad hoc COPY statements (statements that do not reference a named external stage). Required only for loading from encrypted files; not required if files are unencrypted. Specifies the encryption settings used to decrypt encrypted files in the storage location.

Amazon S3

ENCRYPTION = ( [ TYPE = 'AWS_CSE' ] [ MASTER_KEY = '<string>' ] | [ TYPE = 'AWS_SSE_S3' ] | [ TYPE = 'AWS_SSE_KMS' [ KMS_KEY_ID = '<string>' ] ] | [ TYPE = NONE ] )

TYPE = ...

Specifies the encryption type used. Possible values are:

• AWS_CSE: Client-side encryption (requires a MASTER_KEY value). Currently, the client-side master key you provide can only be a symmetric key. Note that, when a MASTER_KEY value is provided, Snowflake assumes TYPE = AWS_CSE (i.e. when a MASTER_KEY value is provided, TYPE is not required).

• AWS_SSE_S3: Server-side encryption that requires no additional encryption settings.

• AWS_SSE_KMS: Server-side encryption that accepts an optional KMS_KEY_ID value.

For more information about the encryption types, see the AWS documentation for client-side encryption or server-side encryption.

MASTER_KEY = 'string' (applies to AWS_CSE encryption only)

Specifies the client-side master key used to encrypt the files in the bucket. The master key must be a 128-bit or 256-bit key in Base64-encoded form.

KMS_KEY_ID = 'string' (applies to AWS_SSE_KMS encryption only)

Optionally specifies the ID for the AWS KMS-managed key used to encrypt files unloaded into the bucket. If no value is provided, your default KMS key ID is used to encrypt files on unload.
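
As a sketch, loading client-side-encrypted files from S3 could look like the following; the credentials and the Base64-encoded master key are placeholders, and providing MASTER_KEY implies TYPE = 'AWS_CSE' as noted above:

COPY INTO mytable
  FROM 's3://mybucket/encrypted/'
  CREDENTIALS = (AWS_KEY_ID = '<key_id>' AWS_SECRET_KEY = '<secret_key>')
  ENCRYPTION = (MASTER_KEY = '<base64_key>');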

Google Cloud Storage

ENCRYPTION = ( [ TYPE = 'GCS_SSE_KMS' ] [ KMS_KEY_ID = '<string>' ] | [ TYPE = NONE ] )

TYPE = ...

Specifies the encryption type used. Possible values are:

• GCS_SSE_KMS: Server-side encryption that accepts an optional KMS_KEY_ID value.

KMS_KEY_ID = 'string' (applies to GCS_SSE_KMS encryption only)

Optionally specifies the ID for the Cloud KMS-managed key that is used to encrypt files unloaded into the bucket. If no value is provided, your default KMS key ID set on the bucket is used to encrypt files on unload.

Note that this value is ignored for data loading. The load operation should succeed if the service account has sufficient permissions to decrypt data in the bucket.

Microsoft Azure

ENCRYPTION = ( [ TYPE = 'AZURE_CSE' | NONE ] [ MASTER_KEY = 'string' ] )

TYPE = ...

Specifies the encryption type used. Possible values are:

• AZURE_CSE: Client-side encryption (requires a MASTER_KEY value). For information, see the Client-side encryption information in the Microsoft Azure documentation.

• NONE: No encryption.

MASTER_KEY = 'string' (applies to AZURE_CSE encryption only)

Specifies the client-side master key used to decrypt files. The master key must be a 128-bit or 256-bit key in Base64-encoded form.

Transformation Parameters¶

( SELECT [alias.]$file_col_num[.element] [ , [alias.]$file_col_num[.element] ... ] FROM ... )

Specifies an explicit set of fields/columns (separated by commas) to load from the staged data files. The fields/columns are selected from the files using a standard SQL query (i.e. SELECT list), where:

• alias: Specifies an optional alias.

• file_col_num: Specifies the positional number of the field/column (in the file) that contains the data to be loaded (1 for the first field, 2 for the second field, etc.).

• element: Specifies the path and element name of a repeating value in the data file (applies only to semi-structured data files).

The SELECT list defines a numbered set of fields/columns in the data files you are loading from. The list must match the sequence of columns in the target table. You can use the optional ( col_name [ , col_name ... ] ) parameter to map the list to specific columns in the target table.

Note that the actual field/column order in the data files can be different from the column order in the target table. It is only important that the SELECT list maps fields/columns in the data files to the corresponding columns in the table.

Note

The SELECT statement used for transformations does not support all functions. For a complete list of the supported functions and more details about data loading transformations, including examples, see the usage notes in Transforming Data During a Load.

Also, data loading transformation only supports selecting data from user stages and named stages (internal or external).

( col_name [ , col_name ... ] )

Optionally specifies an explicit list of table columns (separated by commas) into which you want to insert data:

• The first column consumes the values produced from the first field/column extracted from the loaded files.

• The second column consumes the values produced from the second field/column extracted from the loaded files.

• And so on, in the order specified.

Columns cannot be repeated in this listing. Any columns excluded from this column list are populated by their default value (NULL, if not specified). However, excluded columns cannot have a sequence as their default value.
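
A sketch of a transformation load that maps the first two file columns to an explicit column list (the table, column, and stage names are illustrative):

COPY INTO home_sales (city, zip)
  FROM (SELECT t.$1, t.$2 FROM @my_int_stage t)
  FILE_FORMAT = (TYPE = CSV);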

Optional Parameters¶

FILES = ( 'file_name' [ , 'file_name' ... ] )

Specifies a list of one or more file names (separated by commas) to be loaded. The files must already have been staged in either the Snowflake internal location or external location specified in the command.

The maximum number of file names that can be specified is 1000.

Note

For external stages only (Amazon S3, Google Cloud Storage, or Microsoft Azure), the file path is set by concatenating the URL in the stage definition and the list of resolved file names.

However, Snowflake doesn’t insert a separator implicitly between the path and file names. You must explicitly include a separator (/) either at the end of the URL in the stage definition or at the beginning of each file name specified in this parameter.
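
For example, loading two specific staged files (the stage and file names are illustrative):

COPY INTO mytable
  FROM @my_int_stage
  FILES = ('2022/jan/sales01.csv', '2022/jan/sales02.csv');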

PATTERN = 'regex_pattern'

A regular expression pattern string, enclosed in single quotes, specifying the file names and/or paths to match.

Tip

For the best performance, try to avoid applying patterns that filter on a large number of files.
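
For example, a pattern that matches only .csv files whose paths contain the string sales (illustrative):

COPY INTO mytable
  FROM @my_int_stage
  PATTERN = '.*sales.*[.]csv';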

FILE_FORMAT = ( FORMAT_NAME = 'file_format_name' ) or FILE_FORMAT = ( TYPE = CSV | JSON | AVRO | ORC | PARQUET | XML [ ... ] )

Specifies the format of the data files to load:

FORMAT_NAME = 'file_format_name'

Specifies an existing named file format to use for loading data into the table. The named file format determines the format type (CSV, JSON, etc.), as well as any other format options, for the data files. For more information, see CREATE FILE FORMAT.

TYPE = CSV | JSON | AVRO | ORC | PARQUET | XML [ ... ]

Specifies the type of files to load into the table. If a format type is specified, then additional format-specific options can be specified. For more details, see Format Type Options (in this topic).

Note

FORMAT_NAME and TYPE are mutually exclusive; specifying both in the same COPY command might result in unexpected behavior.
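
For illustration, the two mutually exclusive forms; my_csv_format is a hypothetical named file format:

-- Reference an existing named file format
COPY INTO mytable FROM @my_int_stage
  FILE_FORMAT = (FORMAT_NAME = 'my_csv_format');

-- Specify the type and options inline
COPY INTO mytable FROM @my_int_stage
  FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = '|' SKIP_HEADER = 1);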

COPY_OPTIONS = ( ... )

Specifies one or more copy options for the loaded data. For more details, see Copy Options (in this topic).

VALIDATION_MODE = RETURN_n_ROWS | RETURN_ERRORS | RETURN_ALL_ERRORS

String (constant) that instructs the COPY command to validate the data files instead of loading them into the specified table; i.e. the COPY command tests the files for errors but does not load them. The command validates the data to be loaded and returns results based on the validation option specified:

• RETURN_n_ROWS (e.g. RETURN_10_ROWS): Validates the specified number of rows, if no errors are encountered; otherwise, fails at the first error encountered in the rows.

• RETURN_ERRORS: Returns all errors (parsing, conversion, etc.) across all files specified in the COPY statement.

• RETURN_ALL_ERRORS: Returns all errors across all files specified in the COPY statement, including files with errors that were partially loaded during an earlier load because the ON_ERROR copy option was set to CONTINUE during the load.

Note

• VALIDATION_MODE does not support COPY statements that transform data during a load. If the parameter is specified, the COPY statement returns an error.

• Use the VALIDATE table function to view all errors encountered during a previous load. Note that this function also does not support COPY statements that transform data during a load.
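
For example, a sketch that checks all staged files for errors without loading any rows (stage and table names are illustrative):

COPY INTO mytable FROM @my_int_stage
  VALIDATION_MODE = 'RETURN_ERRORS';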

Format Type Options (formatTypeOptions)¶

Depending on the file format type specified (FILE_FORMAT = ( TYPE = ... )), you can include one or more of the following format-specific options (separated by blank spaces, commas, or new lines):

TYPE = CSV¶

COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE

String (constant) that specifies the current compression algorithm for the data files to be loaded. Snowflake uses this option to detect how already-compressed data files were compressed so that the compressed data in the files can be extracted for loading.

• AUTO: Compression algorithm detected automatically, except for Brotli-compressed files, which cannot currently be detected automatically. If loading Brotli-compressed files, explicitly use BROTLI instead of AUTO.

• GZIP

• BZ2

• BROTLI

• ZSTD: Zstandard v0.8 (and higher) supported.

• DEFLATE: Deflate-compressed files (with zlib header, RFC1950).

• RAW_DEFLATE: Raw Deflate-compressed files (without header, RFC1951).

• NONE: Data files to load have not been compressed.

Default: AUTO

RECORD_DELIMITER = 'character' | NONE

One or more singlebyte or multibyte characters that separate records in an input file.

Accepts common escape sequences, octal values (prefixed by \\), or hex values (prefixed by 0x). For example, for records delimited by the thorn (Þ) character, specify the octal (\\336) or hex (0xDE) value. Also accepts a value of NONE.

The specified delimiter must be a valid UTF-8 character and not a random sequence of bytes.

Multiple-character delimiters are also supported; however, the delimiter for RECORD_DELIMITER or FIELD_DELIMITER cannot be a substring of the delimiter for the other file format option (e.g. FIELD_DELIMITER = 'aa' RECORD_DELIMITER = 'aabb'). The delimiter is limited to a maximum of 20 characters. Do not specify characters used for other file format options such as ESCAPE or ESCAPE_UNENCLOSED_FIELD.

Default: New line character. Note that “new line” is logical such that \r\n will be understood as a new line for files on a Windows platform.

FIELD_DELIMITER = 'character' | NONE

One or more singlebyte or multibyte characters that separate fields in an input file.

Accepts common escape sequences, octal values (prefixed by \\), or hex values (prefixed by 0x). For example, for fields delimited by the thorn (Þ) character, specify the octal (\\336) or hex (0xDE) value. Also accepts a value of NONE.

The specified delimiter must be a valid UTF-8 character and not a random sequence of bytes.

Multiple-character delimiters are also supported; however, the delimiter for RECORD_DELIMITER or FIELD_DELIMITER cannot be a substring of the delimiter for the other file format option (e.g. FIELD_DELIMITER = 'aa' RECORD_DELIMITER = 'aabb'). The delimiter is limited to a maximum of 20 characters. Do not specify characters used for other file format options such as ESCAPE or ESCAPE_UNENCLOSED_FIELD.

Default: comma (,)

SKIP_HEADER = integer

Number of lines at the start of the file to skip.

Note that SKIP_HEADER does not use the RECORD_DELIMITER or FIELD_DELIMITER values to determine what a header line is; rather, it simply skips the specified number of CRLF (Carriage Return, Line Feed)-delimited lines in the file. RECORD_DELIMITER and FIELD_DELIMITER are then used to determine the rows of data to load.

Default: 0
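
Putting several of the preceding CSV options together, a sketch for pipe-delimited files with one header line (stage and table names are illustrative):

COPY INTO mytable FROM @my_int_stage
  FILE_FORMAT = (TYPE = CSV
                 FIELD_DELIMITER = '|'
                 SKIP_HEADER = 1);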

DATE_FORMAT = 'string' | AUTO

String that defines the format of date values in the data files to be loaded. If a value is not specified or is AUTO, the value for the DATE_INPUT_FORMAT session parameter is used.

Default: AUTO

TIME_FORMAT = 'string' | AUTO

String that defines the format of time values in the data files to be loaded. If a value is not specified or is AUTO, the value for the TIME_INPUT_FORMAT session parameter is used.

Default: AUTO

TIMESTAMP_FORMAT = 'string' | AUTO

String that defines the format of timestamp values in the data files to be loaded. If a value is not specified or is AUTO, the value for the TIMESTAMP_INPUT_FORMAT session parameter is used.

Default: AUTO

BINARY_FORMAT = HEX | BASE64 | UTF8

String (constant) that defines the encoding format for binary input or output. This option only applies when loading data into binary columns in a table.

Default: HEX

ESCAPE = 'character' | NONE

Single character string used as the escape character for field values. An escape character invokes an alternative interpretation on subsequent characters in a character sequence. You can use the ESCAPE character to interpret instances of the FIELD_DELIMITER, RECORD_DELIMITER, or FIELD_OPTIONALLY_ENCLOSED_BY characters in the data as literals. The escape character can also be used to escape instances of itself in the data.

Accepts common escape sequences, octal values, or hex values.

Specifies the escape character for enclosed fields. Specify the character used to enclose fields by setting FIELD_OPTIONALLY_ENCLOSED_BY.

Default: NONE

ESCAPE_UNENCLOSED_FIELD = 'character' | NONE

Single character string used as the escape character for unenclosed field values only. An escape character invokes an alternative interpretation on subsequent characters in a character sequence. You can use the ESCAPE character to interpret instances of the FIELD_DELIMITER or RECORD_DELIMITER characters in the data as literals. The escape character can also be used to escape instances of itself in the data.

Accepts common escape sequences, octal values, or hex values.

Default: backslash (\\)

TRIM_SPACE = TRUE | FALSE

Boolean that specifies whether to remove white space from fields.

For example, if your external database software encloses fields in quotes, but inserts a leading space, Snowflake reads the leading space rather than the opening quotation character as the beginning of the field (i.e. the quotation marks are interpreted as part of the string of field data). Use this option to remove undesirable spaces during the data load.

As another example, if leading or trailing space surrounds quotes that enclose strings, you can remove the surrounding space using the TRIM_SPACE option and the quote character using the FIELD_OPTIONALLY_ENCLOSED_BY option. Note that any space within the quotes is preserved.

For example, assuming the field delimiter is | and FIELD_OPTIONALLY_ENCLOSED_BY = '"':

|"Hello world"|
|" Hello world "|
| "Hello world" |


becomes:

+---------------+
| C1            |
|---------------|
| Hello world   |
|  Hello world  |
| Hello world   |
+---------------+


Default: FALSE

FIELD_OPTIONALLY_ENCLOSED_BY = 'character' | NONE

Character used to enclose strings. Value can be NONE, single quote character ('), or double quote character ("). To use the single quote character, use the octal or hex representation (0x27) or the double single-quoted escape ('').

When a field contains this character, escape it using the same character. For example, if the value is the double quote character and a field contains the string A "B" C, escape the double quotes as follows:

A ""B"" C

Default: NONE

NULL_IF = ( 'string1' [ , 'string2' ... ] )

String used to convert to and from SQL NULL. Snowflake replaces these strings in the data load source with SQL NULL. To specify more than one string, enclose the list of strings in parentheses and use commas to separate each value.

For example:

NULL_IF = ('\\N', 'NULL', 'NUL', '')

Note that this option can include empty strings.

Default: \\N (i.e. NULL, which assumes the ESCAPE_UNENCLOSED_FIELD value is \\ (default))

ERROR_ON_COLUMN_COUNT_MISMATCH = TRUE | FALSE

Boolean that specifies whether to generate a parsing error if the number of delimited columns (i.e. fields) in an input data file does not match the number of columns in the corresponding table.

If set to FALSE, an error is not generated and the load continues. If the file is successfully loaded:

• If the input file contains records with more fields than columns in the table, the matching fields are loaded in order of occurrence in the file and the remaining fields are not loaded.

• If the input file contains records with fewer fields than columns in the table, the non-matching columns in the table are loaded with NULL values.

This option assumes all the records within the input file are the same length (i.e. a file containing records of varying length returns an error regardless of the value specified for this option).

Default: TRUE

Note

When transforming data during loading (i.e. using a query as the source for the COPY command), this option is ignored. There is no requirement for your data files to have the same number and ordering of columns as your target table.

REPLACE_INVALID_CHARACTERS = TRUE | FALSE

Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character (�). The copy option performs a one-to-one character replacement.

If set to TRUE, Snowflake replaces invalid UTF-8 characters with the Unicode replacement character.

If set to FALSE, the load operation produces an error when invalid UTF-8 character encoding is detected.

Default: FALSE

VALIDATE_UTF8 = TRUE | FALSE

Boolean that specifies whether to validate UTF-8 character encoding in string column data.

If set to TRUE, Snowflake validates UTF-8 character encoding in string column data. When invalid UTF-8 character encoding is detected, the COPY command produces an error.

Default: TRUE

Important

This option is provided only to ensure backward compatibility with earlier versions of Snowflake. You should not disable this option unless instructed by Snowflake Support.

EMPTY_FIELD_AS_NULL = TRUE | FALSE

Boolean that specifies whether to insert SQL NULL for empty fields in an input file, which are represented by two successive delimiters (e.g. ,,).

If set to FALSE, Snowflake attempts to cast an empty field to the corresponding column type. An empty string is inserted into columns of type STRING. For other column types, the COPY command produces an error.

Default: TRUE

SKIP_BYTE_ORDER_MARK = TRUE | FALSE

Boolean that specifies whether to skip the BOM (byte order mark), if present in a data file. A BOM is a character code at the beginning of a data file that defines the byte order and encoding form.

If set to FALSE, Snowflake recognizes any BOM in data files, which could result in the BOM either causing an error or being merged into the first column in the table.

Default: TRUE

ENCODING = 'string'

String (constant) that specifies the character set of the source data.

The supported character sets are listed below, as ENCODING value (character set), followed by the supported languages:

• BIG5 (Big5): Traditional Chinese

• EUCJP (EUC-JP): Japanese

• EUCKR (EUC-KR): Korean

• GB18030 (GB18030): Chinese

• IBM420 (IBM420): Arabic

• IBM424 (IBM424): Hebrew

• ISO2022CN (ISO-2022-CN): Simplified Chinese

• ISO2022JP (ISO-2022-JP): Japanese

• ISO2022KR (ISO-2022-KR): Korean

• ISO88591 (ISO-8859-1): Danish, Dutch, English, French, German, Italian, Norwegian, Portuguese, Swedish

• ISO88592 (ISO-8859-2): Czech, Hungarian, Polish, Romanian

• ISO88595 (ISO-8859-5): Russian

• ISO88596 (ISO-8859-6): Arabic

• ISO88597 (ISO-8859-7): Greek

• ISO88598 (ISO-8859-8): Hebrew

• ISO88599 (ISO-8859-9): Turkish

• KOI8R (KOI8-R): Russian

• SHIFTJIS (Shift_JIS): Japanese

• UTF8 (UTF-8): All languages. For loading data from delimited files (CSV, TSV, etc.), UTF-8 is the default. For loading data from all other supported file formats (JSON, Avro, etc.), as well as unloading data, UTF-8 is the only supported character set.

• UTF16 (UTF-16): All languages

• UTF16BE (UTF-16BE): All languages

• UTF16LE (UTF-16LE): All languages

• UTF32 (UTF-32): All languages

• UTF32BE (UTF-32BE): All languages

• UTF32LE (UTF-32LE): All languages

• WINDOWS1250 (windows-1250): Czech, Hungarian, Polish, Romanian

• WINDOWS1251 (windows-1251): Russian

• WINDOWS1252 (windows-1252): Danish, Dutch, English, French, German, Italian, Norwegian, Portuguese, Swedish

• WINDOWS1253 (windows-1253): Greek

• WINDOWS1254 (windows-1254): Turkish

• WINDOWS1255 (windows-1255): Hebrew

• WINDOWS1256 (windows-1256): Arabic

Default: UTF8

Note

Snowflake stores all data internally in the UTF-8 character set. The data is converted into UTF-8 before it is loaded into Snowflake. If the VALIDATE_UTF8 file format option is TRUE, Snowflake validates the UTF-8 character encoding in string column data after it is converted from its original character encoding.

TYPE = JSON¶

COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE

String (constant) that specifies the current compression algorithm for the data files to be loaded. Snowflake uses this option to detect how already-compressed data files were compressed so that the compressed data in the files can be extracted for loading.

• AUTO: Compression algorithm detected automatically, except for Brotli-compressed files, which cannot currently be detected automatically. If loading Brotli-compressed files, explicitly use BROTLI instead of AUTO.

• GZIP

• BZ2

• BROTLI

• ZSTD

• DEFLATE: Deflate-compressed files (with zlib header, RFC1950).

• RAW_DEFLATE: Raw Deflate-compressed files (without header, RFC1951).

• NONE

Default: AUTO

DATE_FORMAT = 'string' | AUTO

Applied only when loading JSON data into separate columns (i.e. using the MATCH_BY_COLUMN_NAME copy option or a COPY transformation). Defines the format of date string values in the data files. If a value is not specified or is AUTO, the value for the DATE_INPUT_FORMAT parameter is used.

This file format option is currently a Preview Feature.

Default: AUTO

TIME_FORMAT = 'string' | AUTO

Applied only when loading JSON data into separate columns (i.e. using the MATCH_BY_COLUMN_NAME copy option or a COPY transformation). Defines the format of time string values in the data files. If a value is not specified or is AUTO, the value for the TIME_INPUT_FORMAT parameter is used.

This file format option is currently a Preview Feature.

Default: AUTO

TIMESTAMP_FORMAT = 'string' | AUTO

Applied only when loading JSON data into separate columns (i.e. using the MATCH_BY_COLUMN_NAME copy option or a COPY transformation). Defines the format of timestamp string values in the data files. If a value is not specified or is AUTO, the value for the TIMESTAMP_INPUT_FORMAT parameter is used.

This file format option is currently a Preview Feature.

Default: AUTO

BINARY_FORMAT = HEX | BASE64 | UTF8

Applied only when loading JSON data into separate columns (i.e. using the MATCH_BY_COLUMN_NAME copy option or a COPY transformation). Defines the encoding format for binary string values in the data files. The option can be used when loading data into binary columns in a table.

This file format option is currently a Preview Feature.

Default: HEX

TRIM_SPACE = TRUE | FALSE

Applied only when loading JSON data into separate columns (i.e. using the MATCH_BY_COLUMN_NAME copy option or a COPY transformation). Boolean that specifies whether to remove leading and trailing white space from strings.

For example, if your external database software encloses fields in quotes, but inserts a leading space, Snowflake reads the leading space rather than the opening quotation character as the beginning of the field (i.e. the quotation marks are interpreted as part of the string of field data). Set this option to TRUE to remove undesirable spaces during the data load.

This file format option is currently a Preview Feature.

Default: FALSE

NULL_IF = ( 'string1' [ , 'string2' , ... ] )

Applied only when loading JSON data into separate columns (i.e. using the MATCH_BY_COLUMN_NAME copy option or a COPY transformation). String used to convert to and from SQL NULL. Snowflake replaces these strings in the data load source with SQL NULL. To specify more than one string, enclose the list of strings in parentheses and use commas to separate each value.

For example:

NULL_IF = ('\\N', 'NULL', 'NUL', '')

Note that this option can include empty strings.

This file format option is currently a Preview Feature.

Default: \\N (i.e. NULL, which assumes the ESCAPE_UNENCLOSED_FIELD value is \\)

ENABLE_OCTAL = TRUE | FALSE

Boolean that enables parsing of octal numbers.

Default: FALSE

ALLOW_DUPLICATE = TRUE | FALSE

Boolean that allows duplicate object field names (only the last one will be preserved).

Default: FALSE

STRIP_OUTER_ARRAY = TRUE | FALSE

Boolean that instructs the JSON parser to remove outer brackets [ ].

Default: FALSE
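
For example, a file whose entire content is one JSON array (e.g. [ {...}, {...} ]) can be loaded as one row per array element by stripping the outer brackets; the table and stage names are illustrative, and the target table is assumed to have a single VARIANT column:

COPY INTO raw_json  -- assumed: a table with one VARIANT column
  FROM @my_int_stage/data.json
  FILE_FORMAT = (TYPE = JSON STRIP_OUTER_ARRAY = TRUE);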

STRIP_NULL_VALUES = TRUE | FALSE

Boolean that instructs the JSON parser to remove object fields or array elements containing null values. For example, when set to TRUE:

• [null] becomes []

• [null,null,3] becomes [,,3]

• {"a":null,"b":null,"c":123} becomes {"c":123}

• {"a":[1,null,2],"b":{"x":null,"y":88}} becomes {"a":[1,,2],"b":{"y":88}}

Default: FALSE

REPLACE_INVALID_CHARACTERS = TRUE | FALSE

Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character (�). The copy option performs a one-to-one character replacement.

This file format option is currently a Preview Feature.

If set to TRUE, Snowflake replaces invalid UTF-8 characters with the Unicode replacement character.

If set to FALSE, the load operation produces an error when invalid UTF-8 character encoding is detected.

Default: FALSE

IGNORE_UTF8_ERRORS = TRUE | FALSE

Boolean that specifies whether UTF-8 encoding errors produce error conditions. If set to TRUE, any invalid UTF-8 sequences are silently replaced with the Unicode character U+FFFD (i.e. “replacement character”).

Note

This copy option removes all non-UTF-8 characters during the data load, but there is no guarantee of a one-to-one character replacement. We recommend using the REPLACE_INVALID_CHARACTERS copy option instead.

Default: FALSE

SKIP_BYTE_ORDER_MARK = TRUE | FALSE

Boolean that specifies whether to skip any BOM (byte order mark) present in an input file. A BOM is a character code at the beginning of a data file that defines the byte order and encoding form.

If set to FALSE, Snowflake recognizes any BOM in data files, which could result in the BOM either causing an error or being merged into the first column in the table.

Default: TRUE

TYPE = AVRO¶

COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE

String (constant) that specifies the current compression algorithm for the data files to be loaded. Snowflake uses this option to detect how already-compressed data files were compressed so that the compressed data in the files can be extracted for loading.

• AUTO: Compression algorithm detected automatically, except for Brotli-compressed files, which cannot currently be detected automatically. If loading Brotli-compressed files, explicitly use BROTLI instead of AUTO.

• GZIP

• BZ2

• BROTLI

• ZSTD

• DEFLATE: Deflate-compressed files (with zlib header, RFC1950).

• RAW_DEFLATE: Raw Deflate-compressed files (without header, RFC1951).

• NONE: Data files to load have not been compressed.

Default: AUTO

TRIM_SPACE = TRUE | FALSE

Applied only when loading Avro data into separate columns (i.e. using the MATCH_BY_COLUMN_NAME copy option or a COPY transformation). Boolean that specifies whether to remove leading and trailing white space from strings.

For example, if your external database software encloses fields in quotes, but inserts a leading space, Snowflake reads the leading space rather than the opening quotation character as the beginning of the field (i.e. the quotation marks are interpreted as part of the string of field data). Set this option to TRUE to remove undesirable spaces during the data load.

This file format option is currently a Preview Feature.

Default: FALSE

NULL_IF = ( 'string1' [ , 'string2' , ... ] )

Applied only when loading Avro data into separate columns (i.e. using the MATCH_BY_COLUMN_NAME copy option or a COPY transformation). String used to convert to and from SQL NULL. Snowflake replaces these strings in the data load source with SQL NULL. To specify more than one string, enclose the list of strings in parentheses and use commas to separate each value.

For example:

NULL_IF = ('\\N', 'NULL', 'NUL', '')

Note that this option can include empty strings.

This file format option is currently a Preview Feature.

Default: \\N (i.e. NULL, which assumes the ESCAPE_UNENCLOSED_FIELD value is \\)

TYPE = ORC¶

TRIM_SPACE = TRUE | FALSE

Applied only when loading ORC data into separate columns (i.e. using the MATCH_BY_COLUMN_NAME copy option or a COPY transformation). Boolean that specifies whether to remove leading and trailing white space from strings.

For example, if your external database software encloses fields in quotes, but inserts a leading space, Snowflake reads the leading space rather than the opening quotation character as the beginning of the field (i.e. the quotation marks are interpreted as part of the string of field data). Set this option to TRUE to remove undesirable spaces during the data load.

This file format option is currently a Preview Feature.

Default: FALSE

NULL_IF = ( 'string1' [ , 'string2' , ... ] )

Applied only when loading ORC data into separate columns (i.e. using the MATCH_BY_COLUMN_NAME copy option or a COPY transformation). String used to convert to and from SQL NULL. Snowflake replaces these strings in the data load source with SQL NULL. To specify more than one string, enclose the list of strings in parentheses and use commas to separate each value.

For example:

NULL_IF = ('\\N', 'NULL', 'NUL', '')

Note that this option can include empty strings.

This file format option is currently a Preview Feature.

Default: \\N (i.e. NULL, which assumes the ESCAPE_UNENCLOSED_FIELD value is \\)

TYPE = PARQUET¶

COMPRESSION = AUTO | SNAPPY | NONE

String (constant) that specifies the current compression algorithm for the data files to be loaded. Snowflake uses this option to detect how already-compressed data files were compressed so that the compressed data in the files can be extracted for loading.

• AUTO: Compression algorithm detected automatically. Supports the following compression algorithms: Brotli, gzip, Lempel–Ziv–Oberhumer (LZO), LZ4, Snappy, or Zstandard v0.8 (and higher).

• SNAPPY

• NONE: Data files to load have not been compressed.

Default: AUTO

BINARY_AS_TEXT = TRUE | FALSE

Boolean that specifies whether to interpret columns with no defined logical data type as UTF-8 text. When set to FALSE, Snowflake interprets these columns as binary data.

Default: TRUE

TRIM_SPACE = TRUE | FALSE

Applied only when loading Parquet data into separate columns (i.e. using the MATCH_BY_COLUMN_NAME copy option or a COPY transformation). Boolean that specifies whether to remove leading and trailing white space from strings.

For example, if your external database software encloses fields in quotes, but inserts a leading space, Snowflake reads the leading space rather than the opening quotation character as the beginning of the field (i.e. the quotation marks are interpreted as part of the string of field data). Set this option to TRUE to remove undesirable spaces during the data load.

This file format option is currently a Preview Feature.

Default: FALSE

NULL_IF = ( 'string1' [ , 'string2' , ... ] )

Applied only when loading Parquet data into separate columns (i.e. using the MATCH_BY_COLUMN_NAME copy option or a COPY transformation). String used to convert to and from SQL NULL. Snowflake replaces these strings in the data load source with SQL NULL. To specify more than one string, enclose the list of strings in parentheses and use commas to separate each value.

For example:

NULL_IF = ('\\N', 'NULL', 'NUL', '')

Note that this option can include empty strings.

This file format option is currently a Preview Feature.

Default: \\N (i.e. NULL, which assumes the ESCAPE_UNENCLOSED_FIELD value is \\)

TYPE = XML¶

COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE

String (constant) that specifies the current compression algorithm for the data files to be loaded. Snowflake uses this option to detect how already-compressed data files were compressed so that the compressed data in the files can be extracted for loading.

• AUTO: Compression algorithm detected automatically, except for Brotli-compressed files, which cannot currently be detected automatically. If loading Brotli-compressed files, explicitly use BROTLI instead of AUTO.

• GZIP

• BZ2

• BROTLI

• ZSTD

• DEFLATE: Deflate-compressed files (with zlib header, RFC1950).

• RAW_DEFLATE: Raw Deflate-compressed files (without header, RFC1951).

• NONE: Data files to load have not been compressed.

Default: AUTO

IGNORE_UTF8_ERRORS = TRUE | FALSE

Boolean that specifies whether UTF-8 encoding errors produce error conditions. If set to TRUE, any invalid UTF-8 sequences are silently replaced with Unicode character U+FFFD (i.e. “replacement character”).

Default: FALSE

PRESERVE_SPACE = TRUE | FALSE

Boolean that specifies whether the XML parser preserves leading and trailing spaces in element content.

Default: FALSE

STRIP_OUTER_ELEMENT = TRUE | FALSE

Boolean that specifies whether the XML parser strips out the outer XML element, exposing 2nd level elements as separate documents.

Default: FALSE

DISABLE_SNOWFLAKE_DATA = TRUE | FALSE

Boolean that specifies whether the XML parser disables recognition of Snowflake semi-structured data tags.

Default: FALSE

DISABLE_AUTO_CONVERT = TRUE | FALSE

Boolean that specifies whether the XML parser disables automatic conversion of numeric and Boolean values from text to native representation.

Default: FALSE

SKIP_BYTE_ORDER_MARK = TRUE | FALSE

Boolean that specifies whether to skip any BOM (byte order mark) present in an input file. A BOM is a character code at the beginning of a data file that defines the byte order and encoding form.

If set to FALSE, Snowflake recognizes any BOM in data files, which could result in the BOM either causing an error or being merged into the first column in the table.

Default: TRUE

TRIM_SPACE = TRUE | FALSE

Applied only when loading XML data into separate columns (i.e. using the MATCH_BY_COLUMN_NAME copy option or a COPY transformation). Boolean that specifies whether to remove leading and trailing white space from strings.

For example, if your external database software encloses fields in quotes, but inserts a leading space, Snowflake reads the leading space rather than the opening quotation character as the beginning of the field (i.e. the quotation marks are interpreted as part of the string of field data). Set this option to TRUE to remove undesirable spaces during the data load.

This file format option is currently a Preview Feature.

Default: FALSE

NULL_IF = ( 'string1' [ , 'string2' , ... ] )

Applied only when loading XML data into separate columns (i.e. using the MATCH_BY_COLUMN_NAME copy option or a COPY transformation). String used to convert to and from SQL NULL. Snowflake replaces these strings in the data load source with SQL NULL. To specify more than one string, enclose the list of strings in parentheses and use commas to separate each value.

For example:

NULL_IF = ('\\N', 'NULL', 'NUL', '')

Note that this option can include empty strings.

This file format option is currently a Preview Feature.

Default: \\N (i.e. NULL, which assumes the ESCAPE_UNENCLOSED_FIELD value is \\)

Copy Options (copyOptions)¶

You can specify one or more of the following copy options (separated by blank spaces, commas, or new lines):

ON_ERROR = CONTINUE | SKIP_FILE | SKIP_FILE_num | SKIP_FILE_num% | ABORT_STATEMENT
Definition

String (constant) that specifies the action to perform when an error is encountered while loading data from a file:

Values

• CONTINUE: Continue loading the file. The COPY statement returns an error message for a maximum of one error encountered per data file. Note that the difference between the ROWS_PARSED and ROWS_LOADED column values represents the number of rows that include detected errors. However, each of these rows could include multiple errors. To view all errors in the data files, use the VALIDATION_MODE parameter or query the VALIDATE function.

• SKIP_FILE: Skip a file if any errors are encountered in it.

• SKIP_FILE_num (e.g. SKIP_FILE_10): Skip a file when the number of errors in the file equals or exceeds the specified number.

• SKIP_FILE_num% (e.g. SKIP_FILE_10%): Skip a file when the percentage of errors in the file exceeds the specified percentage.

• ABORT_STATEMENT: Abort the load operation if any error is encountered in a data file. Note that the load operation is not aborted if the data file cannot be found (e.g. because it does not exist or cannot be accessed).

Note

The following limitations currently apply:

• All ON_ERROR values work as expected when loading structured delimited data files (CSV, TSV, etc.) for both parsing and transformation errors.

• Semi-structured data files (JSON, Avro, ORC, Parquet, or XML) currently do not support the same behavior semantics as structured data files for the following ON_ERROR values: CONTINUE, SKIP_FILE_num, or SKIP_FILE_num% due to the design of those formats.

• Parquet and ORC data only. When ON_ERROR is set to CONTINUE, SKIP_FILE_num, or SKIP_FILE_num%, any parsing error results in the data file being skipped. Any conversion or transformation errors use the default behavior of COPY (ABORT_STATEMENT) or Snowpipe (SKIP_FILE) regardless of selected option value.

• JSON, XML, and Avro data only. When ON_ERROR is set to CONTINUE, SKIP_FILE_num, or SKIP_FILE_num%, the records up to the parsing error location are loaded while the remainder of the data file will be skipped. Any conversion or transformation errors use the default behavior of COPY (ABORT_STATEMENT) or Snowpipe (SKIP_FILE) regardless of selected option value.

Default

ABORT_STATEMENT for bulk loading using COPY; SKIP_FILE for Snowpipe.
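
For example, a sketch that tolerates a small error rate per file, skipping any file in which more than 5% of rows fail (names are illustrative):

COPY INTO mytable FROM @my_int_stage
  ON_ERROR = 'SKIP_FILE_5%';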

SIZE_LIMIT = num
Definition

Number (> 0) that specifies the maximum size (in bytes) of data to be loaded for a given COPY statement. When the threshold is exceeded, the COPY operation discontinues loading files. This option is commonly used to load a common group of files using multiple COPY statements. For each statement, the data load continues until the specified SIZE_LIMIT is exceeded, before moving on to the next statement.

For example, suppose a set of files in a stage path were each 10 MB in size. If multiple COPY statements set SIZE_LIMIT to 25000000 (25 MB), each would load 3 files. That is, each COPY operation would discontinue after the SIZE_LIMIT threshold was exceeded.

Note that at least one file is loaded regardless of the value specified for SIZE_LIMIT unless there is no file to be loaded.

Default

null (no size limit)

PURGE = TRUE | FALSE
Definition

Boolean that specifies whether to remove the data files from the stage automatically after the data is loaded successfully.

If this option is set to TRUE, note that a best effort is made to remove successfully loaded data files. If the purge operation fails for any reason, no error is returned currently. We recommend that you list staged files periodically (using LIST) and manually remove successfully loaded files, if any exist.

Default

FALSE

RETURN_FAILED_ONLY = TRUE | FALSE
Definition

Boolean that specifies whether to return only files that have failed to load in the statement result.

Default

FALSE

MATCH_BY_COLUMN_NAME = CASE_SENSITIVE | CASE_INSENSITIVE | NONE
Definition

String that specifies whether to load semi-structured data into columns in the target table that match corresponding columns represented in the data.

This copy option is supported for the following data formats:

• JSON

• Avro

• ORC

• Parquet

For a column to match, the following criteria must be true:

• The column represented in the data must have the exact same name as the column in the table. The copy option supports case sensitivity for column names. Column order does not matter.

• The column in the table must have a data type that is compatible with the values in the column represented in the data. For example, string, number, and Boolean values can all be loaded into a variant column.

Values
CASE_SENSITIVE | CASE_INSENSITIVE

Load semi-structured data into columns in the target table that match corresponding columns represented in the data. Column names are either case-sensitive (CASE_SENSITIVE) or case-insensitive (CASE_INSENSITIVE).

The COPY operation verifies that at least one column in the target table matches a column represented in the data files. If a match is found, the values in the data files are loaded into the column or columns. If no match is found, a set of NULL values for each record in the files is loaded into the table.

Note

• If additional non-matching columns are present in the data files, the values in these columns are not loaded.

• If additional non-matching columns are present in the target table, the COPY operation inserts NULL values into these columns. These columns must support NULL values.

• The COPY statement does not allow specifying a query to further transform the data during the load (i.e. COPY transformation).

NONE

The COPY operation loads the semi-structured data into a variant column or, if a query is included in the COPY statement, transforms the data.

Default

NONE

Note

The following limitations currently apply:

• MATCH_BY_COLUMN_NAME cannot be used with the VALIDATION_MODE parameter in a COPY statement to validate the staged data rather than load it into the target table.

• Parquet data only. When MATCH_BY_COLUMN_NAME is set to CASE_SENSITIVE or CASE_INSENSITIVE, an empty column value (e.g. "col1": "") produces an error.
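
A sketch of loading Parquet files into columns matched by name, case-insensitively (stage and table names are illustrative):

COPY INTO mytable
  FROM @my_parquet_stage
  FILE_FORMAT = (TYPE = PARQUET)
  MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;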

ENFORCE_LENGTH = TRUE | FALSE
Definition

Alternative syntax for TRUNCATECOLUMNS with reverse logic (for compatibility with other systems)

Boolean that specifies whether to truncate text strings that exceed the target column length:

• If TRUE, the COPY statement produces an error if a loaded string exceeds the target column length.

• If FALSE, strings are automatically truncated to the target column length.

Currently, this copy option supports CSV data only.

Default

TRUE

Note

• If the length of the target string column is set to the maximum (e.g. VARCHAR (16777216)), an incoming string cannot exceed this length; otherwise, the COPY command produces an error.

• This parameter is functionally equivalent to TRUNCATECOLUMNS, but has the opposite behavior. It is provided for compatibility with other databases. It is only necessary to include one of these two parameters in a COPY statement to produce the desired output.

TRUNCATECOLUMNS = TRUE | FALSE
Definition

Alternative syntax for ENFORCE_LENGTH with reverse logic (for compatibility with other systems)

Boolean that specifies whether to truncate text strings that exceed the target column length:

• If TRUE, strings are automatically truncated to the target column length.

• If FALSE, the COPY statement produces an error if a loaded string exceeds the target column length.

Currently, this copy option supports CSV data only.

Default

FALSE

Note

• If the length of the target string column is set to the maximum (e.g. VARCHAR (16777216)), an incoming string cannot exceed this length; otherwise, the COPY command produces an error.

• This parameter is functionally equivalent to ENFORCE_LENGTH, but has the opposite behavior. It is provided for compatibility with other databases. It is only necessary to include one of these two parameters in a COPY statement to produce the desired output.

FORCE = TRUE | FALSE
Definition

Boolean that specifies to load all files, regardless of whether they’ve been loaded previously and have not changed since they were loaded. Note that this option reloads files, potentially duplicating data in a table.

Default

FALSE

LOAD_UNCERTAIN_FILES = TRUE | FALSE
Definition

Boolean that specifies to load files for which the load status is unknown. The COPY command skips these files by default.

The load status is unknown if all of the following conditions are true:

• The file’s LAST_MODIFIED date (i.e. date when the file was staged) is older than 64 days.

• The initial set of data was loaded into the table more than 64 days earlier.

• If the file was already loaded successfully into the table, this event occurred more than 64 days earlier.

To force the COPY command to load all files regardless of whether the load status is known, use the FORCE option instead.

Default

FALSE

Usage Notes¶

• If loading into a table from the table’s own stage, the FROM clause is not required and can be omitted.

• When transforming data during loading (i.e. using a query as the source for the COPY command):

• Selecting data from files is supported only by named stages (internal or external) and user stages. It is not supported by table stages.

• Both CSV and semi-structured file types are supported; however, even when loading semi-structured data (e.g. JSON), you should set CSV as the file format type (default value). You can use the corresponding file format (e.g. JSON), but any error in the transformation will stop the COPY operation, even if you set the ON_ERROR option to continue or skip the file.

In addition, set the file format option FIELD_DELIMITER = NONE.

• To transform JSON data during a load operation, you must structure the data files in NDJSON (“Newline Delimited JSON”) standard format; otherwise, you might encounter the following error:

Error parsing JSON: more than one document in the input

• The DISTINCT keyword in SELECT statements is not fully supported. Specifying the keyword can lead to inconsistent or unexpected ON_ERROR copy option behavior.
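
To illustrate the transformation notes above, a sketch of a transformation load of NDJSON data — assuming data staged as sales.ndjson.gz in a named stage mystage and a hypothetical two-column target table sales_flat. Each line is read as a single CSV field and parsed with PARSE_JSON:

COPY INTO sales_flat (city, zip)
FROM (SELECT PARSE_JSON($1):location.city::varchar,
             PARSE_JSON($1):location.zip::varchar
      FROM @mystage/sales.ndjson.gz)
FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = NONE);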

• Loading from Google Cloud Storage only: The list of objects returned for an external stage might include one or more “directory blobs”; essentially, paths that end in a forward slash character (/), e.g.:

LIST @my_gcs_stage;

+---------------------------------------+------+----------------------------------+-------------------------------+
| name                                  | size | md5                              | last_modified                 |
|---------------------------------------+------+----------------------------------+-------------------------------|
| my_gcs_stage/load/                    |  12  | 12348f18bcb35e7b6b628ca12345678c | Mon, 11 Sep 2019 16:57:43 GMT |
| my_gcs_stage/load/data_0_0_0.csv.gz   |  147 | 9765daba007a643bdff4eae10d43218y | Mon, 11 Sep 2019 18:13:07 GMT |
+---------------------------------------+------+----------------------------------+-------------------------------+


These blobs are listed when directories are created in the Google Cloud Platform Console rather than using any other tool provided by Google.

COPY statements that reference a stage can fail when the object list includes directory blobs. To avoid errors, we recommend using file pattern matching to identify the files for inclusion (i.e. the PATTERN clause) when the file list for a stage includes directory blobs. For an example, see Loading Using Pattern Matching (in this topic). Alternatively, set ON_ERROR = SKIP_FILE in the COPY statement.

• If referencing a file format in the current namespace (the database and schema active in the current user session), you can omit the single quotes around the format identifier.
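
For example (assuming a file format myformat in the active database and schema):

COPY INTO mytable
FROM @my_int_stage
FILE_FORMAT = (FORMAT_NAME = myformat);   /* current namespace: quotes optional */

/* A format in another namespace is qualified and quoted, e.g.:     */
/* FILE_FORMAT = (FORMAT_NAME = 'mydb.myschema.myformat')           */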

• STORAGE_INTEGRATION, CREDENTIALS, and ENCRYPTION only apply if you are loading directly from a private/protected storage location:

• If you are loading from a public bucket, secure access is not required.

• If you are loading from a named external stage, the stage provides all the credential information required for accessing the bucket.

• If you encounter errors while running the COPY command, after the command completes, you can validate the files that produced the errors using the VALIDATE table function.

Note

The VALIDATE function only returns output for COPY commands used to perform standard data loading; it does not support COPY commands that perform transformations during data loading (e.g. loading a subset of data columns or reordering data columns).
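
For example, a sketch of validating a standard load after the fact (assuming the load ran with ON_ERROR = 'CONTINUE'):

COPY INTO mytable
FROM @my_int_stage
ON_ERROR = 'CONTINUE';

/* Return the rows rejected by the most recent COPY into mytable. */
SELECT * FROM TABLE(VALIDATE(mytable, JOB_ID => '_last'));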

• Unless you explicitly specify FORCE = TRUE as one of the copy options, the command ignores staged data files that were already loaded into the table. To reload the data, you must either specify FORCE = TRUE or modify the file and stage it again, which generates a new checksum.

• The COPY command does not validate data type conversions for Parquet files.

Output¶

The command returns the following columns:

Column Name              | Data Type | Description
-------------------------+-----------+-----------------------------------------------------------------
FILE                     | TEXT      | Name of source file and relative path to the file
STATUS                   | TEXT      | Status: loaded, load failed, or partially loaded
ROWS_PARSED              | NUMBER    | Number of rows parsed from the source file
ROWS_LOADED              | NUMBER    | Number of rows loaded from the source file
ERROR_LIMIT              | NUMBER    | If the number of errors reaches this limit, the load is aborted
ERRORS_SEEN              | NUMBER    | Number of error rows in the source file
FIRST_ERROR              | TEXT      | First error of the source file
FIRST_ERROR_LINE         | NUMBER    | Line number of the first error
FIRST_ERROR_CHARACTER    | NUMBER    | Position of the first error character
FIRST_ERROR_COLUMN_NAME  | TEXT      | Column name of the first error

Examples¶

Note

These examples assume the files were copied to the stage earlier using the PUT command.

Load files from a named internal stage into a table:

COPY INTO mytable
FROM @my_int_stage;


Load files from a table’s stage into the table:

COPY INTO mytable
FILE_FORMAT = (TYPE = CSV);


Note

When copying data from files in a table stage, the FROM clause can be omitted because Snowflake automatically checks for files in the table's stage.

Load files from the user’s personal stage into a table:

COPY INTO mytable FROM @~/staged
FILE_FORMAT = (FORMAT_NAME = 'mycsv');


Load files from a named external stage that you created previously using the CREATE STAGE command. The named external stage references an external location (Amazon S3, Google Cloud Storage, or Microsoft Azure) and includes all the credentials and other details required for accessing the location:

COPY INTO mycsvtable
FROM @my_ext_stage;


The following example loads all files prefixed with data/files from a storage location (Amazon S3, Google Cloud Storage, or Microsoft Azure) using a named my_csv_format file format:

Amazon S3

Access the referenced S3 bucket using a referenced storage integration named myint:

COPY INTO mytable
FROM 's3://mybucket/data/files'
STORAGE_INTEGRATION = myint
ENCRYPTION=(MASTER_KEY = 'eSxX0jzYfIamtnBKOEOwq80Au6NbSgPH5r4BDDwOaO8=')
FILE_FORMAT = (FORMAT_NAME = my_csv_format);


Access the referenced S3 bucket using supplied credentials:

COPY INTO mytable
FROM 's3://mybucket/data/files'
CREDENTIALS=(AWS_KEY_ID='$AWS_ACCESS_KEY_ID' AWS_SECRET_KEY='$AWS_SECRET_ACCESS_KEY')
ENCRYPTION=(MASTER_KEY = 'eSxX0jzYfIamtnBKOEOwq80Au6NbSgPH5r4BDDwOaO8=')
FILE_FORMAT = (FORMAT_NAME = my_csv_format);


Google Cloud Storage

Access the referenced GCS bucket using a referenced storage integration named myint:

COPY INTO mytable
FROM 'gcs://mybucket/data/files'
STORAGE_INTEGRATION = myint
FILE_FORMAT = (FORMAT_NAME = my_csv_format);


Microsoft Azure

Access the referenced container using a referenced storage integration named myint:

COPY INTO mytable
FROM 'azure://myaccount.blob.core.windows.net/data/files'
STORAGE_INTEGRATION = myint
ENCRYPTION=(TYPE='AZURE_CSE' MASTER_KEY = 'kPxX0jzYfIamtnJEUTHwq80Au6NbSgPH5r4BDDwOaO8=')
FILE_FORMAT = (FORMAT_NAME = my_csv_format);


Access the referenced container using supplied credentials:

COPY INTO mytable
FROM 'azure://myaccount.blob.core.windows.net/mycontainer/data/files'
CREDENTIALS=(AZURE_SAS_TOKEN='?sv=2016-05-31&ss=b&srt=sco&sp=rwdl&se=2018-06-27T10:05:50Z&st=2017-06-27T02:05:50Z&spr=https,http&sig=bgqQwoXwxzuD2GJfagRg7VOS8hzNr3QLT7rhS8OFRLQ%3D')
ENCRYPTION=(TYPE='AZURE_CSE' MASTER_KEY = 'kPxX0jzYfIamtnJEUTHwq80Au6NbSgPH5r4BDDwOaO8=')
FILE_FORMAT = (FORMAT_NAME = my_csv_format);


Load files from a table’s stage into the table, using pattern matching to only load data from compressed CSV files in any path:

COPY INTO mytable
FILE_FORMAT = (TYPE = 'CSV')
PATTERN='.*/.*/.*[.]csv[.]gz';


Where .* is interpreted as “zero or more occurrences of any character.” The square brackets escape the period character (.) that precedes a file extension.

Load files from a table stage into the table using pattern matching to only load uncompressed CSV files whose names include the string sales:

COPY INTO mytable
FILE_FORMAT = (FORMAT_NAME = myformat)
PATTERN='.*sales.*[.]csv';


The following example loads JSON data into a table with a single column of type VARIANT.

The staged JSON array comprises three objects separated by new lines:

[{
  "location": {
    "city": "Lexington",
    "zip": "40503"
  },
  "sq__ft": "1000",
  "sale_date": "4-25-16",
  "price": "75836",
  "type": "Residential"
},
{
  "location": {
    "city": "Belmont",
    "zip": "02478"
  },
  "sq__ft": "1103",
  "sale_date": "6-18-16",
  "price": "92567",
  "type": "Residential"
},
{
  "location": {
    "city": "Winchester",
    "zip": "01890"
  },
  "sq__ft": "1122",
  "sale_date": "1-31-16",
  "price": "89921",
  "type": "Condo"
}]

/* Create a JSON file format that strips the outer array. */

CREATE OR REPLACE FILE FORMAT json_format
TYPE = 'JSON'
STRIP_OUTER_ARRAY = TRUE;

/* Create an internal stage that references the JSON file format. */

CREATE OR REPLACE STAGE mystage
FILE_FORMAT = json_format;

/* Stage the JSON file. */

PUT file:///tmp/sales.json @mystage AUTO_COMPRESS=TRUE;

/* Create a target table for the JSON data. */

CREATE OR REPLACE TABLE house_sales (src VARIANT);

/* Copy the JSON data into the target table. */

COPY INTO house_sales
FROM @mystage/sales.json.gz;

SELECT * FROM house_sales;

+---------------------------+
| SRC                       |
|---------------------------|
| {                         |
|   "location": {           |
|     "city": "Lexington",  |
|     "zip": "40503"        |
|   },                      |
|   "price": "75836",       |
|   "sale_date": "4-25-16", |
|   "sq__ft": "1000",       |
|   "type": "Residential"   |
| }                         |
| {                         |
|   "location": {           |
|     "city": "Belmont",    |
|     "zip": "02478"        |
|   },                      |
|   "price": "92567",       |
|   "sale_date": "6-18-16", |
|   "sq__ft": "1103",       |
|   "type": "Residential"   |
| }                         |
| {                         |
|   "location": {           |
|     "city": "Winchester", |
|     "zip": "01890"        |
|   },                      |
|   "price": "89921",       |
|   "sale_date": "1-31-16", |
|   "sq__ft": "1122",       |
|   "type": "Condo"         |
| }                         |
+---------------------------+


Add FORCE = TRUE to a COPY command to reload (duplicate) data from a set of staged data files that have not changed (i.e. have the same checksum as when they were first loaded).

In the following example, the first command loads the specified files and the second command forces the same files to be loaded again (producing duplicate rows), even though the contents of the files have not changed:

COPY INTO load1 FROM @%load1/data1/
FILES=('test1.csv', 'test2.csv');

COPY INTO load1 FROM @%load1/data1/
FILES=('test1.csv', 'test2.csv')
FORCE=TRUE;


• Set PURGE=TRUE for the table to specify that all files successfully loaded into the table are purged after loading:

ALTER TABLE mytable SET STAGE_COPY_OPTIONS = (PURGE = TRUE);

COPY INTO mytable;

• You can also override any of the copy options directly in the COPY command:

COPY INTO mytable PURGE = TRUE;


Validating Staged Files¶

• Run the COPY command in validation mode and see all errors:

COPY INTO mytable VALIDATION_MODE = 'RETURN_ERRORS';

+-------------------------------------------------------------------------------------------------------------------------------+------------------------+------+-----------+-------------+----------+--------+-----------+----------------------+------------+----------------+
|                                                         ERROR                                                                 |            FILE        | LINE | CHARACTER | BYTE_OFFSET | CATEGORY |  CODE  | SQL_STATE |   COLUMN_NAME        | ROW_NUMBER | ROW_START_LINE |
+-------------------------------------------------------------------------------------------------------------------------------+------------------------+------+-----------+-------------+----------+--------+-----------+----------------------+------------+----------------+
| Field delimiter ',' found while expecting record delimiter '\n'                                                               | @MYTABLE/data1.csv.gz  | 3    | 21        | 76          | parsing  | 100016 | 22000     | "MYTABLE"["QUOTA":3] | 3          | 3              |
| NULL result in a non-nullable column. Use quotes if an empty field should be interpreted as an empty string instead of a null | @MYTABLE/data3.csv.gz  | 3    | 2         | 62          | parsing  | 100088 | 22000     | "MYTABLE"["NAME":1]  | 3          | 3              |
| End of record reached while expected to parse column '"MYTABLE"["QUOTA":3]'                                                   | @MYTABLE/data3.csv.gz  | 4    | 20        | 96          | parsing  | 100068 | 22000     | "MYTABLE"["QUOTA":3] | 4          | 4              |
+-------------------------------------------------------------------------------------------------------------------------------+------------------------+------+-----------+-------------+----------+--------+-----------+----------------------+------------+----------------+

• Run the COPY command in validation mode for a specified number of rows. In this example, the first run encounters no errors in the specified number of rows and completes successfully, displaying the information as it will appear when loaded into the table. The second run encounters an error in the specified number of rows and fails with the error encountered:

COPY INTO mytable VALIDATION_MODE = 'RETURN_2_ROWS';

+--------------------+----------+-------+
|        NAME        |    ID    | QUOTA |
+--------------------+----------+-------+
| Joe Smith          |  456111  | 0     |
| Tom Jones          |  111111  | 3400  |
+--------------------+----------+-------+

COPY INTO mytable VALIDATION_MODE = 'RETURN_3_ROWS';

FAILURE: NULL result in a non-nullable column. Use quotes if an empty field should be interpreted as an empty string instead of a null
File '@MYTABLE/data3.csv.gz', line 3, character 2
Row 3, column "MYTABLE"["NAME":1]