CREATE [ OR REPLACE ] PIPE [ IF NOT EXISTS ] <name>
[ AUTO_INGEST = [ TRUE | FALSE ] ]
[ ERROR_INTEGRATION = <integration_name> ]
[ AWS_SNS_TOPIC = '<string>' ]
[ INTEGRATION = '<string>' ]
[ COMMENT = '<string_literal>' ]
AS <copy_statement>
<name>
Identifier for the pipe; must be unique for the schema in which the pipe is created.
The identifier must start with an alphabetic character and cannot contain spaces or special characters unless the entire identifier string is enclosed in double quotes (e.g. "My object"). Identifiers enclosed in double quotes are also case-sensitive.
For more details, see Identifier requirements.
copy_statement
COPY INTO <table> statement used to load data from queued files into a Snowflake table. This statement serves as the definition for the pipe.
We currently do not recommend using time-based context functions (e.g. CURRENT_TIMESTAMP) in the copy_statement for Snowpipe, because the time values they insert can differ from the actual record load time.
It is recommended to query METADATA$START_SCAN_TIME instead, which provides a more accurate representation of when each record was loaded.
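A minimal sketch of a pipe that captures this metadata column during a transformation load (mytable, its load_time column, and mystage are hypothetical names):

-- Sketch: record the load scan time via METADATA$START_SCAN_TIME rather than
-- CURRENT_TIMESTAMP; mytable and mystage are hypothetical names.
create pipe mypipe_scan_time as
  copy into mytable (c1, load_time)
  from (select $1, metadata$start_scan_time from @mystage);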
AUTO_INGEST = TRUE | FALSE
Specifies whether to automatically load data files from the specified external stage and optional path when event notifications are received from a configured message service:
TRUE enables automatic data loading.
Snowpipe supports loading from external stages (Amazon S3, Google Cloud Storage, or Microsoft Azure).
FALSE disables automatic data loading. You must make calls to the Snowpipe REST API endpoints to load data files.
Snowpipe supports loading from internal stages (i.e. Snowflake named stages or table stages, but not user stages) or external stages (Amazon S3, Google Cloud Storage, or Microsoft Azure).
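For illustration, a pipe that relies on REST API calls rather than event notifications might look like the following sketch (object names are hypothetical):

-- Sketch: with auto_ingest = false, staged files are loaded only when the
-- pipe's insertFiles REST endpoint is called; names are hypothetical.
create pipe mypipe_rest
  auto_ingest = false
  as copy into mytable from @mystage;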
ERROR_INTEGRATION = 'integration_name'
Required only when configuring Snowpipe to send error notifications to a cloud messaging service.
Specifies the name of the notification integration used to communicate with the messaging service. For more information, see Snowpipe error notifications.
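As a sketch, a pipe that routes error notifications through an existing notification integration might look like this (my_error_int and the other object names are hypothetical):

-- Sketch: my_error_int is a hypothetical, pre-created notification integration.
create pipe mypipe_notify
  error_integration = my_error_int
  as copy into mytable from @mystage;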
AWS_SNS_TOPIC = 'string'
Required only when configuring AUTO_INGEST for Amazon S3 stages using SNS.
Specifies the Amazon Resource Name (ARN) for the SNS topic for your S3 bucket. The CREATE PIPE statement subscribes the Amazon Simple Queue Service (SQS) queue to the specified SNS topic. The pipe copies files to the ingest queue triggered by event notifications via the SNS topic. For more information, see Automating Snowpipe for Amazon S3.
INTEGRATION = 'string'
Required only when configuring AUTO_INGEST for Google Cloud Storage or Microsoft Azure stages.
Specifies the existing notification integration used to access the storage queue. For more information, see Automating Snowpipe for Google Cloud Storage and Automating Snowpipe for Microsoft Azure.
The integration name must be typed in all uppercase.
COMMENT = 'string_literal'
Specifies a comment for the pipe.
Default: No value
This SQL command requires the following minimum permissions:

Privilege        Object                         Notes
CREATE PIPE      Schema
USAGE            Stage in the pipe definition   External stages only
READ             Stage in the pipe definition   Internal stages only
SELECT, INSERT   Table in the pipe definition

SQL operations on schema objects also require the USAGE privilege on the database and schema that contain the object.
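As a sketch, granting these minimum privileges to a custom role might look like the following (snowpipe_role and the object names are hypothetical; adjust to your environment):

-- Hypothetical role and object names.
grant usage on database mydb to role snowpipe_role;
grant usage on schema mydb.public to role snowpipe_role;
grant create pipe on schema mydb.public to role snowpipe_role;
grant usage on stage mydb.public.mystage to role snowpipe_role;  -- external stage; for an internal stage, grant READ instead
grant insert, select on table mydb.public.mytable to role snowpipe_role;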
All COPY INTO <table> copy options are supported except for the following:
FILES = ( 'file_name1' [ , 'file_name2', ... ] )
ON_ERROR = ABORT_STATEMENT
SIZE_LIMIT = num
PURGE = TRUE | FALSE (i.e. automatic purging while loading)
FORCE = TRUE | FALSE
Note that you can manually remove files from an internal (i.e. Snowflake) stage (after they’ve been loaded) using the REMOVE command.
RETURN_FAILED_ONLY = TRUE | FALSE
VALIDATION_MODE = RETURN_n_ROWS | RETURN_ERRORS | RETURN_ALL_ERRORS
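For example, a pipe can still set a supported ON_ERROR value such as SKIP_FILE, which is the default for Snowpipe (object names in this sketch are hypothetical):

-- Sketch: SKIP_FILE (the Snowpipe default) is a supported ON_ERROR value,
-- unlike ABORT_STATEMENT; mytable and mystage are hypothetical names.
create pipe mypipe_skip as
  copy into mytable
  from @mystage
  on_error = skip_file;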
The PATTERN = 'regex_pattern' copy option filters the set of files to load using a regular expression. Pattern matching behaves as follows depending on the AUTO_INGEST parameter value:
AUTO_INGEST = TRUE: The regular expression filters the list of files in the stage and optional path (i.e. cloud storage location) in the COPY INTO <table> statement.
AUTO_INGEST = FALSE: The regular expression filters the list of files submitted in calls to the Snowpipe REST API.
Note that Snowpipe trims any path segments in the stage definition from the storage location and applies the regular expression to any remaining path segments and filenames. To view the stage definition, execute the DESCRIBE STAGE command for the stage. The URL property consists of the bucket or container name and zero or more path segments. For example, if the FROM location in a COPY INTO <table> statement is @s/path1/path2/ and the URL value for stage @s is s3://mybucket/path1/, then Snowpipe trims /path1/ from the storage location in the FROM clause and applies the regular expression to path2/ plus the filenames in the path.
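Continuing that example, a sketch of a pipe definition using PATTERN (the stage and table names are hypothetical):

-- Sketch: @s has URL s3://mybucket/path1/, so Snowpipe applies the pattern
-- to path2/ plus the filenames, not to /path1/; names are hypothetical.
create pipe mypipe_pattern as
  copy into mytable
  from @s/path1/path2/
  pattern = '.*[.]json';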
Snowflake recommends that you enable cloud event filtering for Snowpipe to reduce costs, event noise, and latency. Only use the PATTERN option when your cloud provider’s event filtering feature is not sufficient. For more information about configuring event filtering for each cloud provider, see the following pages:
Amazon S3: Configuring event notifications using object key name filtering
Microsoft Azure Event Grid: Understand event filtering for Event Grid subscriptions
Google Cloud Pub/Sub: Filtering messages
Using a query as the source for the COPY statement for column reordering, column omission, and casts (i.e. transforming data during a load) is supported. For usage examples, see Transforming data during a load. Note that only simple SELECT statements are supported. Filtering using a WHERE clause is not supported.
Pipe definitions are not dynamic (i.e. a pipe is not automatically updated if the underlying stage or table changes, such as renaming or dropping the stage/table). Instead, you must create a new pipe and submit this pipe name in future Snowpipe REST API calls.
Customers should ensure that no personal data (other than for a User object), sensitive data, export-controlled data, or other regulated data is entered as metadata when using the Snowflake service. For more information, see Metadata Fields in Snowflake.
CREATE OR REPLACE <object> statements are atomic. That is, when an object is replaced, the old object is deleted and the new object is created in a single transaction.
If you recreate a pipe (using the CREATE OR REPLACE PIPE syntax), see Recreating pipes for related considerations and best practices.
Create a pipe in the current schema that loads all the data from files staged in the mystage stage into mytable:
create pipe mypipe as copy into mytable from @mystage;
Same as the previous example, but with a data transformation. Only load data from the 4th and 5th columns in the staged files, in reverse order:
create pipe mypipe2 as copy into mytable(C1, C2) from (select $5, $4 from @mystage);
Create a pipe in the current schema for automatic loading of data using event notifications received from a messaging service:
Amazon S3:
create pipe mypipe_s3
  auto_ingest = true
  aws_sns_topic = 'arn:aws:sns:us-west-2:001234567890:s3_mybucket'
  as
    copy into snowpipe_db.public.mytable
    from @snowpipe_db.public.mystage
    file_format = (type = 'JSON');

Google Cloud Storage:
create pipe mypipe_gcs
  auto_ingest = true
  integration = 'MYINT'
  as
    copy into snowpipe_db.public.mytable
    from @snowpipe_db.public.mystage
    file_format = (type = 'JSON');

Microsoft Azure:
create pipe mypipe_azure
  auto_ingest = true
  integration = 'MYINT'
  as
    copy into snowpipe_db.public.mytable
    from @snowpipe_db.public.mystage
    file_format = (type = 'JSON');