Stage, Pipe, and Load History Replication¶
This topic provides information about replication support for data pipeline objects and related metadata, including stages, storage integrations, pipes, and load history. You can replicate these objects to configure failover for ingest and ETL pipelines across regions and across cloud platforms.
Before you get started, you should be familiar with Snowflake support for replication and failover/failback. For more information, see Introduction to Replication and Failover.
Requirements¶
Important
If a database that you plan to replicate to a target account already contains stages and pipes, we recommend that you contact support before enabling replication. When a replication or failover group in your source account includes that database, any pre-existing stages and pipes in the target account are dropped from the database.
Before you can replicate data pipeline objects, you must enable ETL replication for the replication or failover group, or at the account level, by setting the ENABLE_ETL_REPLICATION parameter to TRUE.

To replicate any external stages that use a storage integration, you must also configure your replication or failover group to replicate STORAGE INTEGRATIONS.
You can use an ALTER REPLICATION GROUP or ALTER FAILOVER GROUP statement to modify these properties for an existing group.
Note
If you add INTEGRATIONS to the OBJECT_TYPES list in your ALTER statement, you should include any other existing object types in the list to avoid dropping those objects in the target account. The same applies if you add STORAGE INTEGRATIONS to the ALLOWED_INTEGRATION_TYPES list.
For example:
```sql
ALTER FAILOVER GROUP my_failover_group SET ENABLE_ETL_REPLICATION = TRUE;

ALTER FAILOVER GROUP my_failover_group SET
  OBJECT_TYPES = ROLES, INTEGRATIONS
  ALLOWED_INTEGRATION_TYPES = API INTEGRATIONS, STORAGE INTEGRATIONS;
```
Replication and Stages¶
This section describes the current level of replication functionality that Snowflake supports for different types of stages.
Replication of Internal Stages¶
The following table describes how replication works for each type of internal stage.
| Type | Description of Replication Support |
| --- | --- |
| Table stage | Empty table stages are created for tables in a replicated database. Files on table stages are not replicated. |
| User stage | User and user stage replication requires Business Critical Edition (or higher). Empty user stages are created for replicated users. Files on user stages are not replicated. |
| Named stage | Named internal stages are replicated when you replicate a database. The stage must have a directory table enabled on it in order to replicate the files on the stage. |
Replication of External Stages¶
Note
Snowflake does not replicate files on an external stage. The cloud storage URL points to the same location for external stages in primary and secondary databases.
The following table describes how replication works for each type of external stage.
| Type | Description of Replication Support |
| --- | --- |
| Named stage with no credentials (public storage location) | Named external stages are replicated when you replicate a database. The files on an external stage are not replicated. |
| Named stage with credentials (private storage location) | Replicated stages include the cloud provider credentials, such as secret keys or access tokens. |
| Named stage with storage integration (private storage location) | Storage integration replication requires Business Critical Edition (or higher). The replication or failover group must include STORAGE INTEGRATIONS in its ALLOWED_INTEGRATION_TYPES list. You must also configure the trust relationships for your cloud storage in the target accounts. For more information, see Configure Cloud Storage Access for Secondary Storage Integrations. |
Note
To associate a secondary stage or pipe with a different cloud storage location than the one associated with the primary object, contact the support team. For example, you might choose a location in another region.
Considerations¶
Note
A known issue in the preview version of this feature results in duplicate loading of files that are older than 64 days. As a workaround, you should wait 24 hours after replicating a table before you fail over.
The following constraints apply to stage objects:
Snowflake currently supports stage replication as part of group-based replication (replication and failover groups). Stage replication is not supported for database replication.
You can replicate an external stage. However, the files on an external stage are not replicated.
You can replicate an internal stage. To replicate the files on an internal stage, you must enable a directory table on the stage. Snowflake replicates only the files that are mapped by the directory table.
When you replicate an internal stage with a directory table, you cannot disable the directory table on the primary or secondary stage. The directory table contains critical information about replicated files and files loaded using a COPY statement.
A refresh operation will fail if the directory table on an internal stage contains a file that is larger than 2GB.
Files on user stages and table stages are not replicated.
For named external stages that use a storage integration, you must configure the trust relationship for secondary storage integrations in your target accounts prior to failover. For more information, see Configure Cloud Storage Access for Secondary Storage Integrations.
If you replicate an external stage with a directory table, and you have configured automated refresh for the source directory table, you must configure automated refresh for the secondary directory table before failover. For more information, see Configure Automated Refresh for Directory Tables on Secondary Stages.
A COPY statement might take longer than expected if the directory table on a replicated stage is not consistent with the replicated files on the stage. To make a directory table consistent, refresh it with an ALTER STAGE … REFRESH statement. To check the consistency status of a directory table, use the SYSTEM$GET_DIRECTORY_TABLE_STATUS function.
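For example, you might check and then repair directory table consistency on a hypothetical internal stage named `my_int_stage` (the stage name is illustrative; see the function reference for the exact argument and output format):

```sql
-- my_int_stage is a hypothetical stage name used for illustration.
-- Check whether the directory table is consistent with the files on the stage.
SELECT SYSTEM$GET_DIRECTORY_TABLE_STATUS('@my_int_stage');

-- If the directory table is out of sync, refresh it before running COPY.
ALTER STAGE my_int_stage REFRESH;
```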
Replication and Pipes¶
This section describes the current level of replication functionality supported for different types of pipes.
Snowflake supports replication for the following:
Pipe objects, including auto-ingest and REST endpoint pipes that load data from external stages.
Pipe-level parameters.
Privilege grants on pipe objects.
Note
To associate a secondary stage or pipe with a different cloud storage location than the one associated with the primary object, contact the support team. For example, you might choose a location in another region.
Pipes in Secondary Databases¶
Pipes in a secondary database receive notifications but do not load data until you promote the secondary database to serve as the primary. After you promote a secondary database, the pipes begin to load any data that is available since the last refresh time (that is, the last time that the former primary database was updated).
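As a sketch, promoting a secondary and confirming that its pipes have resumed loading might look like the following (the group and pipe names are hypothetical; SYSTEM$PIPE_STATUS returns a pipe's current execution state):

```sql
-- Hypothetical names used for illustration.
-- Promote the secondary so its databases serve as the primary.
ALTER FAILOVER GROUP my_failover_group PRIMARY;

-- Pipes now begin loading data available since the last refresh;
-- check a pipe's execution state.
SELECT SYSTEM$PIPE_STATUS('my_db.public.my_pipe');
```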
Replication of Auto-ingest Pipes¶
In the event of a failover, a replicated auto-ingest pipe becomes the new primary pipe and can do the following:
Load any data that has not yet been loaded. This includes any data that is new since the newly promoted primary database was last refreshed.
Continue to receive notifications when the stage has new files to load, and load data from those files.
Note
To receive notifications, you must configure a secondary auto-ingest pipe in a target account prior to failover. For more information, see Configure Notifications for Secondary Auto-ingest Pipes.
Replication of REST Endpoint Pipes¶
For pipes that use the Snowpipe REST API to load data, Snowflake replicates the pipes and their load history metadata to each target account that you specify. There are no additional configuration steps you need to take on the target accounts. For a detailed list of load history metadata, see Load Metadata.
To continue data loading in the event of a failover, call the REST API from the newly-promoted source account.
Considerations¶
Note
A known issue in the preview version of this feature results in duplicate loading of files that are older than 64 days. As a workaround, you should wait 24 hours after replicating a table before you fail over.
The following constraints apply to pipe objects:
Snowflake currently supports pipe replication as part of group-based replication (replication and failover groups). Pipe replication is not supported for database replication.
Snowflake replicates the copy history of a pipe only when the pipe belongs to the same replication group as its target table.
Replication of notification integrations is not supported.
To receive notifications, you must configure a secondary auto-ingest pipe in a target account prior to failover. For more information, see Configure Notifications for Secondary Auto-ingest Pipes.
Example 1: Replicate a Named Internal Stage¶
This example demonstrates how replication works for internal stages. In particular, the example shows how the directory table is the single source of truth for table metadata before and after replication.
The first part of the example completes the following tasks in a source account.
1. Create an internal stage named `my_stage` with a directory table enabled so that the files on the stage are replicated. Then copy data from a table named `my_table` into files on the stage.

   Note

   The example refreshes the directory table after loading `file1` and `file2` onto the stage to synchronize the directory table metadata with the latest set of files on the stage. However, no refresh operation occurs after loading `file3`.

   ```sql
   CREATE OR REPLACE STAGE my_stage DIRECTORY = (ENABLE = TRUE);

   COPY INTO @my_stage/folder1/file1 FROM my_table;
   COPY INTO @my_stage/folder2/file2 FROM my_table;

   ALTER STAGE my_stage REFRESH;

   COPY INTO @my_stage/folder3/file3 FROM my_table;
   ```

2. Create a failover group and set the `ENABLE_ETL_REPLICATION` parameter.

   ```sql
   CREATE FAILOVER GROUP my_stage_failover_group
     OBJECT_TYPES = DATABASES
     ALLOWED_DATABASES = my_database_1
     ALLOWED_ACCOUNTS = myorg.my_account_2;

   ALTER FAILOVER GROUP my_stage_failover_group SET ENABLE_ETL_REPLICATION = TRUE;
   ```
The second part of the example completes the replication and failover process in a target account:
1. Create a failover group as a replica of the failover group in the source account, refresh the objects in the new failover group, and promote the target account to serve as the source account.

   ```sql
   CREATE FAILOVER GROUP my_stage_failover_group
     AS REPLICA OF myorg.my_account_1.my_stage_failover_group;

   ALTER FAILOVER GROUP my_stage_failover_group REFRESH;

   ALTER FAILOVER GROUP my_stage_failover_group PRIMARY;
   ```

2. Refresh the directory table on the replicated stage and copy all of the files tracked by the directory table on `my_stage` into a table named `my_table`.

   Note

   The COPY INTO statement loads `file1` and `file2` into the table, but not `file3`. This is because the directory table was not refreshed after adding `file3` in the source account.

   ```sql
   ALTER STAGE my_stage REFRESH;

   COPY INTO my_table FROM @my_stage;
   ```
Example 2: Replicate an Auto-ingest Pipe¶
This example provides a sample workflow for replicating a pipe that uses an Amazon Simple Notification Service (SNS) topic with Amazon Simple Queue Service (SQS) to automate Snowpipe.
The example assumes that you have already completed the following tasks:
- Created and configured a storage integration for Amazon S3. For example purposes, we use a storage integration named `my_s3_storage_int`.
- Created an external stage that references your storage integration. For example purposes, we use a stage named `my_s3_stage`. For instructions, see CREATE STAGE.
Start with the following tasks in a source account.
1. Use the CREATE PIPE command to create a pipe with auto-ingest enabled that loads data from the external stage into a table named `mytable`.

   ```sql
   CREATE PIPE snowpipe_db.public.mypipe
     AUTO_INGEST = TRUE
     AWS_SNS_TOPIC = 'topic_arn'
     AS
       COPY INTO snowpipe_db.public.mytable
         FROM @snowpipe_db.public.my_s3_stage
         FILE_FORMAT = (TYPE = 'JSON');
   ```

2. Refresh the pipe with an ALTER PIPE statement to load data from the stage from the last 7 days.

   ```sql
   ALTER PIPE mypipe REFRESH;
   ```

3. Finally, use CREATE FAILOVER GROUP to create a failover group that allows replication of storage integrations, and set the `ENABLE_ETL_REPLICATION` parameter.

   ```sql
   CREATE FAILOVER GROUP my_pipe_failover_group
     OBJECT_TYPES = DATABASES, INTEGRATIONS
     ALLOWED_INTEGRATION_TYPES = STORAGE INTEGRATIONS
     ALLOWED_DATABASES = my_database_1
     ALLOWED_ACCOUNTS = myorg.my_account_2;

   ALTER FAILOVER GROUP my_pipe_failover_group SET ENABLE_ETL_REPLICATION = TRUE;
   ```
The second part of the example completes the replication and failover process in a target account:
1. Create a failover group as a replica of the failover group in the source account.

   ```sql
   CREATE FAILOVER GROUP my_pipe_failover_group
     AS REPLICA OF myorg.my_account_1.my_pipe_failover_group;
   ```

2. Execute a DESCRIBE INTEGRATION statement to retrieve the ARN for the AWS IAM user for your Snowflake account on the secondary deployment. Use the ARN to grant the IAM user permissions to access your S3 bucket. See Step 5: Grant the IAM User Permissions to Access Bucket Objects.

   ```sql
   DESC INTEGRATION my_s3_storage_int;
   ```

3. Call the SYSTEM$GET_AWS_SNS_IAM_POLICY system function to generate an IAM policy that grants the new SQS queue permission to subscribe to your SNS topic. Snowflake created the new SQS queue in your target account when you replicated the failover group from your source account.

   ```sql
   SELECT SYSTEM$GET_AWS_SNS_IAM_POLICY('sns_topic_arn');
   ```

   `sns_topic_arn` is the Amazon Resource Name (ARN) of the SNS topic that you created for the original pipe in your source account.

4. Refresh the objects in your new failover group.

   ```sql
   ALTER FAILOVER GROUP my_pipe_failover_group REFRESH;
   ```

5. Finally, promote the target account to serve as the source account with the ALTER FAILOVER GROUP command.

   ```sql
   ALTER FAILOVER GROUP my_pipe_failover_group PRIMARY;
   ```

   The `mypipe` pipe will begin to load any data that was made available since the last time the failover group was refreshed in the source account.

6. To verify that the replicated pipe is working, query the table from the pipe's COPY statement.

   ```sql
   SELECT * FROM mytable;
   ```