Set up the Openflow Connector for SQL Server¶
Note
This connector is subject to the Snowflake Connector Terms.
This topic describes how to set up the Openflow Connector for SQL Server.
For information on the incremental load process, see Incremental replication.
Prerequisites¶
Before setting up the connector, ensure that you have completed the following prerequisites:
Ensure that you have reviewed About Openflow Connector for SQL Server.
Ensure that you have reviewed Supported SQL Server versions.
Ensure that you have set up your runtime deployment.
If you use Openflow - Snowflake Deployments, ensure that you have reviewed configuring required domains and have granted access to the required domains for the SQL Server connector.
Set up your SQL Server instance¶
Before setting up the connector, perform the following tasks in your SQL Server environment:
Note
You must perform these tasks as a database administrator.
Enable change tracking on the databases and tables that you plan to replicate, as shown in the following SQL Server example:
```sql
ALTER DATABASE <database> SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE <schema>.<table> ENABLE CHANGE_TRACKING
    WITH (TRACK_COLUMNS_UPDATED = ON);
```
Note
Run these commands for every database and table that you plan to replicate.
The connector requires that change tracking is enabled on the databases and tables before replication starts. Ensure that every table that you plan to replicate has change tracking enabled. You can also enable change tracking on additional tables while the connector is running.
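To confirm where change tracking is already enabled, you can query the standard SQL Server catalog views sys.change_tracking_databases and sys.change_tracking_tables. The following is a diagnostic sketch only, separate from the connector setup:

```sql
-- Databases with change tracking enabled, with their retention settings
SELECT d.name AS database_name,
       ctd.retention_period,
       ctd.retention_period_units_desc
FROM sys.change_tracking_databases ctd
JOIN sys.databases d ON d.database_id = ctd.database_id;

-- Run inside a source database: tables with change tracking enabled
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.change_tracking_tables ctt
JOIN sys.tables t ON t.object_id = ctt.object_id
JOIN sys.schemas s ON s.schema_id = t.schema_id;
```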
Create a login for the SQL Server instance:
```sql
CREATE LOGIN <user_name> WITH PASSWORD = '<password>';
```
This login is used to create users for the databases you plan to replicate.
Create a user for each database you are replicating by running the following SQL Server command in each database:
```sql
USE <source_database>;
CREATE USER <user_name> FOR LOGIN <user_name>;
```
Grant the SELECT and VIEW CHANGE TRACKING permissions to the user for each database that you are replicating:
```sql
USE <database>;
GRANT SELECT ON <schema>.<table> TO <user_name>;
GRANT VIEW CHANGE TRACKING ON <schema>.<table> TO <user_name>;
```
Run these commands in each database for every table that you plan to replicate, granting the permissions to the database user that you created in the previous step.
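As a combined sketch, assuming a source database named salesdb, a table dbo.orders, and a login named openflow_agent (all names are illustrative):

```sql
-- Server-level login for the connector
CREATE LOGIN openflow_agent WITH PASSWORD = '<password>';

-- Per database: create the user and grant read and change tracking access
USE salesdb;
CREATE USER openflow_agent FOR LOGIN openflow_agent;
GRANT SELECT ON dbo.orders TO openflow_agent;
GRANT VIEW CHANGE TRACKING ON dbo.orders TO openflow_agent;
```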
(Optional) Grant the VIEW DEFINITION privilege on User Defined Data Types (UDDTs).
If your tables contain columns that use a UDDT, and the UDDT is owned by a different user than the connector user, you must grant the VIEW DEFINITION permission to the connector user, as shown in the following SQL Server example:
```sql
GRANT VIEW DEFINITION TO <user_name>;
```
Without this permission, columns that use a UDDT are silently excluded from replication.
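To check whether any of your tables contain UDDT columns, you can query the catalog views. A diagnostic sketch, run inside the source database:

```sql
-- Columns whose data type is user-defined
SELECT s.name AS schema_name,
       t.name AS table_name,
       c.name AS column_name,
       ty.name AS type_name
FROM sys.columns c
JOIN sys.tables t ON t.object_id = c.object_id
JOIN sys.schemas s ON s.schema_id = t.schema_id
JOIN sys.types ty ON ty.user_type_id = c.user_type_id
WHERE ty.is_user_defined = 1;
```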
(Optional) Configure an SSL connection.
If you use an SSL connection to connect to SQL Server, obtain the root certificate for your database server. The certificate is required when configuring the connector.
Set up your Snowflake environment¶
As a Snowflake administrator, perform the following tasks:
Create a destination database in Snowflake to store the replicated data:
```sql
CREATE DATABASE <destination_database>;
```
Create a Snowflake service user:
```sql
CREATE USER <openflow_user> TYPE = SERVICE
  COMMENT = 'Service user for automated access of Openflow';
```
Create a Snowflake role for the connector and grant the required privileges:
```sql
CREATE ROLE <openflow_role>;
GRANT ROLE <openflow_role> TO USER <openflow_user>;
GRANT USAGE ON DATABASE <destination_database> TO ROLE <openflow_role>;
GRANT CREATE SCHEMA ON DATABASE <destination_database> TO ROLE <openflow_role>;
```
Use this role to manage the connector’s access to the Snowflake database. The USAGE and CREATE SCHEMA privileges granted above allow the role to create objects in the destination database.
Create a Snowflake warehouse for the connector and grant the required privileges:
```sql
CREATE WAREHOUSE <openflow_warehouse>
  WITH WAREHOUSE_SIZE = 'XSMALL'
       AUTO_SUSPEND = 300
       AUTO_RESUME = TRUE;
GRANT USAGE, OPERATE ON WAREHOUSE <openflow_warehouse> TO ROLE <openflow_role>;
```
Snowflake recommends starting with an XSMALL warehouse size, then experimenting with the size depending on the number of tables being replicated and the amount of data transferred. Large numbers of tables typically scale better with multi-cluster warehouses than with a larger warehouse size. For more information, see multi-cluster warehouses.
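For example, a sketch of converting the warehouse to multi-cluster instead of increasing its size (multi-cluster warehouses require Enterprise Edition or higher):

```sql
-- Let Snowflake add up to two extra clusters under concurrent merge load
ALTER WAREHOUSE <openflow_warehouse> SET
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 3;
```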
Set up the public and private keys for key pair authentication:
Create a pair of secure keys (public and private).
Store the private key for the user in a file to supply to the connector’s configuration.
Assign the public key to the Snowflake service user:
```sql
ALTER USER <openflow_user> SET RSA_PUBLIC_KEY = '<public_key>';
```
For more information, see Key-pair authentication and key-pair rotation.
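To confirm that the public key is registered, you can describe the service user; the RSA_PUBLIC_KEY_FP property in the output shows the fingerprint of the registered key:

```sql
DESC USER <openflow_user>;
```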
Install the connector¶
To install the connector, do the following as a data engineer:
Navigate to the Openflow overview page. In the Featured connectors section, select View more connectors.
On the Openflow connectors page, find the connector and select Add to runtime.
In the Select runtime dialog, select your runtime from the Available runtimes drop-down list and click Add.
Note
Before you install the connector, ensure that you have created a database and schema in Snowflake for the connector to store ingested data.
Authenticate to the deployment with your Snowflake account credentials and select Allow when prompted to allow the runtime application to access your Snowflake account. The connector installation process takes a few minutes to complete.
Authenticate to the runtime with your Snowflake account credentials.
The Openflow canvas appears with the connector process group added to it.
Configure the connector¶
To configure the connector, do the following as a data engineer:
Right-click on the imported process group and select Parameters.
Populate the required parameter values as described in Flow parameters.
Flow parameters¶
Start by setting the parameters of the SQLServer Source Parameters context, and then the SQLServer Destination Parameters context. After you complete this, enable the connector. The connector connects to both SQL Server and Snowflake and starts running. However, the connector does not replicate any data until the tables to be replicated are explicitly added to its configuration.
To configure specific tables for replication, edit the SQLServer Ingestion Parameters context. After you apply the changes to the SQLServer Ingestion Parameters context, the configuration is picked up by the connector, and the replication lifecycle starts for every table.
SQLServer Source Parameters context¶
| Parameter | Description |
|---|---|
| SQL Server Connection URL | The full JDBC URL to the source database, for example: `jdbc:sqlserver://<host>:<port>` |
| SQL Server JDBC Driver | Select the Reference asset checkbox to upload the SQL Server JDBC driver. |
| SQL Server Username | The user name for the connector. |
| SQL Server Password | The password for the connector. |
SQLServer Destination Parameters context¶
| Parameter | Description | Required |
|---|---|---|
| Destination Database | The database where data is persisted. It must already exist in Snowflake. The name is case-sensitive; for unquoted identifiers, provide the name in uppercase. | Yes |
| Snowflake Authentication Strategy | The strategy used to authenticate to Snowflake. For BYOC deployments, use KEY_PAIR and set the key pair parameters below; for Openflow - Snowflake Deployments, authentication is handled by the deployment session. | Yes |
| Snowflake Account Identifier | The identifier of the Snowflake account where data is persisted. | Yes |
| Snowflake Connection Strategy | When using KEY_PAIR, specifies the strategy for connecting to Snowflake. | Required for BYOC with KEY_PAIR only; otherwise ignored. |
| Snowflake Object Identifier Resolution | Specifies how source object identifiers, such as schema, table, and column names, are stored and queried in Snowflake. This setting dictates whether you must use double quotes in SQL queries. Option 1: Default, case-insensitive (recommended). Snowflake recommends this option if database objects are not expected to have mixed-case names. Important: do not change this setting after connector ingestion has begun; changing it breaks the existing ingestion. If you must change this setting, create a new connector instance. Option 2: Case-sensitive. Snowflake recommends this option if you must preserve source casing for legacy or compatibility reasons, for example, when the source database includes table names that differ only in case. | Yes |
| Snowflake Private Key | When using KEY_PAIR, the RSA private key used for authentication. Either this parameter or Snowflake Private Key File must be set. | No |
| Snowflake Private Key File | When using KEY_PAIR, the file containing the RSA private key used for authentication. Either this parameter or Snowflake Private Key must be set. | No |
| Snowflake Private Key Password | When using KEY_PAIR with an encrypted private key file, the password required to decrypt it. | No |
| Snowflake Role | The Snowflake role used by the connector during query execution. | Yes |
| Snowflake Username | The name of the Snowflake service user that the connector authenticates as. | Yes |
| Snowflake Warehouse | The Snowflake warehouse used to run queries. | Yes |
SQLServer Ingestion Parameters context¶
| Parameter | Description |
|---|---|
| Included Table Names | A comma-separated list of source table paths, including their databases and schemas, for example: `mydb.dbo.orders, mydb.dbo.customers` (names are illustrative). |
| Included Table Regex | A regular expression to match against table paths, including database and schema names. Every path matching the expression is replicated, and new tables matching the pattern that are created later are also included automatically, for example: `mydb\.dbo\..*` |
| Filter JSON | A JSON document containing a list of fully qualified table names and regex patterns for the column names to include in replication. For example, to include all columns that end with `_id` in a hypothetical table: `[{"schema": "dbo", "table": "orders", "includedPattern": ".*_id$"}]`. See Replicate a subset of columns in a table for the full format. |
| Merge Task Schedule CRON | A Quartz CRON expression defining the periods in which merge operations from journal to destination tables are triggered. Use an expression that matches every second, such as `* * * * * ?`, to merge continuously; an expression such as `0 0 2 * * ?` restricts merges to 2:00 AM daily. For additional information and examples, see the cron triggers tutorial in the Quartz documentation. |
Restart table replication¶
A table in a FAILED state (for example, due to a missing primary key or an unsupported schema change) does not restart automatically. If a table enters a FAILED state, or you need to restart its replication from scratch, use the following procedure to remove the table from replication and then add it back.
Note
If the failure was caused by an issue in the source table such as a missing primary key, resolve that issue in the source database before continuing.
Remove the table from flow parameters: In the Ingestion Parameters context, either remove the table from the Included Table Names or modify the Included Table Regex so the table is no longer matched.
Verify the table has been removed:
In the Openflow runtime canvas, right-click a processor group and choose Controller Services.
In the table listing controller services, locate the Table State Store row, click the three vertical dots on the right side of the row, then choose View State.
Important
You must wait until the table’s state is fully removed from this list before proceeding. Do not continue until this configuration change has completed.
Clean up the destination: Once the table’s state shows as fully removed, manually drop the destination table in Snowflake. The connector does not overwrite an existing destination table during the snapshot phase; if the table still exists, replication fails again. Optionally, you can also remove the journal table and stream if they are no longer needed.
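A cleanup sketch, assuming a destination database DEST_DB, schema DBO, and table ORDERS (names are illustrative):

```sql
-- Drop the destination table so the next snapshot can recreate it
DROP TABLE DEST_DB.DBO.ORDERS;

-- Optionally, list the related journal tables before dropping them
SHOW TABLES LIKE 'ORDERS_JOURNAL_%' IN SCHEMA DEST_DB.DBO;
```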
Re-add the table: Update the Included Table Names or Included Table Regex parameters to include the table again.
Verify the restart: Check the Table State Store using the instructions given previously. The state of the table should appear with the status NEW, then transition to SNAPSHOT_REPLICATION, and finally INCREMENTAL_REPLICATION.
Replicate a subset of columns in a table¶
The connector can filter the data replicated for each table down to a subset of configured columns.
To apply filters to columns, modify the Filter JSON parameter in the SQLServer Ingestion Parameters context, adding an array of configurations, one entry for every table to which you want to apply a filter.
Include or exclude columns by name or pattern. You can apply a single condition per table, or combine multiple conditions, with exclusions always taking precedence over inclusions.
The following example shows the fields that are available. The schema and table fields are mandatory. At least one of included, excluded, includedPattern, or excludedPattern is required.
```json
[
  {
    "schema": "<source table schema>",
    "table": "<source table name>",
    "included": ["<column name>", "<column name>"],
    "excluded": ["<column name>", "<column name>"],
    "includedPattern": "<regular expression>",
    "excludedPattern": "<regular expression>"
  }
]
```
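For example, a hypothetical configuration (table and column names are illustrative) that replicates dbo.customers without its email column, and replicates only the columns ending in _id from dbo.orders:

```json
[
  {
    "schema": "dbo",
    "table": "customers",
    "excluded": ["email"]
  },
  {
    "schema": "dbo",
    "table": "orders",
    "includedPattern": ".*_id$"
  }
]
```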
Track data changes in tables¶
The connector replicates the current state of data from the source tables, as well as every state of every row from every changeset. This data is stored in journal tables created in the same schema as the destination table.
The journal table names are formatted as `<source_table_name>_JOURNAL_<timestamp>_<schema_generation>`, where `<timestamp>` is the value in epoch seconds when the source table was added to replication, and `<schema_generation>` is an integer that increases with every schema change on the source table.
As a result, source tables that undergo schema changes will have multiple journal tables.
When you remove a table from replication, then add it back, the <timestamp> value changes, and <schema_generation> starts again from 1.
Important
Snowflake recommends not altering the structure of journal tables in any way. The connector uses them to update the destination table as part of the replication process.
The connector never drops journal tables. It uses only the latest journal table for each replicated source table, reading from append-only streams on top of the journals. To reclaim storage, you can:
Truncate all journal tables at any time.
Drop the journal tables related to source tables that were removed from replication.
Drop all but the latest generation journal tables for actively replicated tables.
For example, if your connector is set to actively replicate the source table orders, and you earlier removed the table customers from replication, you might have the following journal tables. In this case, you can drop all of them except orders_JOURNAL_5678_2:
customers_JOURNAL_1234_1
customers_JOURNAL_1234_2
orders_JOURNAL_5678_1
orders_JOURNAL_5678_2
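A corresponding cleanup sketch, run in the destination schema in Snowflake:

```sql
-- Journals of a table removed from replication can be dropped entirely
DROP TABLE customers_JOURNAL_1234_1;
DROP TABLE customers_JOURNAL_1234_2;

-- For actively replicated tables, keep only the latest generation
DROP TABLE orders_JOURNAL_5678_1;
```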
Configure scheduling of merge tasks¶
The connector uses a warehouse to merge change data capture (CDC) data into destination tables. This operation is triggered by the MergeSnowflakeJournalTable processor. If there are no new changes or if no new flow files are waiting in the MergeSnowflakeJournalTable queue, no merge is triggered and the warehouse auto-suspends.
Use the CRON expression in the Merge Task Schedule CRON parameter to limit warehouse costs and restrict merges to scheduled periods only. It throttles the flow files coming into the MergeSnowflakeJournalTable processor, so merges are triggered only during the dedicated time period. For more information about scheduling, see Scheduling strategy.
Run the flow¶
Right-click on the canvas and select Enable all Controller Services.
Right-click on the imported process group and select Start. The connector starts the data ingestion.