Install and configure the Openflow Connector for Oracle¶
Note
This connector is subject to the Snowflake Connector Terms.
Note
The Openflow Connector for Oracle is also subject to additional terms of service beyond the standard connector terms of service. For more information, see the Openflow Connector for Oracle Addendum.
This topic describes how to install and configure the Openflow Connector for Oracle.
As a data engineer, perform the following tasks to install and configure the connector:
Install the connector¶
To install the connector, do the following as a data engineer:
1. Navigate to the Openflow overview page. In the Featured connectors section, select View more connectors.
2. On the Openflow connectors page, find the connector and select Add to runtime.
3. In the Select runtime dialog, select your runtime from the Available runtimes drop-down list, and then select Add.

Note

Before you install the connector, ensure that you have created a database and schema in Snowflake for the connector to store ingested data.

4. Authenticate to the deployment with your Snowflake account credentials, and select Allow when prompted to allow the runtime application to access your Snowflake account. The connector installation takes a few minutes to complete.
5. Authenticate to the runtime with your Snowflake account credentials.

The Openflow canvas appears with the connector process group added to it.
Configure the connector¶
To configure the connector, do the following as a data engineer:
1. Right-click on the added runtime and select Parameters.
2. Populate the required parameter values. For more information on the required parameter values, see the following sections:
   - Snowflake Destination Parameters: used to establish the connection with Snowflake.
   - Oracle Ingestion Parameters: used to specify the tables to replicate.
   - Oracle Source Parameters: used to define the configuration of data downloaded from Oracle.
Snowflake Destination Parameters¶
| Parameter | Description | Required |
|---|---|---|
| Destination Database | The database where data will be persisted. It must already exist in Snowflake. The name is case-sensitive; for unquoted identifiers, provide the name in uppercase. | Yes |
| Snowflake Authentication Strategy | When using: | Yes |
| Snowflake Account Identifier | When using: | Yes |
| Snowflake Connection Strategy | When using KEY_PAIR, specify the strategy for connecting to Snowflake: | Required for BYOC with KEY_PAIR only; otherwise ignored. |
| Snowflake Private Key | When using: | No |
| Snowflake Private Key File | When using: | No |
| Snowflake Private Key Password | When using: | No |
| Snowflake Role | When using: | Yes |
| Snowflake Username | When using: | Yes |
| Oversized Value Strategy | Determines how the connector handles values that exceed the connector's internal size limit (16 MB) during replication. Possible values are: | No |
| Snowflake Warehouse | The Snowflake warehouse used to run queries. | Yes |
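The case rule for Destination Database follows Snowflake's standard identifier resolution: unquoted identifiers resolve to uppercase, while double-quoted identifiers keep their exact case. The following sketch (the function name is illustrative, not part of the connector) shows why an unquoted database name must be supplied in uppercase:

```python
def resolve_identifier(name: str) -> str:
    """Mimic Snowflake identifier resolution: unquoted identifiers are
    stored in uppercase; double-quoted identifiers keep their exact case
    (surrounding quotes stripped, embedded "" unescaped)."""
    if name.startswith('"') and name.endswith('"') and len(name) >= 2:
        return name[1:-1].replace('""', '"')
    return name.upper()

# An unquoted name resolves to uppercase, so the parameter value
# must be uppercase to match the stored identifier.
print(resolve_identifier("my_db"))   # MY_DB
print(resolve_identifier('"MyDb"'))  # MyDb
```

If the database was created with a quoted, mixed-case name, pass the name in that exact case instead.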
Oracle Ingestion Parameters¶
| Parameter | Description |
|---|---|
| Included Table Names | Comma-separated list of fully qualified table paths. Tables must be specified in the fully qualified DATABASE_NAME.SCHEMA_NAME.TABLE_NAME format. For example: |
| Included Table Regex | A regular expression to match table paths for automatic inclusion of existing and new tables. The regex pattern must match the full three-part path: DATABASE_NAME.SCHEMA_NAME.TABLE_NAME. For example: |
| Filter JSON | A JSON array that includes specific columns, based on a regex pattern, for the given tables. |
| Merge Task Schedule CRON | A CRON expression that defines when merge operations from the journal to the destination table are triggered. For example, `* * * * * ?` for continuous merging. |
| Object Identifier Resolution | Specifies how source object identifiers, such as schema, table, and column names, are stored and queried in Snowflake. This setting determines whether you must use double quotes in SQL queries. Option 1: default, case-insensitive (recommended). Note: Snowflake recommends this option if database objects are not expected to have mixed-case names. Option 2: case-sensitive. Important: Do not change this setting after connector ingestion has begun; doing so breaks the existing ingestion. If you must change this setting, create a new connector instance. |
| Snapshot Fetching Strategy | Determines the snapshot load fetching strategy: |
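As an illustration of how an Included Table Regex pattern is matched against three-part paths (the database, schema, and table names below are hypothetical), the pattern must cover the full DATABASE_NAME.SCHEMA_NAME.TABLE_NAME string, with literal dots escaped:

```python
import re

# Hypothetical pattern: include every table in the SALES schema of MYDB.
# Dots are escaped so they match literal separators, not "any character".
pattern = re.compile(r"MYDB\.SALES\.[A-Z0-9_]+")

paths = ["MYDB.SALES.ORDERS", "MYDB.SALES.CUSTOMERS", "MYDB.HR.EMPLOYEES"]
included = [p for p in paths if pattern.fullmatch(p)]
print(included)  # ['MYDB.SALES.ORDERS', 'MYDB.SALES.CUSTOMERS']
```

A pattern that matches only part of the path (for example, just the table name) would not select any tables, because the whole three-part path is compared.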
Oracle Source Parameters¶
| Parameter | Description | Required |
|---|---|---|
| Oracle Connection URL | JDBC URL of the connection to the database. The URL must specify the target container (PDB or CDB) that contains the data to be replicated. For example: Note: The connector works within a single database/container; ensure that the JDBC URL points directly to the container that holds the tables to be replicated. | Yes |
| Oracle Username | Username of the connect user that has access to the XStream Server. | Yes |
| Oracle Password | Password of the connect user that has access to the XStream Server. | Yes |
| Oracle Database Processor Multiplier | Core processor licensing factor, as described in the Oracle Processor Core Factor Table. | Required for Embedded License only |
| Oracle Database Processor Cores | The number of processor cores in your Oracle database. | Required for Embedded License only |
| XStream Billing Acknowledgement | A confirmation of the licensing agreement. | Required for Embedded License only |
| XStream Out Server Name | The name of the XStream Server, which must already exist in Oracle. | Yes |
| XStream Out Server URL | JDBC URL of the database connection for XStream; it must use the OCI driver. For example: | Yes |
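As a hedged sketch of what these two URLs typically look like (the host, port, and service name are hypothetical, and the exact container each URL should target depends on your Oracle topology), Oracle JDBC URLs commonly use the thin-driver syntax for plain connections and the `oci` prefix when the OCI driver is required:

```python
# Hypothetical connection details; substitute your own host, port,
# and PDB/CDB service name.
host, port, service = "oracle.example.com", 1521, "ORCLPDB1"

# Thin-driver JDBC URL, pointing directly at the container
# that holds the tables to be replicated.
connection_url = f"jdbc:oracle:thin:@//{host}:{port}/{service}"

# OCI-driver JDBC URL, as required for the XStream connection.
xstream_url = f"jdbc:oracle:oci:@//{host}:{port}/{service}"

print(connection_url)  # jdbc:oracle:thin:@//oracle.example.com:1521/ORCLPDB1
print(xstream_url)     # jdbc:oracle:oci:@//oracle.example.com:1521/ORCLPDB1
```

Consult your Oracle configuration for the correct service names; in multitenant setups the data-bearing PDB and the XStream endpoint are not necessarily the same service.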
Restart table replication¶
A table in the FAILED state (for example, due to a missing primary key or an unsupported schema change) does not restart automatically. If a table enters the FAILED state, or if you need to restart replication from scratch, use the following procedure to remove the table from replication and then re-add it.
Note
If the failure was caused by an issue in the source table such as a missing primary key, resolve that issue in the source database before continuing.
1. Remove the table from the flow parameters: In the Ingestion Parameters context, either remove the table from Included Table Names or modify Included Table Regex so that the table is no longer matched.
2. Verify that the table has been removed:
   1. In the Openflow runtime canvas, right-click a processor group and choose Controller Services.
   2. In the table listing the controller services, locate the Table State Store row, click the three vertical dots on the right side of the row, and then choose View State.

   Important

   You must wait until the table's state is fully removed from this list before proceeding. Do not continue until this configuration change has completed.
3. Clean up the destination: After the table's state shows as fully removed, manually drop the destination table in Snowflake. The connector does not overwrite an existing destination table during the snapshot phase; if the table still exists, replication fails again. Optionally, you can also remove the journal table and stream if they are no longer needed.
4. Re-add the table: Update the Included Table Names or Included Table Regex parameter to include the table again.
5. Verify the restart: Check the Table State Store using the instructions given previously. The table should appear with the status NEW, then transition to SNAPSHOT_REPLICATION, and finally to INCREMENTAL_REPLICATION.
Run the flow¶
1. Right-click on the canvas and select Enable all Controller Services.
2. Right-click on the imported process group and select Start. The connector starts the data ingestion.