About Openflow Connector for Oracle

Note

The connector is subject to the Connector Terms.

This topic describes the basic concepts of Openflow Connector for Oracle, its workflow, and limitations.

The Openflow Connector for Oracle connects an Oracle database instance to Snowflake and replicates data from selected tables in near real-time or on a specified schedule. The connector also creates a log of all data changes, which is available along with the current state of the replicated tables.

Use this connector if you’re looking to do the following:

  • Replicate Oracle database tables into Snowflake for comprehensive, centralized reporting.

How tables are replicated

The tables are replicated in the following stages:

  1. Schema introspection: The connector discovers the columns in the source table, including the column names and types, then validates them against Snowflake's requirements and the connector's limitations. If validation fails, this stage fails and the replication cycle ends. After this stage completes successfully, the connector creates an empty destination table.

  2. Snapshot load: The connector copies all data available in the source table into the destination table. If this stage fails, then no more data is replicated. After successful completion, the data from the source table is available in the destination table.

  3. Incremental load: The connector tracks changes in the source table and applies them to the destination table. This process continues until the table is removed from replication. A permanent failure at this stage stops replication of the source table until the issue is resolved.

Note

Transient failures (such as connection errors) do not prevent the table from being replicated; the affected stage is retried. Permanent failures (such as unsupported data types) do prevent the table from being replicated.

If a permanent failure prevents a table from being replicated, remove the table from the list of tables to be replicated. After you address the problem that caused the failure, you can add the table back to the list of tables to be replicated.
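The staged lifecycle above, including the difference between transient and permanent failures, can be sketched in Python. This is illustrative only: the connector's internals are not public, and every name here (the `Stage` values, the `run_stage` callback, the retry count) is hypothetical.

```python
from enum import Enum, auto

class Stage(Enum):
    SCHEMA_INTROSPECTION = auto()
    SNAPSHOT_LOAD = auto()
    INCREMENTAL_LOAD = auto()
    FAILED = auto()

class TransientError(Exception):
    """E.g. a connection error: the stage can simply be retried."""

class PermanentError(Exception):
    """E.g. an unsupported data type: replication stops until fixed."""

def replicate(table, run_stage, max_retries=3):
    """Drive a table through the replication stages.

    `run_stage(table, stage)` is a hypothetical callback that performs
    one stage. Returns the stage the table ends in.
    """
    for stage in (Stage.SCHEMA_INTROSPECTION, Stage.SNAPSHOT_LOAD):
        for _ in range(max_retries):
            try:
                run_stage(table, stage)
                break  # stage succeeded, move on
            except TransientError:
                continue  # interim failure: retry the same stage
            except PermanentError:
                # Remove the table, fix the issue, then re-add it
                return Stage.FAILED
        else:
            return Stage.FAILED  # retries exhausted
    # Incremental load runs indefinitely until the table is removed
    return Stage.INCREMENTAL_LOAD
```

The key point the sketch captures is that a permanent failure in any stage requires removing the table from replication before it can be re-added.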

Openflow requirements

  • The runtime size must be at least Medium. Use a larger runtime size when replicating large data volumes, especially when row sizes are large.

  • The connector does not support multi-node Openflow runtimes. Configure the runtime for this connector with Min nodes and Max nodes set to 1.

Limitations

  • AWS RDS for Oracle is not supported.

  • Oracle SaaS offerings such as Oracle Fusion Cloud Applications are not supported.

  • Only Oracle database versions 12cR2 and later are supported.

  • Only database tables containing primary keys can be replicated.
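A pre-flight check against these limitations might look like the following sketch. The helper and its parameters are hypothetical illustrations of the documented rules, not part of the connector, which performs its own validation.

```python
def can_replicate(oracle_version: str, has_primary_key: bool,
                  is_rds: bool = False, is_saas: bool = False) -> bool:
    """Return True if a source meets the documented requirements:
    Oracle 12cR2 (12.2) or later, a primary key on the table, and
    a deployment that is neither AWS RDS nor an Oracle SaaS offering.
    """
    if is_rds or is_saas:
        return False
    major, minor, *_ = (int(part) for part in oracle_version.split("."))
    if (major, minor) < (12, 2):  # 12cR2 corresponds to version 12.2
        return False
    return has_primary_key
```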

Next steps

Set up tasks for the Openflow Connector for Oracle