# Openflow Connector for Salesforce Bulk API: Configure the connector
Note: This connector is subject to the Snowflake Connector Terms.
This topic describes the steps to configure the Openflow Connector for Salesforce Bulk API.
## Install the connector
Follow these steps to install the Openflow Connector for Salesforce Bulk API in an Openflow runtime:
1. Navigate to the Openflow Overview page. In the Featured connectors section, select View more connectors.
2. On the Openflow connectors page, find Openflow Connector for Salesforce Bulk API and select Add to runtime.
3. In the Select runtime dialog, select your runtime from the Available runtimes drop-down.
The Openflow canvas appears with the connector process group added to it.
## Configure the connector
To configure the connector, perform the following steps:
1. Right-click on the imported process group and select Parameters.
2. Populate the required parameter values as described in the table below.
| Parameter | Description |
|---|---|
| Column Removal Strategy | Defines the strategy the connector adopts when a column must be removed from the destination table based on the latest received schema. Three possible values are supported. |
| Connected App Key | Copy-paste the content of the connected app's private key file generated during the Salesforce Setup steps. |
| Connected App Key File | As an alternative to Connected App Key, you can directly upload the private key file. |
| Connected App Key Password | Password set on the private key file during the Salesforce Setup steps. |
| Destination Database | Name of the database in Snowflake where the Salesforce data will be replicated. The database must exist before starting the connector. |
| Destination Schema | Name of the schema, in the destination database, in which the connector creates tables for the Salesforce data. The schema must exist before starting the connector. |
| Filter | Comma-separated list of objects to replicate from Salesforce, or a regular expression to apply against all existing objects. Note: if left empty, all objects will be replicated. This is not recommended, because a Salesforce instance usually contains thousands of objects. |
| Incremental Offload | Whether the processor should perform incremental offload. |
| Initial Load Chunking | When set, the initial load of historical data is performed in chunks. This is useful for large datasets, where loading all historical data in a single query may time out, exceed API limits, or exceed the storage size of the runtime's content repository. Once caught up, the processor continues with normal incremental offload behavior. |
| OAuth2 Audience | Audience (aud claim) to set in the JWT token. |
| OAuth2 Client ID | Set to the Consumer Key value retrieved during the Salesforce Setup steps. |
| OAuth2 Subject | Set to the username of an admin-approved user; the application interacts with the Salesforce APIs on behalf of this user. |
| OAuth2 Token Endpoint URL | Endpoint used to negotiate tokens via the JWT Bearer Flow. |
| Object Fields Filter JSON | A JSON document specifying, per Salesforce object, which fields and field patterns should be included or excluded. Takes the form of an array with one item per object. For example, one filter might include all fields that end with 'name' in the Account object, while another includes only the Id, Name, and Revenue fields. |
| Object Identifier Resolution | Determines whether schema, table, and column names are treated as case-sensitive or case-insensitive. Note: changing this parameter value requires clearing the state and doing a full reload of all objects. |
| Removed Column Name Suffix | Suffix added to the column name when the corresponding Column Removal Strategy option is selected. |
| Run Schedule | Frequency at which the connector checks Salesforce for updates to the objects configured via the Filter parameter. |
| Salesforce Instance | Hostname of the Salesforce instance, including the domain name. Do not include the protocol prefix (such as https://). |
| Snowflake Account Identifier | Snowflake account identifier, formatted as [organization-name]-[account-name]. |
| Snowflake Username | The name of the service user that the connector uses to connect to Snowflake. Required only when using key-pair authentication. |
| Snowflake Private Key | The RSA private key that the connector uses to authenticate to Snowflake, formatted according to PKCS8 standards and including standard PEM headers and footers. The header line starts with -----BEGIN PRIVATE KEY-----. Alternatively, use the Snowflake Private Key File parameter to upload the private key to the Openflow runtime. |
| Snowflake Private Key File | The file containing the RSA private key that the connector uses to authenticate to Snowflake, formatted according to PKCS8 standards and including standard PEM headers and footers. The header line starts with -----BEGIN PRIVATE KEY-----. Select the Reference asset checkbox to upload the private key file and store it securely in the Openflow runtime. |
| Snowflake Private Key Password | The password associated with the Snowflake Private Key File. Required only when the private key file is encrypted. |
| Snowflake Role | Name of the Snowflake role used during query execution. |
| Snowflake Authentication Strategy | Authentication strategy the connector uses to connect to Snowflake. |
| Snowflake Warehouse | The Snowflake warehouse used to run queries. |
| Special Objects Filter | Comma-separated list of objects to offload from Salesforce using direct API access, or a regular expression to apply against all existing objects. Use this filter only for objects that the Salesforce Bulk API does not support, such as knowledge data. This parameter must not overlap with the Filter parameter. |
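The Filter and Special Objects Filter parameters accept either a comma-separated list of object names or a regular expression. As a rough illustration only (the connector's actual matching logic is internal and may differ; `select_objects` is a hypothetical helper), such a filter could be evaluated like this:

```python
import re

def select_objects(filter_value: str, all_objects: list[str]) -> list[str]:
    """Hypothetical helper: evaluate a Filter-style value against object names.

    The connector's real matching semantics may differ; this only
    illustrates the two documented forms (comma list or regex).
    """
    if not filter_value.strip():
        # An empty filter selects every object -- not recommended, since a
        # Salesforce instance usually contains thousands of objects.
        return list(all_objects)
    names = [n.strip() for n in filter_value.split(",")]
    if all(re.fullmatch(r"\w+", n) for n in names):
        # Plain comma-separated list of object names: match literally.
        return [o for o in all_objects if o in names]
    # Otherwise treat the whole value as a regular expression.
    pattern = re.compile(filter_value)
    return [o for o in all_objects if pattern.fullmatch(o)]

objects = ["Account", "Opportunity", "Contact", "AccountHistory"]
print(select_objects("Account, Opportunity", objects))  # ['Account', 'Opportunity']
print(select_objects("Account.*", objects))             # ['Account', 'AccountHistory']
```

A single bare name such as Account is valid under both forms; in this sketch the list branch wins and matches it literally rather than as a pattern.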
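The OAuth2 Client ID, OAuth2 Subject, and OAuth2 Audience parameters correspond to the iss, sub, and aud claims of the JWT assertion used in the JWT Bearer Flow (RFC 7523). Below is a minimal sketch of that claim set, with placeholder values and the RS256 signing step omitted (signing requires the connected app's private key):

```python
import json
import time

# Placeholder values: substitute your own Consumer Key, username, and
# audience. https://login.salesforce.com is the usual audience for
# production orgs; sandboxes typically use https://test.salesforce.com.
claims = {
    "iss": "<consumer_key>",                # OAuth2 Client ID (Consumer Key)
    "sub": "integration.user@example.com",  # OAuth2 Subject (Salesforce username)
    "aud": "https://login.salesforce.com",  # OAuth2 Audience
    "exp": int(time.time()) + 300,          # short-lived assertion
}
print(json.dumps(claims, indent=2))
```

The signed assertion is then POSTed to the OAuth2 Token Endpoint URL with grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer to obtain an access token.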
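The exact JSON schema for Object Fields Filter JSON is elided above; as an illustration only, the sketch below assumes a shape with per-object include lists that may contain glob-style patterns (the real schema and matching rules are defined by the connector):

```python
import fnmatch
import json

# Assumed, illustrative shape -- NOT the connector's documented schema.
filter_json = json.loads("""
[
  {"object": "Account", "include": ["Id", "Name", "Revenue", "*name"]}
]
""")

def included_fields(obj: str, fields: list[str]) -> list[str]:
    """Return the fields this (assumed) filter shape would keep for one object."""
    for entry in filter_json:
        if entry["object"] == obj:
            patterns = entry["include"]
            # Case-insensitive glob match, so '*name' also catches 'LastName'.
            return [f for f in fields
                    if any(fnmatch.fnmatch(f.lower(), p.lower()) for p in patterns)]
    return list(fields)  # objects without an entry keep all fields

print(included_fields("Account", ["Id", "Name", "Revenue", "LastName", "Phone"]))
# ['Id', 'Name', 'Revenue', 'LastName']
```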
## Run the connector
Follow these steps to start the connector and begin replicating data from Salesforce to Snowflake:
1. Right-click on an empty area in the canvas and select Enable all Controller Services.
2. Right-click on the connector process group and select Start.
## Manage object replication
After the connector has been started and objects have been replicated, you can add new objects or remove existing objects from replication.
### Add new objects to replication
To add a new object to replication, update the Filter parameter (or Special Objects Filter parameter, if applicable) with the new object names. You do not need to stop the connector. The new object is replicated at the next scheduled execution.
For example, if the current Filter value is Account, Opportunity and you want to add the Contact object, change the value to Account, Opportunity, Contact.
### Remove objects from replication
Removing an object from replication requires stopping the connector and cleaning up both the connector state and the destination table in Snowflake:
1. Stop all processors in the flow by right-clicking on the connector process group and selecting Stop.
2. Ensure that no in-flight FlowFiles are being processed.
3. Right-click on the canvas and select Parameters, then remove the object name from the Filter parameter (or the Special Objects Filter parameter, if applicable).
4. Right-click on the canvas and select Disable all controller services.
5. Go to Controller services and open the state of the controller service named Salesforce Bulk Jobs State.
6. Select the trash icon next to the object type you removed to delete its state entry.
7. Right-click on the canvas and select Enable all controller services, then start all processors to resume the connector.
8. If applicable, drop the corresponding table from the Snowflake destination database to clean up the previously replicated data. For example:

   DROP TABLE <database_name>.<schema_name>.<object_name>;
## Next steps
To monitor and troubleshoot the connector, see Openflow Connector for Salesforce Bulk API: Troubleshooting.