Set up the Openflow Connector for HubSpot¶

Note

The connector is subject to the Connector Terms.

This topic describes the steps to set up the Openflow Connector for HubSpot.

Prerequisites¶

  1. Ensure that you have reviewed About Openflow Connector for HubSpot.

  2. Ensure that you have set up Openflow.

Get the credentials¶

As a HubSpot administrator, create a private app in your HubSpot account and generate its access token. The token lets you authenticate your requests to the HubSpot API.

  1. Log in to your HubSpot account.

  2. Navigate to Settings by selecting the gear icon in the top navigation bar.

  3. In the left navigation, go to Integrations » Private Apps.

  4. Select Create a private app.

    1. Enter a name for your app.

    2. Navigate to the Scopes tab.

    3. Select the scopes required for the API requests you intend to make. To find the scopes required for each request, see Scopes.

    4. Select Create app.

    5. Set the required scopes for the API requests you intend to make for each endpoint.

  5. Select View access token to view the access token. Paste the token into the connector parameters, or save it securely. A quick way to verify the token is sketched below.
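
Before wiring the token into the connector, you can confirm that it works and that its scopes are sufficient with a single API call. The snippet below is a minimal sketch, assuming the Python requests package and a token scoped for contacts; the endpoint and limit value are only illustrative, and the call that makes sense for you depends on the object types you selected.

```python
import requests

# Placeholder; use the private app access token from the step above.
HUBSPOT_ACCESS_TOKEN = "<your-private-app-token>"

# Private app tokens are sent as a Bearer token to the HubSpot CRM v3 API.
response = requests.get(
    "https://api.hubapi.com/crm/v3/objects/contacts",
    headers={"Authorization": f"Bearer {HUBSPOT_ACCESS_TOKEN}"},
    params={"limit": 1},
    timeout=30,
)

if response.status_code == 200:
    print("Token accepted; sample object:", response.json().get("results", []))
else:
    # 401 usually means a bad token; 403 usually means a missing scope.
    print("Request failed:", response.status_code, response.text)
```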

Set up Snowflake account¶

As a Snowflake account administrator, perform the following tasks:

  1. Create a new role or use an existing role and grant it the required database privileges.

  2. Create a new Snowflake service user with the type SERVICE.

  3. Grant the Snowflake service user the role you created in step 1.

  4. Configure key-pair authentication for the Snowflake SERVICE user from step 2. A sketch for generating a compliant key pair follows this list.

  5. Snowflake strongly recommends this step: configure a secrets manager supported by Openflow, for example AWS, Azure, or HashiCorp, and store the public and private keys in the secret store.

    Note

    If, for any reason, you do not wish to use a secrets manager, you are responsible for safeguarding the public and private key files used for key-pair authentication according to your organization's security policies.

    1. Once the secrets manager is configured, determine how you will authenticate to it. On AWS, it is recommended that you use the EC2 instance role associated with Openflow, because then no other secrets have to be persisted.

    2. In Openflow, configure a Parameter Provider associated with this secrets manager: from the hamburger menu in the upper right, navigate to Controller Settings » Parameter Provider, and then fetch your parameter values.

    3. At this point all credentials can be referenced with the associated parameter paths and no sensitive values need to be persisted within Openflow.

  6. If any other Snowflake users require access to the raw documents and tables ingested by the connector (for example, for custom processing in Snowflake), grant those users the role created in step 1.

  7. Designate a warehouse for the connector to use. Start with the smallest warehouse size, then experiment with the size depending on the number of tables being replicated and the amount of data transferred. Large numbers of tables typically scale better with multi-cluster warehouses than with larger warehouse sizes.
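
For step 4, the private key must be in PKCS8 PEM format and the matching public key is what you register on the Snowflake service user. The following is a minimal sketch using the Python cryptography package; the file names and the choice of an unencrypted key are assumptions, and you can equally generate the pair with openssl by following Snowflake's key-pair authentication documentation.

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a 2048-bit RSA key pair for key-pair authentication.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Private key in PKCS8 PEM format (-----BEGIN PRIVATE KEY-----), which is
# what the Snowflake Private Key / Private Key File parameters expect.
private_pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption(),  # assumption: no passphrase
)

# Public key in PEM format; strip the BEGIN/END lines before registering it
# on the service user with ALTER USER ... SET RSA_PUBLIC_KEY = '...'.
public_pem = key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)

with open("rsa_key.p8", "wb") as f:   # hypothetical file names
    f.write(private_pem)
with open("rsa_key.pub", "wb") as f:
    f.write(public_pem)
```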

Configure the connector¶

As a data engineer, perform the following tasks to configure a connector:

  1. Create a database and schema in Snowflake for the connector to store ingested data. A sketch for creating them appears after this list.

  2. Download the connector definition file.

  3. Import the connector definition into Openflow:

    1. Open the Snowflake Openflow canvas.

    2. Add a process group. To do this, drag and drop the Process Group icon from the tool palette at the top of the page onto the canvas. Once you release your pointer, a Create Process Group dialog appears.

    3. On the Create Process Group dialog, select the connector definition file to import.

  4. Right-click on the imported process group and select Parameters.

  5. Populate the required parameter values as described in Flow parameters.
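
For step 1, the destination database and schema can be created ahead of time with any SQL client. The snippet below is a minimal sketch using the snowflake-connector-python package; the connection details and the HUBSPOT_DEST and RAW names are placeholders, not values prescribed by the connector.

```python
import snowflake.connector

# Placeholder connection details; in practice, use the service user, role, and
# key-pair credentials set up earlier (private_key_file is supported by recent
# versions of the connector package).
conn = snowflake.connector.connect(
    account="<organization-name>-<account-name>",
    user="<openflow-service-user>",
    private_key_file="rsa_key.p8",
    role="<connector-role>",
    warehouse="<connector-warehouse>",
)

cur = conn.cursor()
try:
    # Hypothetical names for the destination objects the connector writes to.
    cur.execute("CREATE DATABASE IF NOT EXISTS HUBSPOT_DEST")
    cur.execute("CREATE SCHEMA IF NOT EXISTS HUBSPOT_DEST.RAW")
finally:
    cur.close()
    conn.close()
```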

Flow parameters¶

The following table describes the flow parameters that you can configure.

| Parameter | Description | Required |
|---|---|---|
| Snowflake Account Identifier | The Snowflake account, in the [organization-name]-[account-name] format, where data retrieved from the HubSpot API is stored. | Yes |
| Snowflake Private Key | The RSA private key used for authentication. The key must be formatted according to PKCS8 standards and have standard PEM headers and footers. Either Snowflake Private Key or Snowflake Private Key File must be defined. | No |
| Snowflake Private Key File | The file that contains the RSA private key used for authentication to Snowflake, formatted according to PKCS8 standards and with standard PEM headers and footers. The header line starts with -----BEGIN PRIVATE. | No |
| Snowflake Private Key Password | The password associated with the Snowflake Private Key File. | No |
| Snowflake User Role | The Snowflake role used to create, retrieve, update, and delete the tables and data used by this connector. | Yes |
| Snowflake Username | The name of the Snowflake user that the connector uses. | Yes |
| Destination Warehouse | The name of the Snowflake warehouse that the connector uses. | Yes |
| Destination Database | The Snowflake destination database. The database must be created in advance. | Yes |
| HubSpot Access Token | The HubSpot private app access token. | Yes |
| Object Type | The type of HubSpot object to ingest. Possible values: Products, Contacts, Companies, Feedback Submissions, Leads, Deals, Tickets, Goals, Line Items. | Yes |
| Updated After | Ingests only objects updated after the specified date and time. | No |
| Archived Data Sync Schedule | The interval between consecutive syncs of archived data. Must be a valid time duration, such as 30 minutes or 1 hour. | Yes |
| Fresh Data Sync Schedule | The interval between consecutive syncs of fresh data. Must be a valid time duration, such as 30 minutes or 1 hour. | Yes |
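
As an illustration only, the sketch below shows one hypothetical way the parameters above might be filled in. Every value is a placeholder, and the exact formats accepted (for example, for Updated After) should be confirmed against your connector version.

```python
# Hypothetical example values for the flow parameters; adjust to your environment.
flow_parameters = {
    "Snowflake Account Identifier": "myorg-myaccount",
    "Snowflake Private Key File": "/opt/openflow/keys/rsa_key.p8",
    "Snowflake Private Key Password": "",              # only if the key is encrypted
    "Snowflake User Role": "HUBSPOT_CONNECTOR_ROLE",
    "Snowflake Username": "OPENFLOW_HUBSPOT_USER",
    "Destination Warehouse": "OPENFLOW_WH",
    "Destination Database": "HUBSPOT_DEST",
    "HubSpot Access Token": "<private-app-token>",
    "Object Type": "Contacts",
    "Updated After": "2024-01-01T00:00:00Z",           # assumed ISO-8601 timestamp
    "Archived Data Sync Schedule": "1 hour",
    "Fresh Data Sync Schedule": "30 minutes",
}
```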

Run the flow¶

  1. Right-click on the canvas and select Enable all Controller Services.

  2. Right-click on the imported process group and select Start. The connector starts the data ingestion.

Reconfigure the connector¶

You can modify the connector parameters after the connector has started ingesting data. If the ingestion criteria change, perform the following steps to make sure that the data in the destination table stays consistent.

  1. Stop the connector: Ensure that all Openflow processors are stopped.

  2. Access configuration settings: Navigate to the connector’s configuration settings within the Snowflake Openflow interface.

  3. Modify parameters: Adjust the parameters as required.

  4. Clear processor state: If you are changing ingestion criteria, Snowflake strongly recommends that you start ingestion from the beginning to keep the data in the destination table consistent. After the state is cleared on the ingest processor, the connector fetches all objects from the beginning. You may need to manually truncate the destination table to prevent duplicated rows; a sketch follows this list.
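
If you restart ingestion from the beginning, a manual truncate of the destination table avoids duplicated rows. The snippet below is a minimal sketch using snowflake-connector-python; the database, schema, and table names are placeholders that depend on your destination settings and the object type being ingested.

```python
import snowflake.connector

# Placeholder connection details; reuse the connector's service user and role.
conn = snowflake.connector.connect(
    account="<organization-name>-<account-name>",
    user="<openflow-service-user>",
    private_key_file="rsa_key.p8",
    role="<connector-role>",
    warehouse="<connector-warehouse>",
)

try:
    # Hypothetical destination table for ingested HubSpot contacts.
    conn.cursor().execute("TRUNCATE TABLE IF EXISTS HUBSPOT_DEST.RAW.CONTACTS")
finally:
    conn.close()
```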