Set up the Openflow Connector for HubSpot

Note

The connector is subject to the Connector Terms.

This topic describes the steps to set up the Openflow Connector for HubSpot.

Prerequisites

  1. Ensure that you have reviewed About Openflow Connector for HubSpot.

  2. Ensure that you have completed the steps in Set up Openflow - BYOC or Set up Openflow - Snowflake Deployment - Task overview.

Get the credentials

As a HubSpot administrator, create a private app in your HubSpot account and generate its access token. This token authenticates your requests to the HubSpot API.

  1. Log in to your HubSpot account.

  2. Navigate to Settings by selecting the gear icon in the top navigation bar.

  3. In the left navigation, go to Integrations » Private Apps.

  4. Select Create a private app.

    1. Enter a name for your app.

    2. Navigate to the Scopes tab.

    3. Select the scopes required for the API requests that you intend to make. To find the scopes required for each endpoint, see Scopes.

    4. Select Create app.

    5. Verify that the required scopes are set for each endpoint that you intend to call.

  5. Select View access token to reveal the access token. Paste the token into the connector parameters, or save it securely.

Set up Snowflake account

As a Snowflake account administrator, perform the following tasks:

  1. Create a new role or use an existing role, and grant it the required database privileges and view privileges.

  2. Create a new Snowflake service user with TYPE set to SERVICE.

  3. Grant the Snowflake service user the role you created in step 1.

  4. Configure key-pair authentication for the Snowflake SERVICE user from step 2, as shown in the sketch below.
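
    A minimal SQL sketch of steps 1 through 4, assuming the placeholder names hubspot_connector_role and hubspot_connector_user; replace the key value with the public key from the key pair you generated:

    CREATE ROLE IF NOT EXISTS hubspot_connector_role;
    CREATE USER IF NOT EXISTS hubspot_connector_user
      TYPE = SERVICE
      DEFAULT_ROLE = hubspot_connector_role;
    GRANT ROLE hubspot_connector_role TO USER hubspot_connector_user;
    -- Register the public key (paste the PEM body without the BEGIN/END header lines)
    ALTER USER hubspot_connector_user SET RSA_PUBLIC_KEY = 'MIIBIjANBgkqh...';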

  5. Snowflake strongly recommends this step. Configure a secrets manager supported by Openflow, for example, AWS Secrets Manager, Azure Key Vault, or HashiCorp Vault, and store the public and private keys in the secret store.

    Note

    If, for any reason, you do not want to use a secrets manager, then you are responsible for safeguarding the public key and private key files used for key-pair authentication according to the security policies of your organization.

    1. Once the secrets manager is configured, determine how you will authenticate to it. On AWS, it’s recommended that you use the EC2 instance role associated with Openflow, because then no other secrets have to be persisted.

    2. In Openflow, configure a Parameter Provider associated with this secrets manager: from the hamburger menu in the upper right, navigate to Controller Settings » Parameter Provider, and then fetch your parameter values.

    3. At this point, all credentials can be referenced by their parameter paths, and no sensitive values need to be persisted within Openflow.

  6. If any other Snowflake users require access to the raw documents and tables ingested by the connector (for example, for custom processing in Snowflake), then grant those users the role created in step 1.
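
    For example, for a hypothetical user named analyst_user:

    GRANT ROLE hubspot_connector_role TO USER analyst_user;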

  7. Create a database and schema in Snowflake for the connector to store ingested data. Grant the following database privileges to the role created in step 1:

    CREATE DATABASE hubspot_destination_db;
    CREATE SCHEMA hubspot_destination_db.hubspot_destination_schema;
    GRANT USAGE ON DATABASE hubspot_destination_db TO ROLE <hubspot_connector_role>;
    GRANT USAGE ON SCHEMA hubspot_destination_db.hubspot_destination_schema TO ROLE <hubspot_connector_role>;
    GRANT CREATE TABLE, CREATE VIEW ON SCHEMA hubspot_destination_db.hubspot_destination_schema TO ROLE <hubspot_connector_role>;
    
  8. Create a warehouse for the connector to use, or use an existing one. Start with the smallest warehouse size, then experiment with the size depending on the number of tables being replicated and the amount of data transferred. Large numbers of tables typically scale better with multi-cluster warehouses than with larger warehouse sizes (see the sketch after step 9).

  9. Ensure that the role used by the connector has the required privileges to use the warehouse. If that’s not the case, grant the required privileges to the role:

    CREATE WAREHOUSE hubspot_connector_warehouse WITH WAREHOUSE_SIZE = 'X-Small';
    GRANT USAGE ON WAREHOUSE hubspot_connector_warehouse TO ROLE <hubspot_connector_role>;
    
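    If you replicate a large number of tables, a multi-cluster warehouse may work better than a larger size, as noted in step 8. A sketch, assuming your Snowflake edition supports multi-cluster warehouses:

    ALTER WAREHOUSE hubspot_connector_warehouse SET
      MIN_CLUSTER_COUNT = 1
      MAX_CLUSTER_COUNT = 3;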

Set up the connector

As a data engineer, perform the following tasks to install and configure the connector:

Install the connector

  1. Navigate to the Openflow Overview page. In the Featured connectors section, select View more connectors.

  2. On the Openflow connectors page, find the connector and select Add to runtime.

  3. In the Select runtime dialog, select your runtime from the Available runtimes drop-down list.

  4. Select Add.

    Note

    Before you install the connector, ensure that you have created a database and schema in Snowflake for the connector to store ingested data.

  5. Authenticate to the deployment with your Snowflake account credentials and select Allow when prompted to allow the runtime application to access your Snowflake account. The connector installation process takes a few minutes to complete.

  6. Authenticate to the runtime with your Snowflake account credentials.

The Openflow canvas appears with the connector process group added to it.

Configure the connector

  1. Right-click on the imported process group and select Parameters.

  2. Populate the required parameter values as described in Flow parameters.

Flow parameters

This section describes the flow parameters that you can configure, grouped into the following parameter contexts:

HubSpot Source Parameters

HubSpot Access Token

The access token of your HubSpot private app, used to authenticate requests to the HubSpot API.

HubSpot Destination Parameters

Destination Database

The database where data will be persisted. It must already exist in Snowflake. The name is case-sensitive; for unquoted identifiers, provide the name in uppercase.

Required: Yes

Destination Schema

The schema where data will be persisted. It must already exist in Snowflake. The name is case-sensitive; for unquoted identifiers, provide the name in uppercase.

See the following examples:

  • CREATE SCHEMA SCHEMA_NAME or CREATE SCHEMA schema_name: use SCHEMA_NAME

  • CREATE SCHEMA "schema_name" or CREATE SCHEMA "SCHEMA_NAME": use schema_name or SCHEMA_NAME, respectively

Required: Yes

Snowflake Account Identifier

When using:

  • Session Token Authentication Strategy: Must be blank.

  • KEY_PAIR: The Snowflake account name, formatted as [organization-name]-[account-name], where data will be persisted.

Required: Yes

Snowflake Authentication Strategy

When using:

  • Snowflake Openflow Deployment: Use SNOWFLAKE_SESSION_TOKEN. This token is managed automatically by Snowflake.

  • BYOC: Use KEY_PAIR as the value for the authentication strategy.

Required: Yes

Snowflake Private Key

When using:

  • Session Token Authentication Strategy: Must be blank.

  • KEY_PAIR: The RSA private key used for authentication.

    The RSA key must be formatted according to PKCS8 standards and have standard PEM headers and footers. Note that either Snowflake Private Key or Snowflake Private Key File must be defined.

Required: No

Snowflake Private Key File

When using:

  • Session Token Authentication Strategy: Must be blank.

  • KEY_PAIR: Upload the file that contains the RSA private key used for authentication to Snowflake, formatted according to PKCS8 standards and including standard PEM headers and footers. The header line begins with -----BEGIN PRIVATE. To upload the private key file, select the Reference asset checkbox.

Required: No

Snowflake Private Key Password

When using:

  • Session Token Authentication Strategy: Must be blank.

  • KEY_PAIR: The password associated with the Snowflake Private Key File.

Required: No

Snowflake Role

When using:

  • Session Token Authentication Strategy: Use your Runtime Role. You can find your Runtime Role in the Openflow UI by navigating to View Details for your runtime.

  • KEY_PAIR: A valid role configured for your service user.

Required: Yes

Snowflake Username

When using:

  • Session Token Authentication Strategy: Must be blank.

  • KEY_PAIR: The username used to connect to the Snowflake instance.

Required: Yes

Snowflake Warehouse

The Snowflake warehouse used to run queries.

Required: Yes

HubSpot Ingestion Parameters

Object Types

A comma-separated list of HubSpot object types to ingest.

Supported object type values are:

  • Products

  • Contacts

  • Companies

  • Feedback Submissions

  • Leads

  • Deals

  • Tickets

  • Goals

  • Line Items

Updated After

Ingest only objects that were updated after the specified date and time. This parameter is optional.

Data Ingestion Schedule

The interval between consecutive ingestion runs. It must be a valid time duration, such as 30 minutes or 1 hour.
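
For illustration, a hypothetical set of parameter values that ingests three object types every 30 minutes; the exact date-time format accepted by Updated After is an assumption here:

    Object Types: Products,Contacts,Companies
    Updated After: 2024-01-01T00:00:00Z
    Data Ingestion Schedule: 30 minutes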

Run the flow

  1. Right-click on the canvas and select Enable all Controller Services.

  2. Right-click on the imported process group and select Start. The connector starts the data ingestion.

Reconfigure the connector

You can modify the connector parameters after the connector has started ingesting data. If the query criteria change, perform the following steps to make sure that the data in the destination table remains consistent.

  1. Stop the connector: Ensure that all Openflow processors are stopped.

  2. Access configuration settings: Navigate to the connector’s configuration settings within the Snowflake Openflow interface.

  3. Modify parameters: Adjust the parameters as required.

  4. Clear processor state: If you are changing the ingestion criteria, then Snowflake strongly recommends that you start ingestion from the beginning to keep the data in the destination table consistent. After you clear the state in the List Fresh HubSpot Objects processor, the connector fetches all the objects from the beginning. You might also need to truncate the destination tables manually to prevent duplicate rows.
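
    For example, to truncate one destination table manually, using the database and schema names from the earlier examples:

    TRUNCATE TABLE hubspot_destination_db.hubspot_destination_schema.PRODUCTS;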

Data structure and views

The connector stores data in the following two formats within your Snowflake database:

Raw data storage

All raw HubSpot data is stored in tables with the exact names specified in the Object Types parameter. For example:

  • If you configure Products,Contacts,Companies in the Object Types parameter, the connector creates three tables: PRODUCTS, CONTACTS, and COMPANIES.

  • Each table contains the complete JSON payload from the HubSpot API responses.

  • Raw data preserves the original structure and all metadata from HubSpot.
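
For example, a quick look at the raw rows, assuming the JSON payload is stored in a VARIANT column (the column name PAYLOAD below is hypothetical; check your table's actual columns):

    -- PAYLOAD is a hypothetical name for the raw JSON column
    SELECT payload:id::STRING AS product_id
    FROM hubspot_destination_db.hubspot_destination_schema.PRODUCTS
    LIMIT 10;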

Flattened views

For easier querying and analysis, the connector automatically creates flattened views for each object type:

  • Each raw table has a corresponding view with the suffix _VIEW. For example: PRODUCTS_VIEW, CONTACTS_VIEW, and COMPANIES_VIEW.

  • Views extract commonly used fields from the JSON payload into individual columns.

  • Complex nested structures are flattened for simplified SQL queries.
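
For example, to preview the flattened contact data (the database and schema names follow the earlier examples; the available columns depend on your HubSpot data):

    SELECT *
    FROM hubspot_destination_db.hubspot_destination_schema.CONTACTS_VIEW
    LIMIT 10;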