Set up the Openflow Connector for LinkedIn Ads¶

Note

The connector is subject to the Connector Terms.

This topic describes the steps to set up the Openflow Connector for LinkedIn Ads.

Prerequisites¶

  1. Ensure that you have reviewed About Openflow Connector for LinkedIn Ads.

  2. Ensure that you have set up Openflow.

Get the credentials¶

  1. As a LinkedIn Ads user, perform the following tasks:

    1. Optional: If you don’t have an ad account to run and manage campaigns, create one.

    2. Ensure that the user account has at least a VIEWER role on the ad account.

    3. Use the user account to apply for Advertising API access. For more information, see the Microsoft quick start.

    4. Obtain a refresh token. Use 3-legged OAuth and the r_ads_reporting scope.

    5. Obtain the client ID and client secret from the LinkedIn Developer Portal. These credentials are available in the Auth tab under App Details.

Set up Snowflake account¶

As a Snowflake account administrator, perform the following tasks:

  1. Create a new role or use an existing role.

  2. Create a new Snowflake service user with the type as SERVICE.

  3. Grant the Snowflake service user the role you created in the previous steps.

  4. Configure key-pair authentication for the Snowflake service user created in step 2.
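
    The following SQL is a minimal sketch of steps 1 through 4. The role and user names are hypothetical placeholders; substitute your own values and the public key generated for this user:

    CREATE ROLE linkedin_connector_role;
    CREATE USER linkedin_connector_user TYPE = SERVICE DEFAULT_ROLE = linkedin_connector_role;
    GRANT ROLE linkedin_connector_role TO USER linkedin_connector_user;
    -- Register the public key for key-pair authentication (value shortened here).
    ALTER USER linkedin_connector_user SET RSA_PUBLIC_KEY = '<public_key>';
    -- Optional check: once the key is set, the output includes an RSA_PUBLIC_KEY_FP property.
    DESCRIBE USER linkedin_connector_user;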

  5. Snowflake strongly recommends this step. Configure a secrets manager supported by Openflow, for example, AWS, Azure, or HashiCorp, and store the public and private keys in the secret store.

    Note

    If, for any reason, you do not wish to use a secrets manager, then you are responsible for safeguarding the public key and private key files used for key-pair authentication according to the security policies of your organization.

  6. Once the secrets manager is configured, determine how you will authenticate to it. On AWS, it’s recommended that you use the EC2 instance role associated with Openflow, because that way no other secrets have to be persisted.

  7. In Openflow, configure a Parameter Provider associated with this secrets manager. From the hamburger menu in the upper right, navigate to Controller Settings » Parameter Provider and then fetch your parameter values.

  8. At this point all credentials can be referenced with the associated parameter paths and no sensitive values need to be persisted within Openflow.

  9. If any other Snowflake users require access to the raw documents and tables ingested by the connector (for example, for custom processing in Snowflake), then grant those users the role created in step 1.
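
    For example, assuming the hypothetical role from the earlier sketch and a placeholder user to substitute:

    GRANT ROLE linkedin_connector_role TO USER <analyst_user>;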

  10. Create a database and schema in Snowflake for the connector to store ingested data. Grant the required database privileges to the role created in the first step. Substitute the role placeholder with the actual value and use the following SQL commands:

    CREATE DATABASE linkedin_destination_db;
    CREATE SCHEMA linkedin_destination_db.linkedin_destination_schema;
    GRANT USAGE ON DATABASE linkedin_destination_db TO ROLE <linkedin_connector_role>;
    GRANT USAGE ON SCHEMA linkedin_destination_db.linkedin_destination_schema TO ROLE <linkedin_connector_role>;
    GRANT CREATE TABLE ON SCHEMA linkedin_destination_db.linkedin_destination_schema TO ROLE <linkedin_connector_role>;
    
  11. Create a warehouse that will be used by the connector or use an existing one. Start with the smallest warehouse size, then experiment with the size depending on the number of tables being replicated and the amount of data transferred. Large numbers of tables typically scale better with multi-cluster warehouses rather than larger warehouse sizes; see the multi-cluster example that follows this procedure.

  12. Ensure that the role used by the connector has the required privileges to use the warehouse. If it does not, grant the required privileges to the role.

    CREATE WAREHOUSE linkedin_connector_warehouse WITH WAREHOUSE_SIZE = 'X-Small';
    GRANT USAGE ON WAREHOUSE linkedin_connector_warehouse TO ROLE <linkedin_connector_role>;
    
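If you replicate a large number of tables, a multi-cluster warehouse is often a better fit than a larger size, as noted in step 11. A minimal sketch of an alternative to the warehouse created in step 12, assuming your Snowflake edition supports multi-cluster warehouses:

    CREATE WAREHOUSE linkedin_connector_warehouse WITH
      WAREHOUSE_SIZE = 'X-Small'
      MIN_CLUSTER_COUNT = 1
      MAX_CLUSTER_COUNT = 3
      SCALING_POLICY = 'STANDARD';
    GRANT USAGE ON WAREHOUSE linkedin_connector_warehouse TO ROLE <linkedin_connector_role>;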

Set up the connector¶

As a data engineer, perform the following tasks to install and configure the connector:

Install the connector¶

  1. Navigate to the Openflow Overview page. In the Featured connectors section, select View more connectors.

  2. On the Openflow connectors page, find the connector and select Add to runtime.

  3. In the Select runtime dialog, select your runtime from the Available runtimes drop-down list.

  4. Select Add.

    Note

    Before you install the connector, ensure that you have created a database and schema in Snowflake for the connector to store ingested data.

  5. Authenticate to the deployment with your Snowflake account credentials and select Allow when prompted to allow the runtime application to access your Snowflake account. The connector installation process takes a few minutes to complete.

  6. Authenticate to the runtime with your Snowflake account credentials.

The Openflow canvas appears with the connector process group added to it.

Configure the connector¶

Note

Each process group is responsible for fetching data for a single report configuration. To use multiple configurations on a regular schedule, create a separate process group for each report configuration.

  1. Right-click on the imported process group and select Parameters.

  2. Populate the required parameter values as described in Flow parameters.

Flow parameters¶

This section describes the flow parameters that you can configure based on the following parameter contexts:

LinkedIn Ads Source Parameters¶

Client ID
    The client ID of an application registered on LinkedIn.

Client Secret
    The client secret related to the client ID.

Refresh Token
    A user obtains the refresh token after the app registration process. They use it together with the client ID and the client secret to get an access token.

Token Endpoint
    The token endpoint is obtained by a user during the app registration process.

LinkedIn Ads Destination Parameters¶

Destination Database
    The database where data will be persisted. It must already exist in Snowflake.

Destination Schema
    The schema where data will be persisted. It must already exist in Snowflake.

Snowflake Account Identifier
    Snowflake account name formatted as [organization-name]-[account-name] where data will be persisted.

Snowflake Authentication Strategy
    Strategy of authentication to Snowflake. Possible values: SNOWFLAKE_SESSION_TOKEN, when running the flow on SPCS, and KEY_PAIR, when setting up access using a private key.

Snowflake Private Key
    The RSA private key used for authentication. The RSA key must be formatted according to PKCS8 standards and have standard PEM headers and footers. Note that either Snowflake Private Key File or Snowflake Private Key must be defined.

Snowflake Private Key File
    The file that contains the RSA private key used for authentication to Snowflake, formatted according to PKCS8 standards and having standard PEM headers and footers. The header line starts with -----BEGIN PRIVATE. Select the Reference asset checkbox to upload the private key file.

Snowflake Private Key Password
    The password associated with the Snowflake Private Key File.

Snowflake Role
    Snowflake role used during query execution.

Snowflake Username
    User name used to connect to the Snowflake instance.

Snowflake Warehouse
    Snowflake warehouse used to run queries.
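
To find the values for the Snowflake Account Identifier parameter, one option is to query the current organization and account names in Snowflake, for example:

    SELECT CURRENT_ORGANIZATION_NAME() AS organization_name,
           CURRENT_ACCOUNT_NAME()      AS account_name;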

LinkedIn Ads Ingestion Parameters¶

The following parameters are not inherited from the other parameter contexts:

Report Name
    The unique name of the report. It is uppercased and used as the destination table name.

Start Date
    The date from which the connector starts ingesting data.

Time Granularity
    Time granularity of results. Possible values:

      • ALL: Results grouped into a single result across the entire time range of the report.

      • DAILY: Results grouped by day.

      • MONTHLY: Results grouped by month.

      • YEARLY: Results grouped by year.

Conversion Window
    The timeframe for which data is refreshed during incremental load when DAILY time granularity is chosen. For example, if the conversion window is equal to 30 days, then during the INCREMENTAL load, the ingestion starts from the date of the last successful ingestion minus 30 days.

    It must be specified only when DAILY time granularity is chosen. For other time granularities, such as ALL, MONTHLY, and YEARLY, the SNAPSHOT ingestion strategy is used. Data from the start date to the present is always downloaded, so there is no need to use a conversion window.

    The conversion window can be any number from 1 to 365.

Metrics
    List of comma-separated metrics. Metrics are case-sensitive. For more information, see Reporting.

    The pivotValues and dateRange metrics are mandatory and are automatically included by the connector.

    Up to 20 metrics can be specified, including the mandatory metrics.

Pivots
    List of comma-separated pivots. For the available pivot values, see Reporting.

    The connector uses the Analytics Finder when zero or one pivot is specified, and switches to the Statistics Finder when two or three pivots are selected. You can use a maximum of three pivots.

Shares
    List of comma-separated share IDs. This parameter can be used to filter results by share ID.

Campaigns
    List of comma-separated campaign IDs. This parameter can be used to filter results by campaign ID.

Campaign Groups
    List of comma-separated campaign group IDs. This parameter can be used to filter results by campaign group ID.

Accounts
    List of comma-separated account IDs. This parameter can be used to filter results by account ID.

Companies
    List of comma-separated company IDs. This parameter can be used to filter results by company ID.

Destination Database
    The destination database in which the destination table is created. It must be created by the user.

Destination Schema
    The destination schema in which the destination table is created. It must be created by the user.

Note

You must specify at least one of the filters: shares, campaigns, campaign groups, accounts, or companies.

Run the flow¶

  1. Right-click on the plane and select Enable all Controller Services.

  2. Right-click on the imported process group and select Start.

    The connector starts the data ingestion.
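
Once the flow has ingested data, you can spot-check the results in Snowflake. A minimal sketch, assuming the destination objects from the earlier examples and a hypothetical report named my_report (created as the table MY_REPORT):

    SELECT *
    FROM linkedin_destination_db.linkedin_destination_schema.MY_REPORT
    LIMIT 10;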