Set up the Openflow Connector for Amazon Ads¶

Note

The connector is subject to the Connector Terms.

This topic describes the steps to set up the Openflow Connector for Amazon Ads.

Prerequisites¶

  1. Ensure that you have reviewed About Openflow Connector for Amazon Ads.

  2. Ensure that you have set up Openflow.

Get the credentials¶

As an Amazon Ads administrator, perform the following actions:

  1. Make sure that you have access to an Amazon Ads account.

  2. Acquire access to the Amazon Ads API and complete the onboarding process.

  3. Get client ID and client secret.

  4. Create an authorization grant and retrieve a refresh token.

  5. Review the available regions and get the base URL for API requests, based on the region in which you advertise.

  6. Fetch the profile IDs needed for report configuration (see the example after this list).
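For illustration, the following Python sketch shows one way to perform steps 4 and 6: it exchanges the refresh token for a short-lived access token and then lists the available profile IDs. The hosts shown are the NA-region endpoints documented by Amazon; substitute the regional hosts for EU or FE, and treat the credential values as placeholders. Error handling is minimal.

```python
# Minimal sketch: exchange a refresh token for an access token, then list
# profile IDs with the Amazon Ads API. Hosts shown are for the NA region;
# all credential values below are illustrative placeholders.
import requests

CLIENT_ID = "amzn1.application-oa2-client.EXAMPLE"  # from your LwA application
CLIENT_SECRET = "EXAMPLE_SECRET"                    # keep out of source control
REFRESH_TOKEN = "Atzr|EXAMPLE"                      # from the authorization grant

# Exchange the long-lived refresh token for a short-lived access token.
token_resp = requests.post(
    "https://api.amazon.com/auth/o2/token",  # NA OAuth base URL
    data={
        "grant_type": "refresh_token",
        "refresh_token": REFRESH_TOKEN,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    },
    timeout=30,
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# Fetch the profile IDs used later in the Report Profile ID parameter.
profiles_resp = requests.get(
    "https://advertising-api.amazon.com/v2/profiles",  # NA API base URL
    headers={
        "Amazon-Advertising-API-ClientId": CLIENT_ID,
        "Authorization": f"Bearer {access_token}",
    },
    timeout=30,
)
profiles_resp.raise_for_status()
for profile in profiles_resp.json():
    print(profile["profileId"], profile.get("countryCode"))
```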

Set up Snowflake account¶

As a Snowflake account administrator, perform the following tasks:

  1. Create a new role or use an existing role, and grant it the required database privileges. A scripted sketch of steps 1 through 4 and 7 appears after this list.

  2. Create a new Snowflake service user with the type SERVICE.

  3. Grant the Snowflake service user the role you created in step 1.

  4. Configure key-pair authentication for the Snowflake service user from step 2.

  5. Snowflake strongly recommends this step. Configure a secrets manager supported by Openflow, for example AWS Secrets Manager, Azure Key Vault, or HashiCorp Vault, and store the public and private keys in the secret store.

    Note

    If, for any reason, you do not want to use a secrets manager, then you are responsible for safeguarding the public key and private key files used for key-pair authentication according to the security policies of your organization.

    1. Once the secrets manager is configured, determine how you will authenticate to it. On AWS, it is recommended that you use the EC2 instance role associated with Openflow, because this way no other secrets have to be persisted.

    2. In Openflow, configure a Parameter Provider associated with this secrets manager: from the hamburger menu in the upper right, navigate to Controller Settings » Parameter Provider, and then fetch your parameter values.

    3. At this point, all credentials can be referenced with the associated parameter paths, and no sensitive values need to be persisted within Openflow.

  6. If any other Snowflake users require access to the raw documents and tables ingested by the connector (for example, for custom processing in Snowflake), then grant those users the role created in step 1.

  7. Designate a warehouse for the connector to use. Start with the smallest warehouse size, then experiment with the size depending on the number of tables being replicated and the amount of data transferred. Large numbers of tables typically scale better with multi-cluster warehouses than with larger warehouse sizes.
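As an illustration of steps 1 through 4 and 7 (and of the destination database and schema that the connector needs, noted in the installation steps below), the following Python sketch generates a PKCS8 key pair and runs the corresponding SQL through snowflake-connector-python. All object names are examples; adapt the privileges and names to your own environment and security policies.

```python
# Illustrative sketch of the Snowflake-side setup (steps 1-4 and 7, plus the
# destination database and schema the connector needs). All object names are
# examples. Requires: pip install snowflake-connector-python cryptography
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa
import snowflake.connector

# Step 4: generate an RSA key pair for key-pair authentication.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
private_pem = key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,  # PKCS8 format, as the connector expects
    serialization.NoEncryption(),
).decode()
public_pem = key.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo,
).decode()
# Step 5: store both keys in your secrets manager. CREATE USER takes the
# public key body without the PEM header and footer lines.
public_key_body = "".join(public_pem.strip().splitlines()[1:-1])

conn = snowflake.connector.connect(
    account="<organization-name>-<account-name>",
    user="<admin-user>",
    authenticator="externalbrowser",  # or whatever auth your admin user uses
)
for stmt in [
    # Step 1: role, destination database/schema, and database privileges.
    "CREATE ROLE IF NOT EXISTS AMAZON_ADS_CONNECTOR_ROLE",
    "CREATE DATABASE IF NOT EXISTS AMAZON_ADS_DB",
    "CREATE SCHEMA IF NOT EXISTS AMAZON_ADS_DB.INGEST",
    "GRANT USAGE ON DATABASE AMAZON_ADS_DB TO ROLE AMAZON_ADS_CONNECTOR_ROLE",
    "GRANT USAGE, CREATE TABLE ON SCHEMA AMAZON_ADS_DB.INGEST"
    " TO ROLE AMAZON_ADS_CONNECTOR_ROLE",
    # Step 2: service user carrying the public key generated above.
    "CREATE USER IF NOT EXISTS AMAZON_ADS_CONNECTOR_USER TYPE = SERVICE"
    f" RSA_PUBLIC_KEY = '{public_key_body}'"
    " DEFAULT_ROLE = AMAZON_ADS_CONNECTOR_ROLE",
    # Step 3: grant the role to the service user.
    "GRANT ROLE AMAZON_ADS_CONNECTOR_ROLE TO USER AMAZON_ADS_CONNECTOR_USER",
    # Step 7: a dedicated warehouse, starting at the smallest size.
    "CREATE WAREHOUSE IF NOT EXISTS AMAZON_ADS_WH WAREHOUSE_SIZE = XSMALL"
    " AUTO_SUSPEND = 60 AUTO_RESUME = TRUE",
    "GRANT USAGE ON WAREHOUSE AMAZON_ADS_WH TO ROLE AMAZON_ADS_CONNECTOR_ROLE",
]:
    conn.cursor().execute(stmt)
conn.close()

# For illustration only: persist the private key locally. Prefer storing it
# in your secrets manager instead (step 5).
with open("rsa_key.p8", "w") as f:
    f.write(private_pem)
```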

Set up the connector¶

As a data engineer, perform the following tasks to install and configure the connector:

Install the connector¶

  1. Navigate to the Openflow Overview page. In the Featured connectors section, select View more connectors.

  2. On the Openflow connectors page, find the connector and select Add to runtime.

  3. In the Select runtime dialog, select your runtime from the Available runtimes drop-down list.

  4. Select Add.

    Note

    Before you install the connector, ensure that you have created a database and schema in Snowflake for the connector to store ingested data.

  5. Authenticate to the deployment with your Snowflake account credentials and select Allow when prompted to allow the runtime application to access your Snowflake account. The connector installation process takes a few minutes to complete.

  6. Authenticate to the runtime with your Snowflake account credentials.

The Openflow canvas appears with the connector process group added to it.

Configure the connector¶

  1. Right-click on the imported process group and select Parameters.

  2. Populate the required parameter values as described in Flow parameters.

Flow parameters¶

This section describes the flow parameters that you can configure, organized by the following parameter contexts:

Amazon Ads Source Parameters¶

Client ID
  Client ID of the Amazon Advertising account.

Client Secret
  Client secret of the Amazon Advertising account.

OAuth Base URL
  The URL of the authorization server that issues the access token.

Refresh Token
  Refresh token for the Amazon Ads API.

Region
  Environment from which the advertising data is downloaded. Possible values:
    • NA
    • EU
    • FE

Amazon Ads Destination Parameters¶

Destination Database
  The database where data is persisted. It must already exist in Snowflake.

Destination Schema
  The schema where data is persisted. It must already exist in Snowflake.

Snowflake Account Identifier
  Snowflake account name, formatted as [organization-name]-[account-name], where the data is persisted.

Snowflake Authentication Strategy
  Strategy used to authenticate to Snowflake. Possible values:
    • SNOWFLAKE_SESSION_TOKEN: when running the flow on SPCS
    • KEY_PAIR: when setting up access using a private key

Snowflake Private Key
  The RSA private key used for authentication. The RSA key must be formatted according to PKCS8 standards and have standard PEM headers and footers. Note that either Snowflake Private Key or Snowflake Private Key File must be defined.

Snowflake Private Key File
  The file that contains the RSA private key used for authentication to Snowflake, formatted according to PKCS8 standards and with standard PEM headers and footers. The header line starts with -----BEGIN PRIVATE. Select the Reference asset checkbox to upload the private key file.

Snowflake Private Key Password
  The password associated with the Snowflake Private Key File.

Snowflake Role
  Snowflake role used during query execution.

Snowflake Username
  The user name used to connect to the Snowflake instance.

Snowflake Warehouse
  Snowflake warehouse used to run queries.
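For example, a destination parameter context using key-pair authentication might look like the following sketch. All object names are hypothetical and match the setup sketch earlier in this topic.

```python
# Hypothetical example values for the destination parameters. All object
# names are examples matching the setup sketch above.
example_destination_parameters = {
    "Destination Database": "AMAZON_ADS_DB",
    "Destination Schema": "INGEST",
    "Snowflake Account Identifier": "MYORG-MYACCOUNT",  # [organization-name]-[account-name]
    "Snowflake Authentication Strategy": "KEY_PAIR",
    # Provide either Snowflake Private Key or Snowflake Private Key File, not both.
    "Snowflake Private Key File": "/path/to/rsa_key.p8",
    "Snowflake Private Key Password": "",               # empty if the key is unencrypted
    "Snowflake Role": "AMAZON_ADS_CONNECTOR_ROLE",
    "Snowflake Username": "AMAZON_ADS_CONNECTOR_USER",
    "Snowflake Warehouse": "AMAZON_ADS_WH",
}
```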

Amazon Ads Ingestion Parameters¶

Report Name
  Name of the report, used as the destination table name. The name must be unique within the destination schema.

Report Ad Product
  Type of advertising product being reported. Possible values:
    • SPONSORED_PRODUCTS
    • SPONSORED_BRANDS
    • SPONSORED_DISPLAY
    • SPONSORED_TELEVISION
    • DEMAND_SIDE_PLATFORM

Report Columns
  Set of columns that will be present in the final report.

Report Filters
  Set of filters used to trim the returned data.

Report Group By
  Level of granularity for the report.

Report Ingestion Strategy
  Mode in which data is fetched. Possible values:
    • SNAPSHOT
    • INCREMENTAL

Report Ingestion Window
  Number of days of data to download during each incremental ingestion.

Report Profile ID
  The profile ID associated with an advertising account in a specific marketplace.

Report Time Unit
  Date aggregation for the report. Possible values:
    • DAILY: each day is represented by one row
    • SUMMARY: the whole ingested date period is represented as one row

Report Type
  Type of data contained in the report.

Report Start Date
  Start date from which ingestion begins, in YYYY-MM-DD format.

Report Schedule
  Schedule on which the report-creating processor runs.
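As an illustration, a daily incremental report on Sponsored Products campaign data might be configured with values like the following. The report type, group-by level, and column names shown follow Amazon Ads v3 reporting conventions but are examples only; consult the Amazon Ads documentation for the values valid for your ad product.

```python
# Hypothetical example values for one report's ingestion parameters.
# Report type, group-by, and column names are illustrative assumptions.
example_ingestion_parameters = {
    "Report Name": "sp_campaigns_daily",       # becomes the destination table name
    "Report Ad Product": "SPONSORED_PRODUCTS",
    "Report Type": "spCampaigns",              # assumed v3 reportTypeId
    "Report Group By": "campaign",             # assumed group-by level
    "Report Columns": "date,campaignId,impressions,clicks,cost",
    "Report Filters": "",                      # empty: no filtering
    "Report Ingestion Strategy": "INCREMENTAL",
    "Report Ingestion Window": "7",            # re-download the last 7 days
    "Report Time Unit": "DAILY",
    "Report Start Date": "2024-01-01",
    "Report Profile ID": "1234567890",         # from the profiles call above
    "Report Schedule": "1 hour",               # assumed timer-driven period
}
```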

Run the flow¶

  1. Right-click on the canvas and select Enable all Controller Services.

  2. Right-click on the imported process group and select Start. The connector starts the data ingestion.