Set up the Openflow Connector for Jira Cloud¶

Note

The connector is subject to the Connector Terms.

This topic describes the steps to set up the Openflow Connector for Jira Cloud.

Prerequisites¶

  1. Ensure that you have reviewed About Openflow Connector for Jira Cloud.

  2. Ensure that you have set up Openflow.

Get the credentials¶

As a Jira Cloud administrator, perform the following tasks in your Atlassian account:

  1. Navigate to the API tokens page.

  2. Select Create API token with scopes.

  3. In the Create an API token dialog box, provide a descriptive name for the API token and select an expiration date, which can range from 1 to 365 days.

  4. Select Jira as the API token app.

  5. Select the Jira scopes read:jira-work and read:jira-user.

  6. Select Create token.

  7. In the Copy your API token dialog box, select Copy to copy your generated API token, and then paste the token into the connector parameters or save it securely. You can verify that the token works with a quick API call, as shown after these steps.

  8. Select Close to close the dialog box.
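
To confirm that the token works before you configure the connector, you can call the Jira Cloud REST API directly. The following is a minimal sketch in Python using the requests package with basic authentication; the site URL, email, and token values are placeholders for your own.

    import requests

    # Placeholders: replace with your Jira site URL, Atlassian account email, and API token.
    JIRA_URL = "https://your-site.atlassian.net"
    EMAIL = "you@example.com"
    API_TOKEN = "<your-api-token>"

    # Basic authentication with email + API token; /rest/api/3/myself is a standard
    # Jira Cloud endpoint that only requires read access to the current user.
    resp = requests.get(
        f"{JIRA_URL}/rest/api/3/myself",
        auth=(EMAIL, API_TOKEN),
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    print("Token works for:", resp.json().get("displayName"))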

Set up Snowflake account¶

As a Snowflake account administrator, perform the following tasks:

  1. Create a new role or use an existing role.

  2. Create a new Snowflake service user of type SERVICE.

  3. Grant the Snowflake service user the role you created or selected in step 1.

  4. Configure key-pair authentication for the Snowflake service user from step 2. For one way to generate the key pair and verify the connection, see the sketch after this list.

  5. Snowflake strongly recommends this step: configure a secrets manager supported by Openflow, for example AWS, Azure, or HashiCorp, and store the public and private keys in the secret store.

    Note

    If for any reason, you do not wish to use a secrets manager, then you are responsible for safeguarding the public key and private key files used for key-pair authentication according to the security policies of your organization.

    1. Once the secrets manager is configured, determine how you will authenticate to it. On AWS, it is recommended that you use the EC2 instance role associated with Openflow, because that way no other secrets have to be persisted.

    2. In Openflow, configure a Parameter Provider associated with this secrets manager: from the hamburger menu in the upper right, navigate to Controller Settings » Parameter Provider, and then fetch your parameter values.

    3. At this point all credentials can be referenced with the associated parameter paths and no sensitive values need to be persisted within Openflow.

  6. If any other Snowflake users require access to the raw documents and tables ingested by the connector (for example, for custom processing in Snowflake), grant those users the role created in step 1.

  7. Create a database and schema in Snowflake for the connector to store ingested data, and grant the following privileges on them to the role created in step 1:

    CREATE DATABASE jira_destination_db;
    CREATE SCHEMA jira_destination_db.jira_destination_schema;
    GRANT USAGE ON DATABASE jira_destination_db TO ROLE <jira_connector_role>;
    GRANT USAGE ON SCHEMA jira_destination_db.jira_destination_schema TO ROLE <jira_connector_role>;
    GRANT CREATE TABLE ON SCHEMA jira_destination_db.jira_destination_schema TO ROLE <jira_connector_role>;
    
  8. Create a warehouse for the connector to use, or choose an existing one. Start with the smallest warehouse size, then experiment depending on the number of tables being replicated and the amount of data transferred. Large numbers of tables typically scale better with multi-cluster warehouses than with larger warehouse sizes.

  9. Ensure that the role used by the connector has the required privileges to use the warehouse. If not, grant the required privileges to the role:

    CREATE WAREHOUSE jira_connector_warehouse WITH WAREHOUSE_SIZE = 'X-Small';
    GRANT USAGE ON WAREHOUSE jira_connector_warehouse TO ROLE <jira_connector_role>;
    
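
To sanity-check steps 2 through 4, you can generate a key pair and verify that the service user authenticates with it. The following is a minimal sketch using Python with the cryptography and snowflake-connector-python packages; the account identifier, user, role, and file names are placeholders for your own values, and you must register the printed public key on the user before the connection check will succeed.

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    import snowflake.connector

    # Generate an RSA key pair for key-pair authentication.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Print the public key; register its body (without the PEM header and footer)
    # on the service user first:
    #   ALTER USER <service_user> SET RSA_PUBLIC_KEY = '<key body>';
    public_pem = key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    print(public_pem.decode())

    # Save the private key in PKCS8 PEM form, for example for the connector's
    # Snowflake Private Key File parameter.
    with open("rsa_key.p8", "wb") as f:
        f.write(key.private_bytes(
            encoding=serialization.Encoding.PEM,
            format=serialization.PrivateFormat.PKCS8,
            encryption_algorithm=serialization.NoEncryption(),
        ))

    # Verify that the service user can authenticate with the key pair.
    conn = snowflake.connector.connect(
        account="myorg-myaccount",            # placeholder account identifier
        user="JIRA_CONNECTOR_USER",           # placeholder service user from step 2
        role="JIRA_CONNECTOR_ROLE",           # placeholder role from step 1
        warehouse="jira_connector_warehouse",
        private_key=key.private_bytes(        # DER-encoded PKCS8 private key bytes
            encoding=serialization.Encoding.DER,
            format=serialization.PrivateFormat.PKCS8,
            encryption_algorithm=serialization.NoEncryption(),
        ),
    )
    print(conn.cursor().execute("SELECT CURRENT_USER(), CURRENT_ROLE()").fetchone())
    conn.close()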

Set up the connector¶

As a data engineer, perform the following tasks to install and configure the connector:

Install the connector¶

  1. Navigate to the Openflow Overview page. In the Featured connectors section, select View more connectors.

  2. On the Openflow connectors page, find the connector and select Add to runtime.

  3. In the Select runtime dialog, select your runtime from the Available runtimes drop-down list.

  4. Select Add.

    Note

    Before you install the connector, ensure that you have created a database and schema in Snowflake for the connector to store ingested data.

  5. Authenticate to the deployment with your Snowflake account credentials and select Allow when prompted to allow the runtime application to access your Snowflake account. The connector installation process takes a few minutes to complete.

  6. Authenticate to the runtime with your Snowflake account credentials.

The Openflow canvas appears with the connector process group added to it.

Configure the connector¶

  1. Right-click on the imported process group and select Parameters.

  2. Populate the required parameter values as described in Flow parameters.

Flow parameters¶

This section describes the flow parameters that you can configure, grouped into the following parameter contexts:

Jira Cloud Source Parameters¶

Authorization Method
    Authorization method for the Jira Cloud API. Default value: BASIC.

Jira Email
    Email address for the Atlassian account. Visible only when Authorization Method is BASIC.

Jira API Token
    API access token for your Atlassian Jira account. Visible only when Authorization Method is BASIC.

Environment URL
    URL of the Atlassian Jira environment.

Jira Cloud Destination Parameters¶

Destination Database
    The database where data will be persisted. It must already exist in Snowflake.

Destination Schema
    The schema where data will be persisted. It must already exist in Snowflake.

Snowflake Account Identifier
    Snowflake account name, formatted as [organization-name]-[account-name], where data will be persisted; for example, myorganization-myaccount.

Snowflake Authentication Strategy
    Authentication strategy for Snowflake. Possible values: SNOWFLAKE_SESSION_TOKEN, when you are running the flow on SPCS, and KEY_PAIR, when you want to set up access using a private key.

Snowflake Private Key
    The RSA private key used for authentication. The RSA key must be formatted according to PKCS8 standards and have standard PEM headers and footers. Note that either Snowflake Private Key or Snowflake Private Key File must be defined.

Snowflake Private Key File
    The file that contains the RSA private key used for authentication to Snowflake, formatted according to PKCS8 standards and with standard PEM headers and footers. The header line starts with -----BEGIN PRIVATE. Select the Reference asset checkbox to upload the private key file.

Snowflake Private Key Password
    The password associated with the Snowflake Private Key File.

Snowflake Role
    Snowflake role used during query execution.

Snowflake Username
    User name used to connect to the Snowflake instance.

Snowflake Warehouse
    Snowflake warehouse used to run queries.

Jira Cloud Ingestion Parameters¶

Search Type
    Type of search to perform. Possible values: SIMPLE and JQL. Default value: SIMPLE.

JQL Query
    A JQL query. Use it only when Search Type is JQL.

Project Name
    Searches for issues belonging to a particular project by project name, project key, or project ID. Use it only when Search Type is SIMPLE.

Status Category
    Status category filter for the simple search. Use it only when Search Type is SIMPLE. Example values: Done, In Progress, To Do.

Updated After
    Filters for issues updated after the specified date. Use it only when Search Type is SIMPLE. Must be in the yyyy-MM-dd format, such as 2023-10-01.

Created After
    Filters for issues created after the specified date. Use it only when Search Type is SIMPLE. Must be in the yyyy-MM-dd format, such as 2023-10-01.

Issue Fields
    A comma-separated list of fields to return for each issue; use it to retrieve a subset of fields. Default value: all.

Maximum Page Size
    The maximum number of items to return per page. Default value: 200.
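
For example, to ingest only recently updated issues from a single project with a JQL search, the ingestion parameters might look like the following (the project key, date, and field list are illustrative):

    Search Type: JQL
    JQL Query: project = MYPROJ AND updated >= "2024-01-01" ORDER BY updated ASC
    Issue Fields: summary,status,assignee,created,updated
    Maximum Page Size: 100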

Run the flow¶

  1. Right-click on the canvas and select Enable all Controller Services.

  2. Right-click on the imported process group and select Start. The connector starts the data ingestion.

If you need to change the issue query criteria or want to restart the ingestion from scratch, perform the following steps to ensure that the data in the destination table is consistent:

  1. Right-click on the FetchJiraIssues processor and select Stop.

  2. Right-click on the FetchJiraIssues processor and then select View State.

  3. In the State dialog box, select Clear State. This action clears the state of the processor and allows it to fetch all issues again.

  4. Optional: If you want to change the issue query criteria, right-click on the imported process group and select Parameters. Update the parameters as needed.

  5. Optional: If you want to change the destination table name, right-click on the imported process group and select Parameters. Update the Destination Table parameter.

  6. Right-click on the FetchJiraIssues processor and select Start. The connector starts the data ingestion.
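
Once ingestion is running again, you can spot-check the destination from Snowflake. A minimal example query follows, assuming the hypothetical table name issues; substitute the value of your Destination Table parameter:

    SELECT COUNT(*)
    FROM jira_destination_db.jira_destination_schema.issues;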