Set up the Atlassian Jira Cloud (Agile) flow¶
Note
This connector is subject to the Snowflake Connector Terms.
This topic describes the steps to install and configure the Atlassian Jira Cloud (Agile) flow, the agile flow of the Openflow Connector for Jira Cloud. The core flow is documented separately in Set up the Atlassian Jira Cloud (Core) flow.
The agile flow is independent of the core flow. It uses its own API token, parameter contexts, state service, and Snowflake destination configuration. Both flows can write to the same Snowflake database and schema, since they create tables with different names.
Prerequisites¶
Ensure that you have reviewed About Openflow Connector for Jira Cloud.
Ensure that you have Set up Openflow - BYOC or Set up Openflow - Snowflake Deployments.
If using Openflow - Snowflake Deployments, ensure that you’ve reviewed configuring required domains and have granted access to the required domains for the Jira Cloud connector.
Get the credentials¶
As a Jira Cloud administrator, perform the following tasks in your Atlassian account. You can reuse the API token from the core flow or create a separate token. Either way, the two flows share the same underlying Jira API rate limit.
Navigate to the API tokens page.
Select Create API token with scopes.
In the Create an API token dialog box, provide a descriptive name and an expiration date for the API token. The expiration can range from 1 to 365 days.
Select the API token app Jira.
Select the agile scopes listed in Required API scopes.
Select Create token.
In the Copy your API token dialog box, select Copy to copy your generated API token and then paste the token to the connector parameters, or save it securely.
Select Close to close the dialog box.
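Before configuring the connector, you can confirm that the token works by calling Jira’s GET /rest/api/3/myself endpoint (the same endpoint the flow’s startup verification uses) with HTTP Basic authentication. The following sketch builds the Authorization header and an equivalent curl command; the email, token, and site name are placeholder values:

```python
import base64

# Placeholder credentials -- substitute your Atlassian account email,
# the API token you just copied, and your Jira Cloud site name.
email = "admin@example.com"
api_token = "your-api-token"
site = "your-site.atlassian.net"

# Jira Cloud API tokens use HTTP Basic auth: base64("email:token").
credentials = base64.b64encode(f"{email}:{api_token}".encode()).decode()
auth_header = f"Basic {credentials}"

# Equivalent manual check from a shell; a valid token returns the
# JSON profile of the token owner.
curl_command = (
    f'curl -s -H "Authorization: {auth_header}" '
    f'"https://{site}/rest/api/3/myself"'
)
print(curl_command)
```

A 401 response from this endpoint indicates a bad email/token pair; a 200 response with your profile JSON means the credentials are ready to paste into the connector parameters.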
Required API scopes¶
The agile flow always requires the following baseline Jira API scopes:
read:board-scope:jira-software, read:board-scope.admin:jira-software, read:project:jira (covers the always-created BOARD table)

read:jira-user (covers the connection verification that runs at startup against GET /rest/api/3/myself)
The API token owner additionally needs the Browse projects Jira permission on every project whose boards you want to ingest, as well as access to each board’s saved filter (used when reading board configuration).
Some optional tables require additional scopes on top of the baseline:
| Table (Enabled Tables value) | Additional Jira API scope | Notes |
|---|---|---|
|  |  | No additional permission required. |
|  | None. | No additional permission required. |
|  |  | Issues that fail per-issue permission checks (for example, issue-level security) are skipped silently. |
If you reuse a single API token across both flows, combine these scopes with the core flow scopes documented in Required API scopes.
Tokens without scopes are also supported and grant access based solely on the API token owner’s permissions. However, tokens with scopes are recommended for fine-grained access control.
Set up Snowflake account¶
If you’ve already completed the Snowflake account setup for the core flow, you can reuse the same role, service user, key pair, database, schema, and warehouse for the agile flow. The agile flow parameters point at this same Snowflake configuration.
Otherwise, as a Snowflake account administrator, perform the following tasks:
Create a new role or use an existing role.
Create a new Snowflake service user with the type as SERVICE.
Grant the Snowflake service user the role you created in the previous step.
Configure key-pair authentication for the Snowflake service user created in step 2.
Configure a secrets manager supported by Openflow (recommended), such as AWS Secrets Manager, Azure Key Vault, or HashiCorp Vault, and store the public and private keys in the secret store.
Note
If, for any reason, you don’t want to use a secrets manager, then you are responsible for safeguarding the public and private key files used for key-pair authentication according to your organization’s security policies.
After the secrets manager is configured, determine how you will authenticate to it. On AWS, use the EC2 instance role associated with Openflow; this way, no other secrets have to be persisted.
In Openflow, configure a Parameter Provider associated with this secrets manager from the main menu (⋮) in the upper-right corner. Navigate to Controller Settings » Parameter Provider, and then fetch your parameter values.
At this point, all credentials can be referenced with the associated parameter paths and no sensitive values need to be persisted within Openflow.
If any other Snowflake users require access to the tables ingested by the connector (for example, for custom processing in Snowflake), then grant those users the role created in step 1.
Create a database and schema in Snowflake for the connector to store ingested data. Grant the following Database privileges to the role created in the first step.
Create a warehouse that the connector will use or use an existing one. Start with the smallest warehouse size, then experiment with size depending on the amount of data transferred. Large data volumes typically scale better with multi-cluster warehouses, rather than larger warehouse sizes.
Ensure that the user with the role used by the connector has the required privileges to use the warehouse. If not, grant those privileges to the role.
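The account-setup steps above can be sketched as a short script that emits the corresponding SQL. All object names below are hypothetical, and the privilege set (USAGE on the database and schema plus CREATE TABLE) is an assumption; confirm the exact privilege list for your deployment before running the statements:

```python
# Sketch only: hypothetical object names, assumed privilege set.
role, user = "JIRA_AGILE_ROLE", "JIRA_AGILE_USER"
database, schema, warehouse = "JIRA_DB", "JIRA_AGILE", "JIRA_WH"

setup_sql = [
    # Step 1: role for the connector.
    f"CREATE ROLE IF NOT EXISTS {role};",
    # Step 2: service user (TYPE = SERVICE, key-pair auth configured separately).
    f"CREATE USER IF NOT EXISTS {user} TYPE = SERVICE DEFAULT_ROLE = {role};",
    f"GRANT ROLE {role} TO USER {user};",
    # Database and schema for ingested data.
    f"CREATE DATABASE IF NOT EXISTS {database};",
    f"CREATE SCHEMA IF NOT EXISTS {database}.{schema};",
    f"GRANT USAGE ON DATABASE {database} TO ROLE {role};",
    f"GRANT USAGE ON SCHEMA {database}.{schema} TO ROLE {role};",
    f"GRANT CREATE TABLE ON SCHEMA {database}.{schema} TO ROLE {role};",
    # Warehouse: start with the smallest size, as advised above.
    f"CREATE WAREHOUSE IF NOT EXISTS {warehouse} WAREHOUSE_SIZE = XSMALL;",
    f"GRANT USAGE ON WAREHOUSE {warehouse} TO ROLE {role};",
]
print("\n".join(setup_sql))
```

If you already completed this setup for the core flow, none of these statements are needed; the agile flow can point at the same role, user, database, schema, and warehouse.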
Set up the connector¶
The agile flow is shipped as the Atlassian Jira Cloud (Agile) process group. As a data engineer, perform the following tasks to install and configure it.
Install the connector¶
To install the connector, do the following as a data engineer:
Navigate to the Openflow overview page. In the Featured connectors section, select View more connectors.
On the Openflow connectors page, find the connector and select Add to runtime.
In the Select runtime dialog, select your runtime from the Available runtimes drop-down list and select Add.
Note
Before you install the connector, ensure that you have created a database and schema in Snowflake for the connector to store ingested data.
Authenticate to the deployment with your Snowflake account credentials and select Allow when prompted to allow the runtime application to access your Snowflake account. The connector installation process takes a few minutes to complete.
Authenticate to the runtime with your Snowflake account credentials.
The Openflow canvas appears with the connector process group added to it.
After import, the agile flow appears on the canvas as the Atlassian Jira Cloud (Agile) process group.
Configure the connector¶
Right-click on the imported Atlassian Jira Cloud (Agile) process group and select Parameters.
Populate the required parameter values as described in Flow parameters.
Flow parameters¶
The agile flow uses its own separate parameter contexts. The Jira credentials and Snowflake destination must be configured independently from the core flow. Both flows can point to the same Snowflake destination database and schema.
Jira Cloud (Agile) Source Parameters: Used to establish connection with the Jira API.
Jira Cloud (Agile) Destination Parameters: Used to establish connection with Snowflake.
Jira Cloud (Agile) Ingestion Parameters: Used to define the configuration of data ingested from Jira.
Jira Cloud (Agile) Source Parameters¶
| Parameter | Description |
|---|---|
| Jira Email | Email address for the Atlassian account used for authentication. |
| Jira API Token | API access token for your Atlassian Jira account. See Required API scopes for the scopes to configure. |
| Environment URL | URL to the Atlassian Jira environment. For example, |
Jira Cloud (Agile) Destination Parameters¶
| Parameter | Description | Required |
|---|---|---|
| Destination Database | The database where data will be persisted. It must already exist in Snowflake. The name is case-sensitive. For unquoted identifiers, provide the name in uppercase. | Yes |
| Destination Schema | The schema where data will be persisted, which must already exist in Snowflake. The name is case-sensitive. For unquoted identifiers, provide the name in uppercase. See the following examples: | Yes |
| Snowflake Authentication Strategy | When using: | Yes |
| Snowflake Account Identifier | When using: | Yes |
| Snowflake Private Key | When using: | No |
| Snowflake Private Key File | When using: | No |
| Snowflake Private Key Password | When using | No |
| Snowflake Role | When using | Yes |
| Snowflake Username | When using | Yes |
| Oversized Value Strategy | Determines how the connector handles values that exceed its internal size limits (16 MB) during replication. Possible values are: | No |
| Snowflake Warehouse | Snowflake warehouse used to run queries. | Yes |
Jira Cloud (Agile) Ingestion Parameters¶
| Parameter | Description |
|---|---|
| Enabled Tables | Comma-separated list of optional tables to populate. Ingestion of Default value: |
| Merge Interval | Time interval between journal-to-destination merge operations. When a merge runs, the Snowflake warehouse resumes. The merge is skipped if no new data has been loaded since the previous merge. Default value: |
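Because Enabled Tables is a plain comma-separated string, a typo silently disables a table. The following sketch normalizes and validates a candidate value before you paste it into the parameter context. The optional table names (SPRINT, BOARD_SPRINT, BOARD_PROJECT, BOARD_ISSUE) are taken from the destination-tables note in this topic; the helper function itself is hypothetical:

```python
# Optional agile-flow tables; BOARD itself is always created.
OPTIONAL_TABLES = {"SPRINT", "BOARD_SPRINT", "BOARD_PROJECT", "BOARD_ISSUE"}

def validate_enabled_tables(value: str) -> list[str]:
    """Normalize a comma-separated Enabled Tables value and reject unknown names."""
    names = [item.strip().upper() for item in value.split(",") if item.strip()]
    unknown = [name for name in names if name not in OPTIONAL_TABLES]
    if unknown:
        raise ValueError(f"Unknown table names: {unknown}")
    return names

print(validate_enabled_tables("sprint, board_sprint"))  # ['SPRINT', 'BOARD_SPRINT']
```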
Run the flow¶
Right-click on the canvas and select Enable all Controller Services.
Right-click on the Atlassian Jira Cloud (Agile) process group and select Start. The flow starts the data ingestion.
On first run, the flow creates the required Snowflake tables in the destination schema. See Destination tables for the full list of tables created by the agile flow and the parameters that control which optional tables are populated.
Resetting the connector state¶
If you want to restart the ingestion from scratch, clear the agile flow’s ingestion state. The agile flow uses its own centralized state service rather than per-processor state.
To reset the state, perform the following steps:
Right-click the Atlassian Jira Cloud (Agile) process group and select Stop.
Navigate to the Controller Settings for the process group.
Find the StandardJiraIngestionStateService controller service and select View State.
Select Clear State. This clears the agile flow’s ingestion tracking.
Optionally, update the connector parameters if needed.
Right-click the Atlassian Jira Cloud (Agile) process group and select Start.
Note
The agile flow’s destination tables (BOARD, SPRINT, BOARD_SPRINT, BOARD_PROJECT,
BOARD_ISSUE) are fully refreshed on every scheduled run, regardless of whether you clear the
state.
Next steps¶
Set up the Atlassian Jira Cloud (Core) flow if you haven’t yet installed the core flow.
Migrate from the legacy Openflow Connector for Jira Cloud if you’re moving from a previous version of the Jira Cloud connector.