About Openflow Connector for Jira Cloud¶
Note
This connector is subject to the Snowflake Connector Terms.
This topic describes the basic concepts of Openflow Connector for Jira Cloud, its workflow, and limitations.
The Openflow Connector for Jira Cloud ingests data from multiple Atlassian Jira Cloud entities into Snowflake. It consists of two separate flows:
Core flow — uses the Jira Cloud REST API to retrieve issues, projects, comments, changelogs, worklogs, users, deleted issues, votes, watchers, remote links, and issue security schemes.
Agile flow — uses the Jira Agile REST API to retrieve boards, sprints, board-sprint mappings, board-project mappings, and board-issue mappings.
Both flows store data in dedicated Snowflake tables with explicit column schemas. The two flows can write to the same Snowflake destination schema, since they create tables with different names.
Use this connector if you’re looking to do the following:
Centralize Jira data in Snowflake for cross-team visibility and deeper insights into engineering, support, and project workflows
Ingest a broad set of Jira entities into separate, query-ready Snowflake tables, with a selectable subset of optional tables
Extract Jira issues with per-project parallel ingestion for faster data loads
Track deleted issues via Jira audit log polling
Optionally ingest Jira Agile data using the separate agile flow
Note
If you previously deployed an earlier version of the Jira Cloud connector, see Migrate from the legacy Openflow Connector for Jira Cloud for a step-by-step migration guide.
Destination tables¶
The connector creates the following tables in the configured Snowflake destination schema.
Most tables have a fixed column schema defined by the connector. The ISSUE table is the
exception: its columns are driven by the Issue Fields configuration and may include custom
fields from your Jira instance. See Issue fields configuration for details.
Core flow tables¶
The ISSUE, PROJECT, USER, and FIELD tables are always created. The remaining
tables are created only when the corresponding table name is listed in the Enabled Tables
parameter (or, for DELETED_ISSUE, when delete tracking is enabled). See
Jira Cloud (Core) Ingestion Parameters for details.
| Table | Enabled by | Contents |
|---|---|---|
| ISSUE | Always | One row per Jira issue. The set of columns is driven by the Issue Fields configuration. |
| PROJECT | Always | One row per Jira project visible to the API token owner. |
| USER | Always | Jira users encountered during ingestion. |
| FIELD | Always | Metadata for Jira issue fields, used to drive the dynamic ISSUE table schema. |
| CHANGELOG | Enabled Tables | Issue field change history, one row per changelog entry. |
| COMMENT | Enabled Tables | Comments attached to issues, one row per comment. |
| ISSUE_REMOTE_LINK | Enabled Tables | Remote links attached to issues. |
| ISSUE_SECURITY_SCHEME | Enabled Tables | Issue-level security schemes and levels defined in the Jira instance. |
| ISSUE_VOTE | Enabled Tables | Per-issue vote records. |
| ISSUE_WATCHER | Enabled Tables | Per-issue watcher records. |
| PERMISSION | Enabled Tables | Global and project permission definitions. |
| PROJECT_COMPONENT | Enabled Tables | Components defined in each project. |
| PROJECT_VERSION | Enabled Tables | Release versions defined in each project. |
| USER_GROUP | Enabled Tables | Group memberships per user. |
| WORKLOG | Enabled Tables | Time tracking entries on issues. |
| DELETED_ISSUE | Delete tracking | Issues deleted from Jira, tracked via the audit log. |
Agile flow tables¶
The following tables are created by the agile flow. To populate these tables, install and run the agile
flow separately from the core flow. The BOARD table is always created. The remaining tables
are gated by the agile flow’s own Enabled Tables parameter.
| Table | Enabled by | Contents |
|---|---|---|
| BOARD | Always | Agile boards visible to the API token owner. |
| SPRINT | Enabled Tables | Sprints across all ingested boards. |
| BOARD_SPRINT | Enabled Tables | Board-to-sprint mappings. |
| BOARD_PROJECT | Enabled Tables | Board-to-project mappings. |
| BOARD_ISSUE | Enabled Tables | Board-to-issue mappings. |
Connector-managed columns¶
In addition to the columns derived from the Jira API response, the connector adds the following
metadata columns. _SNOWFLAKE_INSERTED_AT and _SNOWFLAKE_UPDATED_AT are added to every
destination table. _SNOWFLAKE_DELETED is added only to tables that track soft deletes. To
see which tables have it, inspect the destination tables in Snowflake.
| Column | Purpose |
|---|---|
| _SNOWFLAKE_INSERTED_AT | When the row was first inserted by the connector. |
| _SNOWFLAKE_UPDATED_AT | When the row was last updated by the connector. |
| _SNOWFLAKE_DELETED | Whether the row has been soft-deleted in the source. |
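As an example of using these metadata columns, the sketch below builds a query that skips soft-deleted rows. It assumes live rows carry _SNOWFLAKE_DELETED as FALSE or NULL, which you should verify against your own tables; the COALESCE guard covers both cases.

```python
def active_rows_query(table: str) -> str:
    # Skip rows the connector has soft-deleted. Assumes live rows carry
    # _SNOWFLAKE_DELETED = FALSE or NULL; verify against your own tables.
    return (
        f"SELECT * FROM {table} "
        f"WHERE COALESCE(_SNOWFLAKE_DELETED, FALSE) = FALSE"
    )

print(active_rows_query("ISSUE"))
```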
Workflow¶
A Jira Cloud administrator performs the following tasks:
Generates an API token within the Jira instance. This token is used by the connector for authentication. Both tokens with scopes and tokens without scopes are supported, although tokens with scopes are recommended for fine-grained access control. The required scopes depend on which features are enabled. See Required API scopes for details.
Optionally, if delete tracking is required, ensures the API token owner has the Administer Jira global permission for access to the audit log endpoint.
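The connector builds its API requests for you, but for reference, Jira Cloud accepts the email address and API token as HTTP Basic credentials. A minimal sketch of the header that results (credentials are placeholders):

```python
import base64

def jira_auth_header(email: str, api_token: str) -> dict:
    # Jira Cloud Basic auth: base64-encode "email:api_token".
    raw = f"{email}:{api_token}".encode("utf-8")
    return {
        "Authorization": "Basic " + base64.b64encode(raw).decode("ascii"),
        "Accept": "application/json",
    }

# Placeholder credentials, for illustration only:
headers = jira_auth_header("admin@example.com", "my-api-token")
```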
A Snowflake account administrator performs the following tasks:
Installs the core flow, the agile flow, or both, depending on which entities are needed.
Configures each flow:
Provides the Jira API token and email address.
Specifies the Jira instance URL.
For the core flow, optionally filters ingestion to specific projects using Project Keys Filter and configures the issue fields to ingest.
Sets the database and schema names in the Snowflake account.
Runs the flow in the Openflow canvas. Upon execution:
The core flow discovers projects and registers them in the ingestion state service, fetches issues in parallel across projects along with the per-issue tables listed in Enabled Tables (and optionally deleted issues), and fetches worklogs, users, user groups, permissions, project components, project versions, and issue security schemes on independent schedules.
The agile flow fetches boards, sprints, board-project mappings, board-sprint mappings, and board-issue mappings.
Snowflake business users can then query the destination tables directly with standard SQL, without needing to flatten JSON.
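Conceptually, fetching an entity from Jira means draining a paginated search response. The sketch below shows that loop; the `fetch_page` stub and the `{"issues": [...], "total": N}` page shape mirror Jira's search response format but are illustrative, not the connector's actual code.

```python
from typing import Callable

def fetch_all_issues(fetch_page: Callable[[int, int], dict],
                     page_size: int = 50) -> list:
    """Drain a paginated Jira-style search response.

    fetch_page(start_at, max_results) is assumed to return a dict shaped
    like Jira's search response: {"issues": [...], "total": N}.
    """
    issues, start_at = [], 0
    while True:
        page = fetch_page(start_at, page_size)
        issues.extend(page["issues"])
        start_at += len(page["issues"])
        if not page["issues"] or start_at >= page["total"]:
            break
    return issues

# Stub standing in for a real HTTP call, for illustration:
DATA = [{"key": f"PROJ-{i}"} for i in range(1, 121)]

def fake_page(start_at: int, max_results: int) -> dict:
    return {"issues": DATA[start_at:start_at + max_results], "total": len(DATA)}

all_issues = fetch_all_issues(fake_page)
```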
Openflow requirements¶
The minimum runtime size is Small. When many tables are listed in Enabled Tables, more processors run concurrently and the default Small runtime thread budget can become a bottleneck. In that case, move to a Medium runtime (or larger).
The connector supports multi-node Openflow runtimes. Each flow's state service is cluster-aware, and the flow connections use load balancing where appropriate, so work is distributed across available nodes. To run on multiple nodes, configure a static cluster size by setting Min nodes to the target node count rather than relying on autoscaling; the connector doesn't generate enough sustained load to trigger the runtime to scale up additional nodes on its own.
For Jira instances with many projects, a multi-node runtime is recommended. Per-project work is distributed across nodes, so adding nodes increases the number of projects the connector processes in parallel. Use the project count as a rough guide when sizing Min nodes.
The connector is primarily limited by Jira API rate limits rather than runtime compute capacity. Increasing the runtime size beyond Medium, or adding more nodes than the API rate budget can sustain, is unlikely to improve ingestion speed.
The core flow and agile flow can run on the same or separate Openflow runtimes. If you run both flows on the same runtime, Small isn't sufficient; use at least Medium (or larger, depending on the load).
Limitations¶
Basic authentication using an email and API token is the only supported authorization method. The connector can only ingest data accessible to the owner of the API token.
Delete tracking via the AUDIT strategy requires the API token owner to have the Administer Jira global permission. The Jira audit log has limited retention (typically 6 months for Jira Premium, less for Free or Standard plans). If the connector is paused for longer than the retention period, delete events can be missed.
Schema evolution for the ISSUE table is additive only. New columns can be added, but column type changes or removals aren't supported. If a Jira custom field type changes, the connector may require redeployment.
The ISSUE table schema is dynamic and depends on the Issue Fields configuration. Fields not included in the resolved field set aren't loaded, and there is no raw JSON fallback.
Narrowing Project Keys Filter to remove a project doesn't delete that project's rows from the destination tables. Rows that were previously ingested remain in place and are no longer updated. To remove orphaned rows after a filter change, manually delete them from the destination tables.
Agile data (boards, sprints, board mappings) is fully re-fetched on every scheduled run of the agile flow. For Jira instances with many boards, this may result in increased API usage.
Each connector instance can be associated with only one Jira Cloud site.
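For the filter-narrowing case above, cleanup is a plain DELETE against the affected tables. The sketch below assembles such a statement; the PROJECT_KEY column name is an assumption for illustration, so check the actual column names in your destination tables first, and note that this naive quoting is not safe for untrusted input.

```python
def orphan_cleanup_sql(table: str, removed_project_keys: list) -> str:
    # Build a DELETE for rows left behind after projects are removed from
    # Project Keys Filter. PROJECT_KEY is a hypothetical column name;
    # verify it against your destination tables before running.
    keys = ", ".join(f"'{k}'" for k in removed_project_keys)
    return f"DELETE FROM {table} WHERE PROJECT_KEY IN ({keys})"

stmt = orphan_cleanup_sql("ISSUE", ["OLDPROJ"])
```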
Next steps¶
Set up the Atlassian Jira Cloud (Core) flow to install the core flow.
Set up the Atlassian Jira Cloud (Agile) flow to install the agile flow.
Migrate from the legacy Openflow Connector for Jira Cloud if you’re moving from a previous version of the Jira Cloud connector.