Migrate from the legacy Openflow Connector for Jira Cloud¶
Note
This connector is subject to the Snowflake Connector Terms.
This topic describes how to migrate from the legacy Openflow Connector for Jira Cloud to the new Openflow Connector for Jira Cloud.
Overview¶
The new connector is a complete rewrite that changes how data is stored in Snowflake. It consists of two separate flows: a core flow (issues, projects, comments, changelogs, worklogs, users, votes, watchers, remote links, issue security schemes, and optionally deleted issues) and an agile flow (boards, sprints, board mappings).
The core flow and agile flow can write to the same Snowflake destination schema, since they create tables with different names. The legacy connector and the new connector can run side by side during migration — including on the same Openflow runtime — as long as they write to separate destination schemas, so you can validate the new output before decommissioning the legacy connector.
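For example, you might leave the legacy output in place and point both new flows at a fresh schema. The database and schema names below are placeholders, not names the connector requires:

```sql
-- Hypothetical layout: the legacy connector keeps writing to JIRA_DB.LEGACY,
-- while the new core flow and agile flow are both configured to write to
-- JIRA_DB.JIRA_NEW. The two new flows can share one schema because their
-- table names don't collide.
CREATE SCHEMA IF NOT EXISTS JIRA_DB.JIRA_NEW;
```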
Feature comparison¶
| Aspect | Legacy connector | New connector |
|---|---|---|
| Entities | Issues only (with optional worklog enrichment). | Core flow: issues, projects, users, comments, changelogs, worklogs, votes, watchers, remote links, security schemes, permissions, project components, project versions, user groups, and deleted issues. Agile flow: boards, sprints, and board-sprint, board-project, and board-issue mappings. |
| Concurrency | Single-threaded. | Parallel per-project issue fetching, with optional multi-node distribution. |
| Schema strategy | Raw JSON in an `ISSUE` column of type OBJECT. | Explicit column schemas per entity, evolved additively from the API responses. |
| Deletion tracking | Not supported. | Tracks deleted issues via Jira audit log polling (optional). |
| Agile data | Not supported. | Available through a separate agile flow. |
Key differences¶
Schema changes¶
The most significant difference is how data is stored in Snowflake:
| Aspect | Legacy connector | New connector |
|---|---|---|
| Issues table | Single table with an `ISSUE` column of type OBJECT that holds the raw issue JSON. | Explicit columns per field. Column names are derived from Jira field display names. No raw JSON fallback. |
| Other entities | Not available. Comments and worklogs are embedded in the issue JSON. | Separate tables per entity (for example, `COMMENT` and `WORKLOG`). |
| Views | Auto-generated flattened issue view with a `_VIEW` suffix. | No views created. Data is directly queryable from the destination tables. |
Any queries that reference the legacy `ISSUE` column (for example, `SELECT issue:fields:summary`) or the auto-generated `_VIEW` must be rewritten to use the new column names directly (for example, `SELECT SUMMARY`).
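As a sketch of such a rewrite, the same summary field might be fetched as follows. `JIRA_ISSUES` is the legacy destination table referenced later in this topic; any other names are illustrative:

```sql
-- Legacy: extract the summary from the raw ISSUE OBJECT column.
SELECT issue:fields:summary::STRING AS summary
FROM JIRA_ISSUES;

-- New: the same field is a regular column on the ISSUE table.
SELECT SUMMARY
FROM ISSUE;
```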
Parameter changes¶
The following parameters from the legacy connector are not available in the new connector:
| Legacy parameter | Current equivalent |
|---|---|
| Search Type | Removed. The new connector always fetches all issues from discovered projects. Use `Project Keys Filter` to limit which projects are ingested. |
| JQL Query | Removed. The new connector doesn’t support arbitrary JQL for issue filtering. Use `Project Keys Filter` to restrict ingestion to specific projects. |
| Project Names | Replaced by `Project Keys Filter`, which accepts project keys instead of project names. |
| Status Category | Removed. The new connector fetches all issues regardless of status. |
| Updated After | Removed. The new connector manages incremental state automatically. |
| Created After | Removed. The new connector manages incremental state automatically. |
| Destination Table | Removed. The new connector creates fixed table names per entity (for example, `ISSUE` and `COMMENT`). |
| Fetch All Worklogs | Removed. The new connector fetches all worklogs into a separate `WORKLOG` table. |
| Connection Method | Not exposed as a parameter; the connection method is managed internally by the new connector. |
The following parameters are introduced in the new connector:
| Parameter | Description |
|---|---|
| Deletes Fetch Strategy | Enables tracking of deleted issues via the Jira audit log. Not available in the legacy connector. |
| Merge Interval | Time interval between journal-to-destination merge operations. Available in both the core flow and the agile flow. |
Additionally, agile data (boards, sprints, and board mappings) is now available through a separate agile flow rather than a parameter toggle. See Set up the Atlassian Jira Cloud (Agile) flow for details on installing and configuring the agile flow.
API token scopes¶
If you’re using API tokens with scopes, the new connector may require additional scopes, depending on which features you enable. See Required API scopes for the core flow and Required API scopes for the agile flow.
Snowflake privileges¶
The new connector requires only CREATE TABLE on the destination schema. The legacy
connector additionally required CREATE VIEW to create flattened issue views. The new
connector doesn’t create views, so the CREATE VIEW privilege is no longer needed. If you’re
reusing an existing role, you can revoke CREATE VIEW after the legacy connector is
decommissioned.
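Assuming a dedicated connector role (the role, database, and schema names below are placeholders), the privilege change might look like:

```sql
-- The new connector only needs to create tables in its destination schema.
GRANT CREATE TABLE ON SCHEMA JIRA_DB.JIRA_NEW TO ROLE JIRA_CONNECTOR_ROLE;

-- After the legacy connector is decommissioned, CREATE VIEW can be revoked.
REVOKE CREATE VIEW ON SCHEMA JIRA_DB.LEGACY FROM ROLE JIRA_CONNECTOR_ROLE;
```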
Migration steps¶
1. Set up the new connector. Install the core flow on the same or a different Openflow runtime. If you need agile data, also install the agile flow. Configure both flows to write to a different destination schema than the one used by the legacy connector. This allows the legacy and new connectors to run simultaneously.

2. Map your legacy configuration to the new parameters.

   - Copy the `Jira Email`, `Jira API Token`, and `Environment URL` values from the legacy connector to the new core flow. If using the agile flow, configure these values separately for that flow as well.
   - If the legacy connector uses `Project Names`, convert them to project keys for the `Project Keys Filter` parameter.
   - If the legacy connector uses a `JQL Query`, evaluate whether `Project Keys Filter` covers your use case. If your JQL filters by criteria other than project (for example, status or custom fields), those filters aren’t available in the new connector. All matching issues from the configured projects are ingested.
   - Set `Issue Fields` to match your previous configuration. The default changed from `*all` (legacy) to `*standard`.
   - Configure the Snowflake destination parameters (database, schema, warehouse, credentials) for each flow.

3. Start the new connector. Run the core flow and allow the initial load to complete. If using the agile flow, start it as well.

4. Validate the data. Compare the data in the new destination tables against the legacy destination table to check for completeness. Expect some differences: the legacy connector didn’t track deletes, so issues that were deleted in Jira still appear in the legacy table but not in the new `ISSUE` table (or they appear with `_SNOWFLAKE_DELETED = TRUE` if delete tracking is enabled). Row counts will not match exactly when any issues have been deleted.

5. Update downstream queries. Rewrite any queries, views, dashboards, or pipelines that reference the legacy table structure. Key changes:

   - Replace references to the legacy `ISSUE` OBJECT column or the `_VIEW` with direct column references.
   - Replace `FLATTEN`-based queries with standard `SELECT` statements.
   - Add `JOIN` statements to combine data across the new entity tables (for example, join `ISSUE` with `COMMENT` on `ISSUE_ID`).
   - If you want queries to ignore deleted issues, filter on the new `_SNOWFLAKE_DELETED` column (`WHERE _SNOWFLAKE_DELETED = FALSE`). The legacy connector didn’t track deletes at all, so legacy queries against `JIRA_ISSUES` returned issues that had since been removed in Jira.

6. Stop the legacy connector. Once you’ve confirmed that the new data is complete and downstream consumers have been updated, stop the legacy connector process group. Both new flows (core and agile) can continue running independently.

7. Clean up. Optionally, drop the legacy destination table and view after confirming they’re no longer needed.
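The query rewrites in the steps above can be sketched as follows. The legacy query illustrates the embedded-JSON layout; the column names on the new tables (`ID`, `KEY`, `BODY`) are assumptions for illustration, since actual column names are derived from Jira field display names:

```sql
-- Legacy: comments were embedded in the raw issue JSON, so reading them
-- required flattening the nested comments array.
SELECT
    i.issue:key::STRING  AS issue_key,
    c.value:body::STRING AS comment_body
FROM JIRA_ISSUES i,
    LATERAL FLATTEN(input => i.issue:fields:comment:comments) c;

-- New: comments live in their own table. Join COMMENT to ISSUE on ISSUE_ID,
-- and filter out issues marked as deleted.
SELECT
    i.KEY  AS issue_key,
    c.BODY AS comment_body
FROM ISSUE i
JOIN COMMENT c
    ON c.ISSUE_ID = i.ID
WHERE i._SNOWFLAKE_DELETED = FALSE;
```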
Note
When the legacy connector and the new connector use the same Jira API token, they share the same Jira API rate limits. Running both simultaneously roughly doubles the API call volume, which may cause rate limiting on Jira instances with heavy API usage. Consider reducing the legacy ingestion frequency during the migration period, or run the new connector with a separate API token whose rate budget you can manage independently.