Snowflake Migration Plugin

The Snowflake Migration Plugin is an AI-powered plugin for Cortex Code that guides you through an end-to-end database migration to Snowflake. It provides a conversational, interactive workflow — from connecting to your source database through code conversion, deployment, data migration, and validation.

The plugin is organized as a skill tree: a hierarchy of AI instructions that detect where you are in the migration lifecycle and route you to the right action automatically. You can follow the prescribed path from start to finish, or jump directly to any capability at any time.

Supported sources: SQL Server, Amazon Redshift


Download

Download the latest release from:

https://snowconvert.snowflake.com/storage/linux/beta/plugins/migration-plugin-pr.zip

Prerequisites

Before starting a migration, ensure you have:

  • A Snowflake account with a connection configured in ~/.snowflake/connections.toml.

  • A source database (SQL Server or Redshift) accessible from your machine.

  • Cortex Code installed.

All other dependencies (uv, scai, Python packages) are installed automatically on first launch.
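The connection name you provide during setup must match an entry in that file. A minimal entry might look like the following (the connection name and all values are placeholders; the keys follow Snowflake's standard connections.toml schema):

```toml
[migration_demo]
account = "myorg-myaccount"
user = "jdoe"
password = "********"
warehouse = "COMPUTE_WH"
role = "SYSADMIN"
```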


Quick Start

1. Launch Cortex Code with the plugin:

cortex --plugin-dir /path/to/migration-plugin/

2. Start migrating:

Migrate my database to Snowflake

The agent reads your project state, shows a progress checklist, and picks up where you left off. If no project exists, it starts from the beginning.

Tip: You can also combine the plugin with a profile:

cortex --plugin-dir /path/to/migration-plugin/ --profile <your-profile>

Migration Workflow

The plugin guides you through six stages. At every session start, the agent detects your current progress and resumes from where you left off.

1. Connect  — Set up a connection to your source database (SQL Server or Redshift)
2. Init     — Create a local migration project
3. Register — Extract DDL and code from the source database, or import local .sql files
4. Convert  — Translate source SQL to Snowflake-compatible SQL via SnowConvert
5. Assess   — Generate an interactive assessment report covering deployment waves, object exclusions, dynamic SQL patterns, and ETL/SSIS analysis
6. Migrate  — Deploy objects, migrate data, validate output, and fix errors, wave by wave

Each time you start a session, the agent presents a live progress checklist:

✅  1. Connect             — Connected to SQL Server
✅  2. Init                — Project initialized
✅  3. Register            — 342 objects registered
◐   4. Initial Conv        — 280/342 converted
⬚   5. Assess              — Not run
⬚   6. Migrate Objects     — 0/120 tables deployed

Skills Reference

The plugin is organized as a skill tree. The root skill detects your project state and delegates to the right sub-skill. You can also invoke any skill directly by describing what you want.

Setup Skills (Stages 1–5)

connection

Walks you through connecting to your source database. The agent collects credentials, tests the connection, and saves it for reuse across sessions. Supports:

  • SQL Server — configures ODBC driver, host, port, and authentication.

  • Amazon Redshift — configures host, port, database, and IAM or password authentication.

register-code-units

Gets source code into the migration project. Two paths are available:

  • Extract from database — You have a live source connection and want the agent to pull DDL and object code directly.

  • Import local files — You already have .sql files on disk and want to import them into the project.

convert

Runs SnowConvert to translate your source SQL (T-SQL or Redshift SQL) into Snowflake-compatible SQL. After conversion, the agent presents:

  • Total objects converted successfully.

  • EWI (Early Warning Issue) summary broken down by severity (errors, warnings, informational).

  • A list of objects that require manual review.
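To illustrate what the EWI summary aggregates, a count of issue codes in a converted file can be sketched as follows (the SSC-EWI-NNNN code format and the comment markers shown are assumptions about SnowConvert's output, not a documented contract):

```python
import re
from collections import Counter

def tally_ewi_codes(sql_text: str) -> Counter:
    """Count occurrences of each EWI code in converted SQL.

    Assumes SnowConvert marks issues with codes like "SSC-EWI-0040";
    the real output format may differ."""
    return Counter(re.findall(r"SSC-EWI-\d{4}", sql_text))

converted = """
CREATE OR REPLACE PROCEDURE demo()
-- !!!RESOLVE EWI!!! /*** SSC-EWI-0040 - CURSOR USAGE ***/
-- !!!RESOLVE EWI!!! /*** SSC-EWI-0040 - CURSOR USAGE ***/
-- !!!RESOLVE EWI!!! /*** SSC-EWI-0027 - DYNAMIC SQL ***/
"""
print(tally_ewi_codes(converted).most_common())
# [('SSC-EWI-0040', 2), ('SSC-EWI-0027', 1)]
```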

assessment

Generates an interactive multi-tab HTML report. The assessment includes four analyses that can be run individually or together:

  • Deployment Waves — Analyzes object dependencies to produce an ordered deployment sequence. Objects within a wave have no inter-dependencies; waves are ordered so dependencies are always deployed first.

  • Object Exclusion — Identifies objects that do not need migration: temporary tables, staging objects, deprecated objects, and test artifacts. Reduces scope before deployment.

  • Dynamic SQL Analysis — Classifies and scores dynamic SQL patterns in your converted code. Identifies patterns that Snowflake handles natively, patterns requiring manual rewrite, and patterns with elevated migration complexity.

  • ETL/SSIS Assessment — Analyzes SSIS packages individually: classifies each package (Ingestion, Transformation, Export, Orchestration, Hybrid), maps control and data flow, and estimates migration effort.

The report is generated as a single self-contained HTML file. You can iterate on the wave plan interactively — for example, reprioritizing objects, adjusting wave sizes, or relocating specific objects — before locking it for deployment.
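Wave ordering of this kind can be sketched with a level-wise topological sort. This is a minimal illustration of the idea, not the plugin's internal algorithm:

```python
def deployment_waves(deps):
    """Group objects into waves. `deps` maps object -> set of objects it
    depends on. Objects in the same wave have no dependencies on each
    other; every dependency lands in an earlier wave."""
    remaining = {obj: set(d) for obj, d in deps.items()}
    waves = []
    while remaining:
        # Objects whose dependencies are all already deployed form the next wave.
        ready = [obj for obj, d in remaining.items() if not d]
        if not ready:
            raise ValueError("dependency cycle detected")
        waves.append(sorted(ready))
        for obj in ready:
            del remaining[obj]
        for d in remaining.values():
            d.difference_update(ready)
    return waves

deps = {
    "dim_customer": set(),
    "dim_date": set(),
    "fact_orders": {"dim_customer", "dim_date"},
    "v_orders": {"fact_orders"},
}
print(deployment_waves(deps))
# [['dim_customer', 'dim_date'], ['fact_orders'], ['v_orders']]
```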

Migration Skills (Stage 6)

migrate-objects

The main deploy loop. Processes all objects in the current wave in dependency order:

  • Tables — Deployed to Snowflake, then data is migrated from the source.

  • Views — Deployed to Snowflake. Blocked views are retried once the functions and procedures they depend on pass.

  • Functions & Procedures — Deployed, tested against source output, and fixed if tests fail. The loop repeats until tests pass or the user decides to skip.

After each wave completes, the agent automatically advances to the next wave.

baseline-capture

Captures the expected output of source stored procedures and functions for use as test baselines. Two approaches are supported:

  • Query Logs — You have CSV logs of real EXEC or CALL statements from your source system. The agent parses these to extract parameters and expected outputs.

  • AI-Assisted — No logs are available. A swarm of specialized agents generates test cases covering business logic, data-driven scenarios, and edge cases by analyzing the source SQL.

Baselines are stored locally and uploaded to Snowflake so they can be used for two-sided validation (source output vs. Snowflake output) during the migrate-objects loop.
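Two-sided validation amounts to comparing the captured baseline rows with the rows Snowflake returns. A minimal, order-insensitive sketch (not the plugin's actual comparison logic):

```python
from collections import Counter

def compare_outputs(baseline_rows, snowflake_rows):
    """Order-insensitive row comparison: report rows missing from the
    Snowflake output and unexpected extra rows; pass only when both
    sides match as multisets (duplicates count)."""
    base, snow = Counter(baseline_rows), Counter(snowflake_rows)
    missing = list((base - snow).elements())  # in baseline, not in Snowflake
    extra = list((snow - base).elements())    # in Snowflake, not in baseline
    return {"pass": not missing and not extra, "missing": missing, "extra": extra}

baseline = [("ORD-1", 120.0), ("ORD-2", 75.5)]
actual = [("ORD-2", 75.5), ("ORD-1", 120.0)]
print(compare_outputs(baseline, actual)["pass"])
# True
```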

rule-engine

Manages reusable migration rules stored in Snowflake. Rules encode known source-to-Snowflake fix patterns and are shared across all objects in the project. Each rule can operate in two modes:

Mode

How it works

Regex

A regex find-and-replace applied mechanically to SQL files

AI-guided

The rule provides context and strategy; the AI interprets and applies it
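A regex-mode rule is essentially a stored find-and-replace. As a sketch (the rule fields and the GETDATE example are illustrative, not the plugin's actual rule schema):

```python
import re

# Illustrative rule shape; the plugin's real rule schema may differ.
rule = {
    "name": "getdate-to-current_timestamp",
    "pattern": r"\bGETDATE\(\)",
    "replacement": "CURRENT_TIMESTAMP()",
}

def apply_regex_rule(sql: str, rule: dict) -> str:
    """Mechanically apply one regex rule to a SQL string."""
    return re.sub(rule["pattern"], rule["replacement"], sql, flags=re.IGNORECASE)

print(apply_regex_rule("SELECT GETDATE(), name FROM dbo.users;", rule))
# SELECT CURRENT_TIMESTAMP(), name FROM dbo.users;
```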

The rule engine has four sub-capabilities:

  • search — Scans a SQL file against all rules using regex pattern matching and Cortex semantic search. Returns matched rules ranked by relevance.

  • apply — Applies matched rules to local SQL files. Regex rules are applied automatically; AI-mode rules are shown for review before applying. Supports single-file and batch application.

  • extract — Creates a new reusable rule from a fix you just made. Works from an interactive before/after comparison or retroactively from git history.

  • propagate — Given a rule, finds every code unit in the project it applies to (via reverse regex and semantic search), then hands off to batch apply.

Rules accumulate over the lifetime of the project. Every time the agent fixes an object and extracts a rule, that rule becomes available to all subsequent objects — reducing manual effort as the migration progresses.


What You Can Ask

You do not need to follow the prescribed path. You can ask for any capability at any time.

Status and navigation

  • "What is the current state?" — Shows the progress checklist.

  • "What should I work on next?" — Returns the next dependency-ready object.

  • "Continue" — Picks up the prescribed migration path.

Setup

"Connect to my SQL Server database"
"Extract objects from the source"
"Import SQL files from ./my-scripts/"
"Convert my source code"

Assessment

"Run a full assessment"
"Generate deployment waves"
"I want a maximum of 30 objects per wave"
"Prioritize all Payroll objects in Wave 1"
"Identify temporary and staging objects"
"Analyze dynamic SQL patterns"
"Assess my SSIS packages"

Migration

"Deploy tables"
"Migrate data"
"Deploy and test the next function"
"Capture baselines for dbo.GetCustomerOrders"

Rule engine

"Search rules for this file"
"Apply all matched rules"
"Extract a rule from my last fix"
"Propagate this rule across the project"
"Show me all rules"

Example Workload

You can use AdventureWorksDW as an example source database to try the plugin end-to-end. Substitute any SQL Server or Redshift database you have access to — the plugin adapts to whatever source you connect.


Troubleshooting

  • Plugin fails on startup — Check migration-plugin/logs/install-dependencies.log. The install hook requires Homebrew (macOS/Linux) or winget (Windows).

  • scai not found — Run the install hook manually: migration-plugin/hooks/install-dependencies. Or install it directly: brew install --cask snowflakedb/snowconvert-ai/snowconvert-ai.

  • Snowflake connection errors — Verify your connection in ~/.snowflake/connections.toml and confirm the connection name matches what you provided during setup.

  • Agent seems lost — Say "What is the current state?"; the agent re-reads project status and resets context.


Support

For help with the migration plugin, contact: snowconvert-support@snowflake.com