Snowflake Migration Skill

The Snowflake Migration Skill is an AI-powered skill for Cortex Code that guides you through an end-to-end database migration to Snowflake. It provides a conversational, interactive workflow — from connecting to your source database through code conversion, deployment, data migration, and validation.

Migrations take time. A full migration — especially for large workloads with hundreds of objects — is not completed in a single session. The skill automatically tracks your progress at every step. Each time you start a session, it reads your project state and picks up exactly where you left off. There is no need to start over.


Why use the Migration Skill

Cortex Code on its own is a capable coding agent, but a database migration means coordinating dozens of tools and stages, and hundreds of objects, over days or weeks. The Migration Skill adds the structure, automation, and domain expertise that a general-purpose agent doesn't have.

What the skill gives you

| Benefit | What it means for you |
| --- | --- |
| Guided end-to-end workflow | You don't need to know the right sequence. The skill moves through connection, extraction, conversion, assessment, deployment, data migration, and validation automatically. |
| Session persistence | Close your terminal and come back tomorrow. The skill picks up exactly where you left off: no repeated setup, no lost progress. |
| SnowConvert integration | Source SQL is translated deterministically by SnowConvert before AI touches it. You start from a high-quality baseline, not a best-effort LLM rewrite. |
| Dependency-aware deployment | The skill analyzes object dependencies and builds deployment waves so objects are deployed in the right order. You don't manually sort hundreds of tables and views. |
| Two-sided testing | Functions and procedures are tested against source-side baselines automatically. Failures trigger a fix loop: the agent diagnoses, patches, and re-tests until the output matches. |
| Reusable fix rules | Every correction you make can be extracted into a rule and propagated across the entire project. The skill gets smarter as you go. |
| Zero setup | All dependencies (Python packages, SnowConvert AI, ODBC drivers) are installed automatically the first time the skill runs. |

Skill vs. plain Cortex Code

| | Migration Skill | Plain Cortex Code |
| --- | --- | --- |
| Structured multi-stage workflow | Yes: six stages from connect to migrate | No: you drive every step manually |
| Automatic state tracking | Yes: resumes across sessions | No: you restart context each session |
| SnowConvert deterministic conversion | Yes: integrated | No: you run it yourself and import results |
| Source database connectivity | Yes: connects, extracts, and migrates data | No: no built-in database connectors |
| Deployment wave planning | Yes: dependency analysis and interactive wave editor | No: you plan deployment order manually |
| Automated testing loop | Yes: baseline capture, two-sided validation, auto-fix | No: you write and run tests yourself |
| Reusable fix rules | Yes: extract, search, apply, propagate | No: fixes are one-off |

How people use the skill

There are three common ways to start a migration. The skill adapts to each.

Start a new migration (greenfield)

You have a source database and want to migrate it to Snowflake from scratch. The skill walks you through every stage: connect to the source, extract objects, convert code, assess the workload, deploy, migrate data, and validate.

```
Use the migration-guide skill to migrate a database
```

Resume an in-progress migration

You already started a migration in a previous session — maybe days or weeks ago. The skill reads your project state and picks up at the exact point where you stopped. No re-extraction, no re-conversion.

```
Continue
```

Import an existing migration

You already have .sql files from a previous SnowConvert run or another source. The skill imports them into a project and picks up at the conversion or deployment stage.

```
Import SQL files from ./my-exported-scripts/
```

What you can do

The skill guides you through the full migration lifecycle and lets you jump directly to any stage at any time. You can pause and resume across sessions, and multiple users can collaborate on the same project — the skill tracks which code units each person is working on so effort isn’t duplicated.

| Capability | Description |
| --- | --- |
| Connect to source | Set up a connection to your source database with credentials stored securely for reuse |
| Extract source code | Pull DDL and stored procedures directly from a live database, or import local .sql files |
| Convert code | Translate source SQL to Snowflake-compatible SQL via SnowConvert, with a full EWI report |
| AI-assisted conversion | AI explains remaining conversion issues, suggests fixes, and applies them interactively |
| Assess workloads | Generate an interactive HTML report covering deployment waves, object exclusions, dynamic SQL patterns, and SSIS/Informatica ETL analysis |
| Deploy objects | Deploy converted tables, views, functions, and procedures to Snowflake wave by wave |
| Migrate data | Copy rows from source tables to Snowflake with automatic row-count validation |
| Test functions and procedures | Capture source-side baselines and run two-sided validation to confirm output equivalence |
| Fix and rule engine | Create reusable fix rules from corrections you make and propagate them across the project automatically |
| Convert ETL pipelines | Translate SSIS packages and Informatica workflows to dbt, using deterministic conversion with optional AI-assisted remediation (see the sketch after this table) |
| Validate data | Compare row counts and data between source and Snowflake tables after migration to confirm completeness |
| Repoint reports | Repoint Power BI reports to use Snowflake as the data source |
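On the ETL side, each SSIS or Informatica pipeline is translated into one or more dbt models. As a rough illustration, with hypothetical model and source names rather than actual converter output, a simple SSIS data flow built from a source, a lookup, and a derived column might collapse into a single dbt SELECT:

```sql
-- models/staging/stg_orders.sql  (hypothetical dbt model)
-- The SSIS source, lookup, and derived-column steps become one SELECT.
SELECT
    o.order_id,
    o.customer_id,
    UPPER(o.status) AS status,              -- derived column
    c.region        AS customer_region      -- lookup against customers
FROM {{ source('erp', 'orders') }} AS o     -- assumes an 'erp' dbt source is declared
LEFT JOIN {{ source('erp', 'customers') }} AS c
  ON c.customer_id = o.customer_id
```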

Supported source systems

Not all capabilities are available for all source systems. The following table shows what is supported today and what is coming soon.

| Capability | SQL Server | Redshift | Teradata | Oracle | Other source systems |
| --- | --- | --- | --- | --- | --- |
| Code extraction | Yes | Yes | Planned | Planned | |
| Deterministic code conversion | Yes | Yes | Yes | Yes | Yes |
| AI conversion and verification | Yes | Yes | | | |
| Code deploy | Yes | Yes | Planned | Planned | |
| SSIS to dbt (deterministic) | Yes | Yes | Yes | Yes | Yes |
| Informatica to dbt (deterministic) | Yes | Yes | Yes | Yes | Yes |
| AI conversion of SSIS to dbt | Yes | Yes | Yes | Yes | Yes |
| AI conversion of Informatica to dbt | Yes | Yes | Yes | Yes | Yes |
| Cloud data migration | Yes | Yes | Planned | Planned | |
| Cloud data validation | Yes | Yes | Yes | Planned | |
| Testing framework | Yes | Yes | Planned | Planned | |
| Testing with synthetic data | Planned | Planned | | | |
| AI assessment (via Cortex Code) | Yes | Yes | Yes | Planned | |
| Power BI report repointing | Yes | Yes | Yes | Yes | Synapse and PostgreSQL only |

Other dialects with deterministic conversion support: Azure Synapse, Sybase IQ, Google BigQuery, Greenplum, Netezza, PostgreSQL, Spark SQL, Databricks SQL, Vertica, Hive, IBM DB2

Available extraction scripts: Teradata, SQL Server, Synapse, Oracle, Redshift, Netezza, Vertica, DB2, Hive, BigQuery, Databricks, Sybase IQ


Prerequisites

Before starting a migration, ensure you have:

  • Cortex Code installed (the migration skill is bundled).
  • A Snowflake account with a connection configured in ~/.snowflake/connections.toml.
  • A source database (SQL Server or Redshift) accessible from your machine.

All other dependencies (uv, scai, Python packages) are installed automatically when the skill runs for the first time.
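For the Snowflake connection (the second prerequisite above), a minimal connections.toml entry looks roughly like this. Names and values are placeholders; use whatever connection name, account identifier, and authentication method fit your environment:

```toml
# ~/.snowflake/connections.toml  (placeholder values)
[migration_target]
account   = "myorg-myaccount"
user      = "JDOE"
password  = "********"        # or configure another authenticator
warehouse = "MIGRATION_WH"
database  = "TARGET_DB"
role      = "SYSADMIN"
```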


Quick Start

1. Launch Cortex Code:

```
cortex
```

2. Start a migration:

```
Use the migration-guide skill to migrate a database
```

The migration-guide skill activates automatically, confirms plugin installation with you, registers the plugin, and walks you through the full migration workflow. If a project already exists, the agent picks up where you left off.


Migration workflow

The skill guides you through six stages. Each session starts by detecting your current progress and resuming automatically.

| Stage | Name | What happens |
| --- | --- | --- |
| 1 | Connect | Set up a connection to your source database |
| 2 | Init | Create a local migration project |
| 3 | Register | Extract DDL and code from the source, or import local .sql files |
| 4 | Convert | Translate source SQL to Snowflake-compatible SQL via SnowConvert |
| 5 | Assess | Generate an interactive report covering waves, exclusions, dynamic SQL, and ETL |
| 6 | Migrate | Deploy objects, migrate data, validate output, and fix errors, wave by wave |

Stage 6 is where the bulk of the work happens. AI-driven testing, conversion remediation, and iterative fix loops run here — often across multiple sessions and days. This is also the stage where collaboration pays off most: multiple users can work on the same project simultaneously, each picking up different code units while the skill coordinates to prevent duplicated effort.

Every session shows a live progress checklist so you always know where you stand:

```
1. Connect              Connected to SQL Server
2. Init                 Project initialized
3. Register             342 objects registered
4. Initial Conv         280/342 converted
5. Assess               Not run
6. Migrate Objects      0/120 tables deployed
```

Skills Reference

The skill is organized as a skill tree. The root skill detects your project state and delegates to the right sub-skill. You can also invoke any skill directly by describing what you want.

Setup Skills (Stages 1–5)

connection

Walks you through connecting to your source database. The agent collects credentials, tests the connection, and saves it for reuse across sessions. Supports:

  • SQL Server — configures ODBC driver, host, port, and authentication.
  • Amazon Redshift — configures host, port, database, and IAM or password authentication.

register-code-units

Gets source code into the migration project. Two paths are available:

| Path | When to use |
| --- | --- |
| Extract from database | You have a live source connection and want the agent to pull DDL and object code directly |
| Import local files | You already have .sql files on disk and want to import them into the project |

convert

Runs SnowConvert to translate your source SQL (T-SQL or Redshift SQL) into Snowflake-compatible SQL. After conversion, the agent presents:

  • Total objects converted successfully.
  • EWI (Early Warning Issue) summary broken down by severity (errors, warnings, informational).
  • A list of objects that require manual review.
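To give a feel for the deterministic step (an illustrative sketch, not literal SnowConvert output), a small T-SQL view might come out like this:

```sql
-- Source (T-SQL):
CREATE VIEW dbo.RecentOrders AS
SELECT [OrderID], ISNULL([ShipDate], GETDATE()) AS ShipDate
FROM dbo.Orders;

-- Converted (Snowflake): bracketed identifiers, ISNULL, and GETDATE()
-- replaced with their Snowflake equivalents.
CREATE OR REPLACE VIEW dbo.RecentOrders AS
SELECT "OrderID", IFNULL("ShipDate", CURRENT_TIMESTAMP()) AS ShipDate
FROM dbo.Orders;
```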

assessment

Generates an interactive multi-tab HTML report. The assessment includes four analyses that can be run individually or together:

| Analysis | What it does |
| --- | --- |
| Deployment Waves | Analyzes object dependencies to produce an ordered deployment sequence. Objects within a wave have no inter-dependencies; waves are ordered so dependencies are always deployed first. (A SQL sketch of the wave computation follows below.) |
| Object Exclusion | Identifies objects that do not need migration: temporary tables, staging objects, deprecated objects, and test artifacts. Reduces scope before deployment. |
| Dynamic SQL Analysis | Classifies and scores dynamic SQL patterns in your converted code. Identifies patterns that Snowflake handles natively, patterns requiring manual rewrite, and patterns with elevated migration complexity. |
| ETL/SSIS Assessment | Analyzes SSIS packages individually: classifies each package (Ingestion, Transformation, Export, Orchestration, Hybrid), maps control and data flow, and estimates migration effort. |
The report is generated as a single self-contained HTML file. You can iterate on the wave plan interactively — for example, reprioritizing objects, adjusting wave sizes, or relocating specific objects — before locking it for deployment.
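The wave numbering itself is straightforward to picture: an object's wave is one more than the deepest wave among its dependencies. A minimal SQL sketch of that computation, assuming hypothetical OBJECTS and OBJECT_DEPENDENCIES tables and an acyclic dependency graph:

```sql
-- Hypothetical inputs: OBJECTS(object_name) and
-- OBJECT_DEPENDENCIES(object_name, depends_on).
WITH RECURSIVE levels AS (
    -- Wave 1: objects that depend on nothing.
    SELECT o.object_name, 1 AS wave
    FROM objects o
    WHERE NOT EXISTS (
        SELECT 1 FROM object_dependencies d
        WHERE d.object_name = o.object_name
    )
    UNION ALL
    -- A dependent object lands one wave after each of its dependencies.
    SELECT d.object_name, l.wave + 1
    FROM object_dependencies d
    JOIN levels l ON l.object_name = d.depends_on
)
SELECT object_name, MAX(wave) AS wave   -- keep the deepest path per object
FROM levels
GROUP BY object_name
ORDER BY wave, object_name;
```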

Migration Skills (Stage 6)

migrate-objects

The main deploy loop. Processes all objects in the current wave in dependency order:

| Object type | What happens |
| --- | --- |
| Tables | Deployed to Snowflake, then data is migrated from the source. |
| Views | Deployed to Snowflake. Blocked views are retried after the functions and procedures they depend on pass. |
| Functions & Procedures | Deployed, tested against source output, and fixed if tests fail. The loop repeats until tests pass or the user decides to skip. |

After each wave completes, the agent automatically advances to the next wave.
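For tables, data migration ends with the automatic row-count validation mentioned earlier. In essence it compares one count per side (hypothetical object names):

```sql
-- On the source (SQL Server):
SELECT COUNT(*) AS source_rows FROM dbo.FactInternetSales;

-- On Snowflake, after the data migration step; the counts must match:
SELECT COUNT(*) AS target_rows FROM TARGET_DB.DBO.FACTINTERNETSALES;
```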

baseline-capture

Captures the expected output of source stored procedures and functions for use as test baselines. Two approaches are supported:

| Approach | When to use |
| --- | --- |
| Query Logs | You have CSV logs of real EXEC or CALL statements from your source system. The agent parses these to extract parameters and expected outputs. |
| AI-Assisted | No logs are available. A swarm of specialized agents generates test cases covering business logic, data-driven scenarios, and edge cases by analyzing the source SQL. |

Baselines are stored locally and uploaded to Snowflake so they can be used for two-sided validation (source output vs. Snowflake output) during the migrate-objects loop.
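To picture what a baseline pairs together (hypothetical procedure and parameter; the actual storage format is the skill's own):

```sql
-- Source-side call captured as the baseline (SQL Server),
-- along with the result set it returned:
EXEC dbo.GetCustomerOrders @CustomerID = 42;

-- Snowflake-side replay during the migrate-objects loop:
CALL dbo.GetCustomerOrders(42);

-- The two result sets are compared; any difference routes the
-- object into the fix loop.
```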

rule-engine

Manages reusable migration rules stored in Snowflake. Rules encode known source-to-Snowflake fix patterns and are shared across all objects in the project. Each rule can operate in two modes:

| Mode | How it works |
| --- | --- |
| Regex | A regex find-and-replace applied mechanically to SQL files |
| AI-guided | The rule provides context and strategy; the AI interprets and applies it |
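As an example of the regex mode (a hypothetical rule, not one that ships with the skill), a rule mapping the T-SQL CONVERT call to a Snowflake CAST rewrites files mechanically:

```sql
-- Rule (hypothetical):
--   pattern:     CONVERT\(\s*VARCHAR\s*,\s*([^)]+)\)
--   replacement: CAST(\1 AS VARCHAR)

-- Before:
SELECT CONVERT(VARCHAR, order_date) AS order_day FROM staging_orders;

-- After the rule is applied:
SELECT CAST(order_date AS VARCHAR) AS order_day FROM staging_orders;
```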

The rule engine has four sub-capabilities:

| Sub-skill | What it does |
| --- | --- |
| search | Scans a SQL file against all rules using regex pattern matching and Cortex semantic search. Returns matched rules ranked by relevance. |
| apply | Applies matched rules to local SQL files. Regex rules are applied automatically; AI-mode rules are shown for review before applying. Supports single-file and batch application. |
| extract | Creates a new reusable rule from a fix you just made. Works from an interactive before/after comparison or retroactively from git history. |
| propagate | Given a rule, finds every code unit in the project it applies to (via reverse regex and semantic search), then hands off to batch apply. |

Rules accumulate over the lifetime of the project. Every time the agent fixes an object and extracts a rule, that rule becomes available to all subsequent objects — reducing manual effort as the migration progresses.


What You Can Ask

You do not need to follow the prescribed path. You can ask for any capability at any time.

Status and navigation

| Prompt | What happens |
| --- | --- |
| "What is the current state?" | Shows the progress checklist |
| "What should I work on next?" | Returns the next dependency-ready object |
| "Continue" | Picks up the prescribed migration path |

Setup

"Connect to my SQL Server database"
"Extract objects from the source"
"Import SQL files from ./my-scripts/"
"Convert my source code"

Assessment

"Run a full assessment"
"Generate deployment waves"
"I want a maximum of 30 objects per wave"
"Prioritize all Payroll objects in Wave 1"
"Identify temporary and staging objects"
"Analyze dynamic SQL patterns"
"Assess my SSIS packages"

Migration

"Deploy tables"
"Migrate data"
"Deploy and test the next function"
"Capture baselines for dbo.GetCustomerOrders"

Rule engine

"Search rules for this file"
"Apply all matched rules"
"Extract a rule from my last fix"
"Propagate this rule across the project"
"Show me all rules"

Example Workload

You can use AdventureWorksDW as an example source database to try the skill end-to-end. Substitute any SQL Server or Redshift database you have access to — the skill adapts to whatever source you connect.


Troubleshooting

| Problem | Resolution |
| --- | --- |
| migration-guide skill doesn't appear | Ensure you're on the latest version of Cortex Code. Run cortex --version to confirm. |
| Plugin installation fails during skill setup | Check that python3 is available on your PATH. The skill installs dependencies via Homebrew (macOS/Linux) or winget (Windows). |
| scai not found after skill install | Run the install hook manually: migration-plugin/hooks/install-dependencies. Or install directly: brew install --cask snowflakedb/snowconvert-ai/snowconvert-ai. |
| Snowflake connection errors | Verify your connection in ~/.snowflake/connections.toml and confirm the connection name matches what you provided during setup. |
| Agent seems lost | Say "What is the current state?" so the agent re-reads project status and resets context. |

Support

For help with the migration skill, contact: snowconvert-support@snowflake.com