SnowConvert AI CLI (scai) Command Reference

SnowConvert AI (scai) is a CLI tool for accelerated database migration to Snowflake. It manages end-to-end migration workflows including code extraction from source databases, automated conversion to Snowflake SQL, AI-powered code improvement, deployment, data migration, and validation.


Global Options

These options are available on every scai command.

-h, --help
    Show help message

-v, --version
    Display version information

--log-debug
    Enable debug-level logging. Can also be set via the SCAI_LOG_LEVEL env var (accepts: verbose, debug, information, warning, error, fatal).
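
For example, debug logging can be turned on for a whole shell session through the environment variable instead of passing --log-debug to each command:

```shell
# Every scai command run in this session will now log at debug level.
export SCAI_LOG_LEVEL=debug
echo "$SCAI_LOG_LEVEL"   # prints: debug
```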


Quick Start

This is the basic workflow for getting started with scai. For a more detailed walkthrough, see the quick start guide.

1. Create a project (use -c to set a default Snowflake connection):

scai init -n <name> -l <language> -c <connection>

2. Add source code:

For full migration (SQL Server, Redshift) – extract directly from the source database:

scai code extract

For other languages – add source files from disk:

scai code add -i <path>

3. Convert to Snowflake SQL:

scai code convert

Optional additional steps:

# Run AI-powered improvement
scai ai-convert start -w

# Accept AI fixes (after ai-convert completes)
scai ai-convert accept --all

# Deploy to Snowflake
scai code deploy --all

Tip: Using -c <connection> during scai init saves the Snowflake connection in the project, so you don’t need to specify it for each command.

Run scai <command> -h for detailed help on any command.


Commands

scai init

Create a new migration project in the specified directory (or the current directory if PATH is omitted).

scai init [PATH] -l <LANGUAGE> [-n <NAME>] [-i <INPUT_PATH>] [--skip-split] [-c <CONNECTION>]

Prerequisites:

  • Target directory must not contain an existing project

  • Valid source language must be specified (see Supported Languages)

Options:

[PATH]
    Directory to create the project in. If omitted, uses the current directory.

-n, --name <NAME>
    Project name. If omitted, defaults to the target folder name.

-l, --source-language <LANGUAGE>  (required)
    Source language for the project.

-i, --input-code-path <PATH>
    Path to source code files to add during initialization. SQL Server and Redshift sources are processed through the arrange and assess pipeline; other languages are copied directly to source/.

--skip-split  (default: false)
    Skip the arrange/split phase when source code is already split (SQL Server and Redshift only). Requires --input-code-path.

-c, --connection <NAME>
    Snowflake connection name to save as project default.

Behavior notes:

  • Creates the project directory structure and configuration files.

  • When --input-code-path is provided: SQL Server and Redshift run the arrange and parse-and-assess pipeline, promote processed files to source/, and generate a code unit registry. Other languages copy source directly to source/.

  • When --skip-split is used with --input-code-path (SQL Server and Redshift only), skips the arrange/split phase, promotes raw source directly to source/, runs assessment only for code unit registry generation, and marks the project as type Full (new folder structure).

  • Redshift source files require paired SC tags (e.g., -- <sc-table> table_name </sc-table>) for the arrange step. If validation or arrange fails, the project is still created but source code is not added. Recovery: fix source files and run scai code add -i <path>, or use --skip-split if the code is already split.
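
As a sketch, a Redshift source file prepared for the arrange step might look like the following (the table definition is hypothetical; only the paired SC tag comment follows the format shown above):

```sql
-- <sc-table> public.orders </sc-table>
CREATE TABLE public.orders (
    order_id   INTEGER NOT NULL,
    order_date DATE
);
```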

Project folder structure:

.scai/           Project configuration
.scai/config/    project.yml and local settings
artifacts/       Intermediate processing artifacts
source/          Processed source code (populated by --input-code-path or code add)
snowflake/       Converted code, reports, and logs

Examples:

# Create a project in a new folder (recommended)
scai init my-project -l Teradata

# Create a project in the current directory
scai init -l Teradata

# Create project with source code
scai init my-project -l Oracle -i /path/to/code

# Create project with pre-split source code (skip arrange phase)
scai init my-project -l SqlServer -i /path/to/code --skip-split

# Create project with a specific Snowflake connection
scai init my-project -l Oracle -c my-snowflake-conn

scai project

View and manage project configuration.

scai project info

Display project details including name, source language, and status.

scai project info

Prerequisites:

  • Must be run from within a migration project directory.

Output fields: Project Name, Project ID, Source Language, Snowflake Connection, Project Root.

Examples:

scai project info

scai project set-default-connection

Set the default Snowflake connection for the current project.

scai project set-default-connection -c <CONNECTION>

Prerequisites:

  • A migration project initialized with scai init.

  • Snowflake connection available in connections.toml or config.toml.

Options:

-c, --connection <NAME>  (required)
    Name of the Snowflake connection to set as the project default.

Connection precedence:

  1. -c/--connection option (per-command override)

  2. Project connection (set by this command)

  3. Default TOML connection
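
A minimal connections.toml entry might look like this (the connection name and values are placeholders; the recognized keys follow the standard Snowflake connections.toml format):

```toml
[my-snowflake]
account = "myorg-myaccount"
user = "migration_user"
warehouse = "MIGRATION_WH"
database = "MIGRATION_DB"
role = "MIGRATION_ROLE"
```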

Examples:

# Set project default connection
scai project set-default-connection -c my-snowflake

# Change to production connection
scai project set-default-connection -c prod-snowflake

scai connection

Manage source database connections (Redshift, SQL Server).

scai connection add-sql-server

Add a new SQL Server source database connection.

scai connection add-sql-server [OPTIONS]

Prerequisites:

  • Network access to the SQL Server instance.

  • For Windows auth: valid domain credentials.

  • For standard auth: SQL Server username and password.

Authentication methods: Windows Authentication (Integrated Security), Standard Authentication (username/password).

Operation modes: Interactive (prompts for all required information – recommended) or Inline (command-line options for automation/CI).

Connections are saved to ~/.snowflake/connections.toml (or project-local).

Options:

-c, --connection <NAME>
    Name for this connection.

--auth <AUTH>
    Authentication method (windows, standard).

--user <USER>
    Username.

--database <DATABASE>
    Database name.

--connection-timeout <SECONDS>
    Connection timeout in seconds.

--server-url <URL>
    SQL Server URL.

--port <PORT>
    Port number.

--password <PASSWORD>
    Password.

--trust-server-certificate
    Trust server certificate.

--encrypt
    Encrypt connection.

Examples:

# Add connection interactively (recommended)
scai connection add-sql-server

# Windows Authentication
scai connection add-sql-server --connection my-sqlserver --auth windows --server-url localhost --database mydb

# Standard Authentication
scai connection add-sql-server --connection my-sqlserver --auth standard --server-url localhost --database mydb --user sa

scai connection add-redshift

Add a new Redshift source database connection.

scai connection add-redshift [OPTIONS]

Prerequisites:

  • Network access to the Redshift cluster/serverless endpoint.

  • For IAM auth: AWS credentials configured (AWS CLI or environment variables).

  • For standard auth: username and password.

Authentication methods: IAM Serverless (AWS IAM with Redshift Serverless), IAM Provisioned (AWS IAM with Provisioned Cluster), Standard (username/password).

Operation modes: Interactive (recommended) or Inline (for automation/CI).

Connections are saved to ~/.snowflake/connections.toml (or project-local).

Options:

-c, --connection <NAME>
    Name for this connection.

--auth <AUTH>
    Authentication method (iam-serverless, iam-provisioned-cluster, standard).

--user <USER>
    Username.

--database <DATABASE>
    Database name.

--connection-timeout <SECONDS>
    Connection timeout in seconds.

--workgroup <NAME>
    Redshift Serverless workgroup name.

--cluster-id <ID>
    Redshift Provisioned Cluster ID.

--region <REGION>
    AWS region.

--access-key-id <KEY>
    AWS Access Key ID.

--secret-access-key <KEY>
    AWS Secret Access Key.

--host <HOST>
    Redshift host.

--port <PORT>
    Port number.

--password <PASSWORD>
    Password.

Examples:

# Add connection interactively (recommended)
scai connection add-redshift

# IAM Serverless (inline)
scai connection add-redshift --connection my-redshift --auth iam-serverless \
  --workgroup my-workgroup --database mydb --region us-east-1

scai connection set-default

Set the default source connection for a database type.

scai connection set-default -l <LANGUAGE> -c <CONNECTION>

Prerequisites:

  • Connection already added with scai connection add-redshift or scai connection add-sql-server.

Options:

-l, --source-language <LANGUAGE>  (required)
    Database type of the connection (SqlServer, Redshift).

-c, --connection <NAME>  (required)
    Name of the source connection to set as default.

Examples:

# Set default Redshift connection
scai connection set-default -l redshift --connection prod

# Set default SQL Server connection
scai connection set-default -l sqlserver --connection dev

scai connection list

List connections for a given source database.

scai connection list [-l <LANGUAGE>]

Options:

-l, --source-language <LANGUAGE>
    Source language of the connection (SqlServer, Redshift). If omitted, shows a summary of all connections.

Output: Table with columns: Name, Default, Host, Database.

Examples:

# List all connections summary
scai connection list

# List Redshift connections
scai connection list -l redshift

# List SQL Server connections
scai connection list -l sqlserver

scai connection test

Test a source database connection.

scai connection test -l <LANGUAGE> [-c <CONNECTION>]

Prerequisites:

  • Connection already configured.

  • Network access to the database.

Options:

-l, --source-language <LANGUAGE>  (required)
    Source language of the connection (SqlServer, Redshift).

-c, --connection <NAME>
    Name of the connection to test.

Examples:

# Test SQL Server connection
scai connection test -l sqlserver -c my-sqlserver

# Test Redshift connection
scai connection test -l redshift -c my-redshift

scai code

Code operations: add, extract, convert, deploy, find, accept, where, resync.

scai code add

Add source code from an input file or directory to the project’s source/ folder.

scai code add -i <INPUT_PATH> [--skip-split] [OPTIONS]

Prerequisites:

  • A migration project initialized with scai init.

  • Input must be a valid SQL source file or a directory containing SQL source files.

Options:

-i, --input-path <PATH>  (required)
    Path to a source code file or directory to add to the project.

--overwrite  (default: false)
    Overwrite existing files in the project’s source/ folder if they conflict.

--skip-split  (default: false)
    Skip the arrange/split phase when source code is already split (SQL Server and Redshift only).

--source-id <SOURCE_ID>
    Identifier for the source system (e.g., server hostname). Recorded in the code unit registry under codeStatus.registration.sourceId. Defaults to the local machine name.

Behavior notes:

  • SQL Server and Redshift: runs arrange-only, produces artifacts/source_raw_Processed/, merges into source/.

  • Other languages: copies source directly into source/.

  • Checks for conflicting files when destination folders are non-empty (unless --overwrite is set).

Examples:

# Add source code to project
scai code add -i /path/to/source/code

# Add pre-split source code (skip arrange phase)
scai code add -i /path/to/source/code --skip-split

# Add a single file
scai code add -i /path/to/script.sql

# Add code overwriting existing files
scai code add -i /path/to/source/code --overwrite

# Add code with a source identifier for traceability
scai code add -i /path/to/source/code --source-id prod-sql-server-01

scai code extract

Extract code from the source database.

scai code extract [OPTIONS]

Supported languages: SQL Server, Redshift.

Prerequisites:

  • A migration project initialized with scai init.

  • Source database connection configured (use scai connection add-redshift or scai connection add-sql-server).

  • Network access to the source database.

Options:

-s, --source-connection <NAME>
    Name of the source connection to extract code from.

--schema <SCHEMA>
    Schema name to extract code from.

-t, --object-type <TYPES>
    Object types to extract (comma-separated, e.g., TABLE,VIEW,PROCEDURE).

-n, --name <PATTERN>
    Filter objects by name. Supports substring match or wildcard patterns with * (e.g., emp or Get*Data).

-i, --interactive  (default: false)
    Interactive mode: browse and select schemas, object types, and filter by name.

--source-id <SOURCE_ID>
    Identifier for the source system. Recorded in the code unit registry under codeStatus.registration.sourceId. Defaults to the server hostname from the source connection.

Interactive mode:

Requires an interactive terminal. In non-interactive or CI environments, use --schema, --object-type, and --name instead.

  • Pre-fetch phase: prompt for schema (or leave empty for all) and multi-select object types to scope the catalog query.

  • Post-fetch phase: multi-select schemas to include, optional name filter (wildcard * supported), summary table, then confirm extraction.

Options --schema, -t/--object-type, and -n/--name pre-fill the interactive prompts when used with -i.

Output structure:

source/
  └── <database>/
      └── <schema>/
          └── <type>/
              └── *.sql

Examples:

# Interactive extraction (browse and select)
scai code extract -i

# Interactive with pre-filled schema
scai code extract -i --schema public

# Extract tables from a schema
scai code extract --schema public --object-type TABLE

# Extract tables and views
scai code extract --object-type TABLE,VIEW

# Extract from all schemas
scai code extract

# Extract code with a custom source identifier
scai code extract --source-id prod-redshift-cluster

scai code convert

Transform source database code to Snowflake SQL.

scai code convert [OPTIONS]

Prerequisites:

  • A migration project initialized with scai init.

  • Source code in the source/ folder (from scai code extract, scai code add, or manual copy).

Options:

-h, --help
    Display all conversion settings available for the project’s source language.

-e, --etl-replatform-sources-path <PATH>
    Path to ETL replatform source files for cross-project code analysis.

-p, --powerbi-repointing <PATH>
    Path to Power BI files for input repointing.

-x, --show-ewis
    Show detailed EWI (Errors, Warnings, and Issues) table instead of summary.

--context-path <PATH>
    Path to read migration context from. Defaults to .scai/conversion-context. Generated context is always written to .scai/conversion-context.

--overwrite-working-directory
    Overwrite the output files in the snowflake/ directory and the Code Unit Registry files.

--where <WHERE>
    SQL-like filter to select which code units to convert (see WHERE Clause Reference). Only matched units are transformed; dependencies are still parsed for symbol resolution.

Dialect-specific settings:

Additional options are dynamically loaded based on the project’s source language. Run scai code convert --help within a project to see all available options for that dialect.

Common options available across multiple dialects:

-m, --comments
    Comment nodes that have missing dependencies. (Dialects: SQL Server, Oracle, Teradata)

--encoding <ENCODING>
    File encoding for source files (default: UTF-8). (Dialects: all)

-s, --customschema <SCHEMA>
    Custom schema name to apply. (Dialects: SQL Server, Oracle, Teradata)

-d, --database <DATABASE>
    Custom database name to apply. (Dialects: SQL Server, Oracle, Teradata)

--useexistingnamequalification
    Preserve existing name qualification from input code. Must be used with -d or -s. (Dialects: SQL Server, Oracle, Teradata)

--renamingfile <PATH>
    Path to a file that specifies new names for objects. (Dialects: Redshift, SQL Server, Teradata)

--arrange
    Arrange the code before translation. (Dialects: SQL Server, Oracle, Teradata, Redshift)

-t, --pltargetlanguage <LANGUAGE>
    Target language for stored procedure transformation (SnowScript or JavaScript). Default: SnowScript. (Dialects: SQL Server, Oracle, Teradata)

-w, --warehouse <NAME>
    Warehouse name for dynamic table refresh. Default: UPDATE_DUMMY_WAREHOUSE. (Dialects: SQL Server, Oracle, Teradata, Databricks, Spark)

--targetlag <VALUE>
    Target lag for dynamic tables (e.g., 1 day). (Dialects: SQL Server, Oracle, Teradata, Databricks, Spark)

--previewflags <FLAGS>
    Feature flags to enable Snowflake preview features. (Dialects: all)

--createestimationreports
    Generate estimation reports. (Dialects: all)

Output structure:

snowflake/
  ├── Output/                     Converted Snowflake SQL files
  │   └── <schema>/
  │       └── *.sql
  ├── Reports/
  │   ├── TopLevelCodeUnits.csv   List of all converted objects
  │   ├── Issues.csv              Conversion issues/warnings
  │   └── Summary.html            HTML conversion summary
  └── Logs/                       Conversion log files

Examples:

# Convert using project defaults
scai code convert

# Show all conversion settings for the project's dialect
scai code convert --help

# Convert with custom schema
scai code convert --customschema MY_SCHEMA

# Convert with comments on missing dependencies
scai code convert --comments

# Convert with object renaming file
scai code convert --renamingfile /path/to/renaming.json

# Convert with custom context path
scai code convert --context-path /path/to/context

# Convert only procedures
scai code convert --where "target.objectType = 'procedure'"

scai code deploy

Deploy converted SQL code to Snowflake.

scai code deploy [OPTIONS]

Prerequisites:

  • Converted code in snowflake/Output/ (from scai code convert).

  • Snowflake connection configured (set with scai init -c or project settings).

  • Appropriate Snowflake privileges (CREATE TABLE, CREATE VIEW, etc.).

Options:

-c, --connection <NAME>
    The Snowflake connection to use. Uses default if not specified.

-d, --database <NAME>
    Target database name for deployment. Also sets the connection database if not already configured.

--warehouse <WAREHOUSE>
    Warehouse for the Snowflake connection. Only applied if the connection does not already have one.

--schema <SCHEMA>
    Schema for the Snowflake connection. Only applied if the connection does not already have one.

--role <ROLE>
    Role for the Snowflake connection. Only applied if the connection does not already have one.

--where <WHERE_CLAUSE>
    SQL-like WHERE clause to filter objects to deploy (see WHERE Clause Reference).

-a, --all  (default: false)
    Deploy all successfully converted objects without selection prompt.

-r, --retry <N>  (default: 1)
    Number of retry attempts for failed object deployments.

--continue-on-error  (default: true)
    Continue deploying remaining objects even if some fail.

--include-dependencies  (default: false)
    When used with --where, also deploy the dependencies of the filtered code units. No effect without --where.

Behavior notes:

  • --warehouse, --schema, and --role temporarily set missing connection fields (in-memory only; the TOML file is not modified).

  • If the connection already has a value for an overridden field, an error is returned.

Examples:

# Deploy using default connection
scai code deploy

# Deploy all objects
scai code deploy --all

# Deploy with specific connection
scai code deploy --connection my-snowflake

# Deploy with temporary warehouse override
scai code deploy --warehouse MY_WH

# Deploy filtered objects and their dependencies
scai code deploy --where "target.objectType = 'procedure'" --include-dependencies

scai code find

Find code units from the project’s Code Unit Registry.

scai code find [OPTIONS]

Prerequisites:

  • A migration project initialized with scai init.

  • An initialized Code Unit Registry (generated after scai code convert).

Options:

--where <WHERE_CLAUSE>
    SQL-like WHERE clause to filter objects (see WHERE Clause Reference).

--no-limit  (default: false)
    Disable the default 100-row limit on displayed objects.

Output: Table with columns: Id, Fully Qualified Name, Object Type.

Examples:

# Find all code units
scai code find

# Find code units with a specific name
scai code find --where "source.name = 'my_table'"

# Find all code units without limit
scai code find --no-limit

scai code accept

Accept the latest converted artifact versions into the snowflake/ output folder.

scai code accept [OPTIONS]

Prerequisites:

  • A migration project initialized with scai init.

  • Source code must be split and registry files must be generated (run scai code add).

  • At least one code conversion run with scai code convert.

Options:

--where <WHERE>
    Filter expression to select which objects to accept (see WHERE Clause Reference).

Behavior notes:

  • Scans the artifacts/ directory for timestamped conversion outputs.

  • For each code unit, selects the most recent version based on the timestamp folder name (yyyyMMdd.HHmmss).

  • Copies the latest .sql files into the snowflake/ folder, preserving directory structure.
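
The yyyyMMdd.HHmmss folder names sort lexicographically in chronological order, which is what makes "most recent" selection reliable; a quick shell sketch with hypothetical folder names:

```shell
# Timestamped artifact folders sort lexicographically in time order,
# so the last entry after sorting is the newest conversion run.
printf '%s\n' 20240301.120000 20240229.235959 20240301.083000 | sort | tail -n 1
# prints: 20240301.120000
```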

Examples:

# Accept all latest artifacts
scai code accept

scai code where

Show the WHERE clause query reference for code unit filtering.

scai code where

This command displays all queryable fields, supported operators, and usage examples for WHERE clause filtering. It does not require a project directory or network access. The reference is generated at runtime from the Code Unit Registry schema.

Field naming conventions:

  • Field names use camelCase with dot-notation: source.objectType, target.objectType, codeStatus.conversion.status, codeStatus.aiVerification.status, codeStatus.registration.status

  • Enum values are lowercase: 'table', 'procedure', 'view', 'function', 'completed', 'failed', 'pending', 'excluded'
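
Combining the two conventions, typical filter expressions look like this (the values are examples; field paths come from the list above):

```sql
-- camelCase dotted field name, lowercase enum value:
target.objectType = 'procedure'
-- status fields follow the same pattern:
codeStatus.conversion.status = 'completed'
```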

Commands that support --where:

  • scai code accept

  • scai code convert

  • scai code deploy

  • scai code find

  • scai ai-convert start

  • scai ai-convert accept

  • scai data migrate

  • scai data validate

Each accepts --where <EXPRESSION>.

Examples:

# Show WHERE clause reference
scai code where

scai code resync

Re-scan modified converted files and update issue metadata in the Code Unit Registry.

scai code resync

Prerequisites:

  • A migration project initialized with scai init.

  • Code converted with scai code convert.

Behavior notes:

  • Detects code units whose converted files have been modified.

  • Re-scans each modified file for SnowConvert issue codes (EWI, FDM, OOS, PRF).

  • Updates the issue metadata in the registry.

Examples:

scai code resync

scai ai-convert

AI-powered code improvement and test generation.

scai ai-convert start

Start AI-powered code conversion on converted code.

scai ai-convert start [OPTIONS]

Prerequisites:

  • Code converted with scai code convert (generates TopLevelCodeUnits report).

  • Snowflake connection configured with snow connection add.

  • CREATE MIGRATION privilege granted on the Snowflake account.

  • A warehouse configured in the Snowflake connection.

  • Must accept AI disclaimers (interactive prompt or -y flag).

Options:

-c, --connection <NAME>
    Snowflake connection for AI code conversion.

--selector <PATH>
    Path to object selector file (YAML). Only for code_conversion_only projects.

-i, --instructions <PATH>
    Path to instructions file with custom AI conversion configuration.

-w, --watch  (default: false)
    Display job progress until completion (may take several minutes to hours).

-y, --accept-disclaimers  (default: false)
    Accept all AI disclaimers without prompting (required for non-interactive use).

--where <WHERE_CLAUSE>
    SQL-like WHERE clause to filter objects to convert (see WHERE Clause Reference).

--warehouse <WAREHOUSE>
    Warehouse for the Snowflake connection. Only applied if the connection does not already have one.

--schema <SCHEMA>
    Schema for the Snowflake connection. Only applied if the connection does not already have one.

--role <ROLE>
    Role for the Snowflake connection. Only applied if the connection does not already have one.

--database <DATABASE>
    Database for the Snowflake connection. Only applied if the connection does not already have one.

Testing modes:

  • Default: Tests converted code on Snowflake only.

  • Source system verification: Also runs tests against the source database (requires an instructions file).

Output structure:

ai-converted/
  └── JOB_<timestamp>_<id>/
      ├── fixed/           AI-improved SQL files by object type/schema
      └── tests_sql/       Generated regression tests by database/schema

Examples:

# Start AI code conversion
scai ai-convert start

# Start and wait for completion
scai ai-convert start -w

# Filter with selector (code_conversion_only)
scai ai-convert start --selector my-selector.yml

# Non-interactive (CI/CD)
scai ai-convert start -y -w

# Source system verification
scai ai-convert start -i config/instructions.yml

# Start with temporary warehouse override
scai ai-convert start --warehouse MY_WH

# Convert only tables using WHERE clause
scai ai-convert start --where "target.objectType = 'table'"

scai ai-convert status

Check the status of an AI code conversion job.

scai ai-convert status [JOB_ID] [OPTIONS]

Prerequisites:

  • A job started with scai ai-convert start.

  • Snowflake connection (uses the job’s connection if --connection not specified).

Options:

[JOB_ID]
    The job ID to check. If omitted, checks the last started job.

-c, --connection <NAME>
    Override the Snowflake connection.

-w, --watch  (default: false)
    Monitor progress until completion. For finished jobs, forces a server-side refresh and downloads detailed results.

--warehouse <WAREHOUSE>
    Warehouse override (if not already configured on connection).

--schema <SCHEMA>
    Schema override (if not already configured on connection).

--role <ROLE>
    Role override (if not already configured on connection).

--database <DATABASE>
    Database override (if not already configured on connection).

Examples:

# Check last job status
scai ai-convert status

# Check specific job
scai ai-convert status JOB_20260112041123_XYZ

# Wait and download results
scai ai-convert status -w

# Use different connection
scai ai-convert status -c other-snowflake

scai ai-convert cancel

Cancel a running AI code conversion job.

scai ai-convert cancel [JOB_ID] [OPTIONS]

Prerequisites:

  • A running job started with scai ai-convert start.

  • Snowflake connection (uses the job’s connection if --connection not specified).

Options:

[JOB_ID]
    The job ID to cancel. If omitted, cancels the last started job.

-c, --connection <NAME>
    Override the Snowflake connection.

--warehouse <WAREHOUSE>
    Warehouse override (if not already configured on connection).

--schema <SCHEMA>
    Schema override (if not already configured on connection).

--role <ROLE>
    Role override (if not already configured on connection).

--database <DATABASE>
    Database override (if not already configured on connection).

Examples:

# Cancel last job
scai ai-convert cancel

# Cancel specific job
scai ai-convert cancel JOB_20260112041123_XYZ

# Use different connection
scai ai-convert cancel -c other-snowflake

scai ai-convert list

List AI code conversion jobs for the current project.

scai ai-convert list [OPTIONS]

Prerequisites:

  • A migration project initialized with scai init.

  • Snowflake connection for refreshing job status.

Options:

-l, --limit <N>  (default: 10)
    Maximum number of jobs to display.

-a, --all
    Show all jobs (ignores limit).

-c, --connection <NAME>
    Override the Snowflake connection for refreshing job status.

--warehouse <WAREHOUSE>
    Warehouse override (if not already configured on connection).

--schema <SCHEMA>
    Schema override (if not already configured on connection).

--role <ROLE>
    Role override (if not already configured on connection).

--database <DATABASE>
    Database override (if not already configured on connection).

Output: Table with columns: Job ID, Status, Start Time, Duration, Objects. Possible status values: PENDING, IN_PROGRESS, FINISHED, FAILED, CANCELLED.

Examples:

# List recent jobs
scai ai-convert list

# Show all jobs
scai ai-convert list --all

# Refresh with different connection
scai ai-convert list -c other-snowflake

scai ai-convert accept

Review, compare, and accept AI-suggested fixes from a completed verification job.

scai ai-convert accept [JOB_ID] [OPTIONS]

Prerequisites:

  • A completed AI code conversion job (run scai ai-convert start first).

  • If using --selector: a selector file (code_conversion_only). Create with scai object-selector create.

  • If using --where: full migration project. Run scai code where for syntax.

Options:

[JOB_ID]
    The job ID to accept changes for. If omitted, uses the last finished job.

-i, --interactive  (default: false)
    Review each code unit one by one with options to accept, verify, or compare.

-o, --selector <PATH>
    Path to object selector file (YAML). Only for code_conversion_only.

--where <WHERE_CLAUSE>
    SQL-like WHERE clause to filter which objects to accept (see WHERE Clause Reference). Full migration projects only.

--all  (default: false)
    Replace all converted files with their AI-fixed versions without prompting.

--summary  (default: true)
    Show a summary of what would be affected without making changes.

--json  (default: false)
    Output results in JSON format (for automation). Works with --summary.

Review modes:

  • Summary (--summary, default): Preview affected code units without making changes.

  • Interactive (-i): Review each code unit with accept/verify/diff options.

  • All (--all): Accept all AI-suggested fixes without prompting.

Interactive actions:

  • [d] Diff – open diff tool to compare original and AI-fixed code

  • [v] Verify – mark as verified (you applied changes manually)

  • [a] Accept – overwrite converted file with AI fix

  • [s] Skip – decide later

  • [q] Quit – exit (progress is saved)

Examples:

# Interactive review
scai ai-convert accept -i

# Accept all fixes
scai ai-convert accept --all

# Accept from selector
scai ai-convert accept --all -o selector.yml

# Accept filtered by WHERE (full migration)
scai ai-convert accept --all --where "target.objectType = 'table'"

# Preview changes
scai ai-convert accept --summary

# JSON for automation
scai ai-convert accept --summary --json

scai data

Data operations: migrate and validate.

scai data migrate

Migrate data from the source system into a Snowflake account.

scai data migrate [OPTIONS]

Prerequisites:

  • Code converted with scai code convert (generates TopLevelCodeUnits report).

  • Code deployed with scai code deploy (creates target tables in Snowflake).

  • Source database connection configured.

  • Snowflake connection configured with INSERT privileges.

  • If using --selector: a selector file (code_conversion_only). Create with scai object-selector create.

  • If using --where: full migration project; filter tables from Code Unit Registry.

  • For Redshift: S3 bucket, Snowflake Storage Integration, and External Stage configured.

Options:

Option

Description

Required

-s, --source-connection <NAME>

Source connection to extract data from. Uses default if not specified.

No

-c, --connection <NAME>

Snowflake connection to migrate data to. Uses default if not specified.

No

-o, --selector <PATH>

Selector file for migration (code conversion only projects). If not provided, all tables are migrated.

No

--where <WHERE_CLAUSE>

SQL-like WHERE clause to filter tables from the Code Unit Registry (see WHERE Clause Reference). Full migration projects only.

No

-b, --bucket-uri <URI>

(Redshift only) S3 bucket URI for staging data (e.g., s3://my-bucket/path).

No

--stage <STAGE_NAME>

(Redshift only) Fully qualified Snowflake stage name for loading parquet files (e.g., database.schema.stage_name).

No

-i, --iam-role-arn <ARN>

(Redshift only) IAM role ARN to unload parquet files to S3.

No

--warehouse <WAREHOUSE>

Warehouse override (if not already configured on connection).

No

--schema <SCHEMA>

Schema override (if not already configured on connection).

No

--role <ROLE>

Role override (if not already configured on connection).

No

--database <DATABASE>

Database override (if not already configured on connection).

No

Examples:

# Migrate all tables
scai data migrate --source-connection my-redshift --connection my-snowflake

# Migrate selected tables (selector)
scai data migrate --source-connection my-redshift --connection my-snowflake \
  --selector my-selector.yml

# Migrate filtered by WHERE (full migration)
scai data migrate --source-connection my-redshift --connection my-snowflake \
  --where "source.schema = 'public'"

scai data validate

Compare data between source and Snowflake to verify data integrity.

scai data validate [OPTIONS]

Prerequisites:

  • Source database connection configured.

  • Snowflake connection configured.

  • Tables must exist in both source and target databases.

Options:

Option

Description

Required

-s, --source-connection <SOURCE_CONNECTION>

Source connection to use. Uses default if not specified. Ignored when --snowflake-source is specified.

No

--snowflake-source <CONNECTION_NAME>

Snowflake connection to use as the source (for Snowflake-to-Snowflake validation).

No

-c, --connection <CONNECTION>

Snowflake target connection. Uses default if not specified.

No

-d, --target-database <DATABASE>

Target Snowflake database for validation.

No

-o, --selector <PATH>

Selector file for validation (code conversion only projects).

No

--where <WHERE_CLAUSE>

SQL-like WHERE clause to filter tables from the Code Unit Registry (see WHERE Clause Reference). Full migration projects only.

No

-m, --db-mapping <MAPPING>

Database name mapping in format source:target. Can be specified multiple times.

No

-e, --schema-mapping <MAPPING>

Schema name mapping in format source:target. Can be specified multiple times.

No

-f, --config-file <CONFIG_FILE>

Path to an existing data validation config file (YAML). When provided, uses this config instead of generating one.

No

Output structure:

results/data-validation/run-YYYY-MM-DD-HH-mm-ss/
  ├── *_data_validation_report.csv   Main validation results
  ├── *_data_validation_report.html  HTML report for review
  └── logs/                          Detailed execution logs
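Because each run writes to a timestamped directory and the timestamp format sorts lexicographically, the most recent report directory can be located with a short shell sketch (the run directories below are simulated for illustration; real names depend on execution time):

```shell
# Simulated run directories (real names depend on when validation ran)
mkdir -p results/data-validation/run-2024-01-01-10-00-00
mkdir -p results/data-validation/run-2024-06-01-10-00-00

# The YYYY-MM-DD-HH-mm-ss format sorts lexicographically,
# so the last sorted entry is the newest run
latest=$(ls -d results/data-validation/run-* | sort | tail -1)
echo "$latest"
```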

Examples:

# Validate all tables from report
scai data validate

# Validate with selector file
scai data validate --selector my-tables.yml

# Validate filtered by WHERE (full migration)
scai data validate --where "source.schema = 'public'"

# With target database
scai data validate --target-database PROD_DB

# With name mappings
scai data validate --db-mapping "sourcedb:TARGETDB" --schema-mapping "dbo:PUBLIC"

# With explicit connections
scai data validate --source-connection my-sqlserver --connection my-snowflake
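The `source:target` format accepted by --db-mapping and --schema-mapping can be split with standard shell parameter expansion, for example in a wrapper script (the mapping value is illustrative):

```shell
# Illustrative only: splitting a "source:target" mapping string,
# the format accepted by --db-mapping and --schema-mapping
mapping="dbo:PUBLIC"
src="${mapping%%:*}"   # everything before the first ':'
tgt="${mapping#*:}"    # everything after the first ':'
echo "$src maps to $tgt"
```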

scai test

Generate, capture, and validate test cases for migrated stored procedures.

scai test seed

Generate YAML test case files from an execution log of stored procedure calls.

scai test seed --execution-log <EXECUTION_LOG> [--source-connection <SOURCE_CONNECTION>] [--connection <CONNECTION>] [OPTIONS]

Prerequisites:

  • A migration project initialized with scai init.

  • An execution log file produced by running the original stored procedures.

  • Code converted with scai code convert.

Options:

Option

Description

Required

Default

-e, --execution-log <EXECUTION_LOG>

Path to the execution log file.

Yes

-s, --source-connection <SOURCE_CONNECTION>

Source connection to use. Uses default if not specified.

No

-c, --connection <CONNECTION>

Snowflake connection. Uses project/default if not specified.

No

-m, --max-cases <MAX_CASES>

Maximum number of test cases to generate per procedure.

No

10

-a, --append

Append test cases to existing test files instead of replacing them.

No

Output: One YAML test case file per procedure at artifacts/<target_db>/<target_schema>/<object_type>/.../<procedure_name>.yml.

Examples:

# Generate test cases
scai test seed --execution-log artifacts/exec_log.csv

# Limit to 5 cases per procedure
scai test seed --execution-log artifacts/exec_log.csv --max-cases 5

# Append to existing test files
scai test seed --execution-log artifacts/new_exec_log.csv --append

scai test capture

Capture test baselines from the source database.

scai test capture [--source-connection <SOURCE_CONNECTION>] [--connection <CONNECTION>] [OPTIONS]

Prerequisites:

  • A migration project initialized with scai init.

  • Test YAML files in artifacts/**/test/*.yml (generated by scai test seed).

  • A configured source database connection.

Options:

Option

Description

Required

-s, --source-connection <SOURCE_CONNECTION>

Source connection to use. Uses default if not specified.

No

-c, --connection <CONNECTION>

Snowflake connection (for baseline stage upload). Uses project/default if not specified.

No

--baseline-dir <BASELINE_DIR>

Directory to write baseline files to. Defaults to {project}/.scai/baselines.

No

Output: JSON baseline files written to .scai/baselines/ (or the directory specified by --baseline-dir).

Examples:

# Capture baselines
scai test capture

# With explicit connections
scai test capture --source-connection my-sqlserver --connection my-snowflake

# Custom baseline directory
scai test capture --baseline-dir ./my-baselines

scai test validate

Validate Snowflake procedures against captured baselines.

scai test validate [--connection <CONNECTION>] [OPTIONS]

Prerequisites:

  • A migration project initialized with scai init.

  • Baselines captured with scai test capture.

  • A configured Snowflake connection.

Options:

Option

Description

Required

-c, --connection <CONNECTION>

Snowflake connection. Uses project/default if not specified.

No

--baseline-dir <BASELINE_DIR>

Directory containing baseline files. If not specified, baselines are read from the Snowflake stage.

No

--baseline-stage <BASELINE_STAGE>

Snowflake stage containing baselines. Uses the implicit default stage if not specified.

No

--pattern <PATTERN>

Regex pattern to filter test files by procedure name. Defaults to all procedures.

No

--create-schema

Create the VALIDATION schema and objects before running.

No

Examples:

# Validate all procedures
scai test validate

# Filter by procedure name
scai test validate --pattern "my_proc.*"

# Create validation schema first
scai test validate --create-schema

# Use local baselines
scai test validate --baseline-dir ./my-baselines

scai object-selector

Create selector files for filtering objects.

scai object-selector create

Create a selector file to filter objects for data migration and other operations.

scai object-selector create [OPTIONS]

Prerequisites:

  • Code converted with scai code convert (generates TopLevelCodeUnits report).

Options:

Option

Description

Required

-d, --database <NAME>

Filter objects by source database name.

No

-s, --schema <NAME>

Filter objects by source schema name.

No

-t, --type <TYPES>

Filter objects by type (comma-separated, e.g., table,view,procedure).

No

-n, --name <NAME>

Label for the selector file (becomes <name>.<timestamp>.yml). Defaults to object-selector.<timestamp>.yml.

No

Output: A YAML selector file with the following structure:

objects:
  - code_unit_id: <database>.<schema>.<name>
    type: TABLE | VIEW | PROCEDURE | ...
    source: { database, schema, name }
    target: { database, schema, name }
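For illustration, a hand-edited selector with the same shape could be written like this (all object names below are hypothetical examples, not values scai would produce for your project):

```shell
# Hand-written selector following the documented schema.
# All object names are hypothetical.
cat > my-selector.yml <<'EOF'
objects:
  - code_unit_id: salesdb.dbo.customers
    type: TABLE
    source: { database: salesdb, schema: dbo, name: customers }
    target: { database: SALESDB, schema: PUBLIC, name: CUSTOMERS }
EOF
```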

Examples:

# Create selector file
scai object-selector create

# Create with a custom name (output becomes custom-selector.<timestamp>.yml)
scai object-selector create -n custom-selector

scai query

Execute SQL queries on source database systems.

scai query -q <QUERY> -s <CONNECTION> [-l <LANGUAGE>]

Prerequisites:

  • Source database connection configured via scai connection add-sql-server or scai connection add-redshift.

  • Network access to the source database.

Options:

Option

Description

Required

-q, --query <QUERY>

SQL query to execute on the source system.

Yes

-s, --source-connection <CONNECTION>

Source connection to use for query execution.

Yes

-l, --source-language <LANGUAGE>

Source database type (SqlServer, Redshift). Auto-detected from the connection name if omitted.

No

Output: Query results printed as a formatted table (limited to 1000 rows).

Examples:

# Execute simple query
scai query -q "SELECT 1;" -s my-sqlserver

# Check table row count
scai query -q "SELECT COUNT(*) FROM customers" -s my-redshift

# Query with filter
scai query -q "SELECT * FROM orders WHERE status = 'pending'" -s my-connection

# Query with explicit source language
scai query -q "SELECT COUNT(*) FROM users" -s my-sqlserver -l SqlServer

scai logs

Display the location of CLI log files and list recent entries.

scai logs [--last <COUNT>] [--open]

Options:

Option

Description

Required

Default

--last <COUNT>

Number of recent log files to display.

No

5

--open

Open the log directory in the system file explorer.

No

false

Examples:

# Show recent log files
scai logs

# Show the last 10 log files
scai logs --last 10

# Open log directory in file explorer
scai logs --open

scai license

Install an offline license for air-gapped environments.

scai license install

Install an offline license for running conversions without online activation.

scai license install -p <LICENSE_PATH>

Prerequisites:

  • A valid offline license file (.lic) from Snowflake.

Use cases:

  • Running in air-gapped environments without internet.

  • CI/CD pipelines that cannot use online activation.

  • Environments with restricted network access.

Options:

Option

Description

Required

-p, --path <LICENSE_PATH>

Path to the license file to install.

Yes

Examples:

scai license install --path /path/to/license.lic

Supported Languages

scai supports two project types depending on the source language.

Full Migration

These languages support the complete migration workflow: code extraction from a live source database, conversion, AI improvement, deployment, data migration, and validation.

Language

SqlServer

Redshift

Code Conversion Only

These languages support code conversion from files on disk. Source code is added manually via scai code add or scai init -i.

Language

Oracle

Teradata

BigQuery

Databricks

Greenplum

Sybase

Postgresql

Netezza

Spark

Vertica

Hive

Db2
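In automation scripts, the two project types can be distinguished with a small helper; this is a sketch based only on the tables above (the function name is illustrative, not part of scai):

```shell
# Sketch: classify a source language by project type, per the tables above
is_full_migration() {
  case "$1" in
    SqlServer|Redshift) return 0 ;;  # full migration workflow
    *) return 1 ;;                   # code conversion only
  esac
}

is_full_migration SqlServer && echo "SqlServer: full migration"
is_full_migration Teradata  || echo "Teradata: code conversion only"
```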


Workflows

Full Migration (SQL Server / Redshift)

Complete migration workflow for full project types with source database connectivity.

# 1. Create project
scai init my-migration -l SqlServer

# 2. Add source connection
scai connection add-sql-server

# 3. Extract from source
scai code extract

# 4. Convert to Snowflake
scai code convert

# 5. AI improvement (optional)
scai ai-convert start -w

# 6. Accept AI fixes (optional, after step 5)
scai ai-convert accept --all

# 7. Deploy to Snowflake
scai code deploy --all

# 8. Migrate data (optional)
scai data migrate --source-connection my-sqlserver --connection my-snowflake

Code Conversion Only

Workflow for projects without source database connectivity. Source code is added from local files.

# 1. Create project
scai init my-migration -l Teradata

# 2. Add source code
scai code add -i /path/to/teradata/code

# 3. Convert to Snowflake
scai code convert

# 4. Deploy to Snowflake
scai code deploy --all

Snowflake Connection

SnowConvert AI uses the Snowflake CLI (snow) for managing Snowflake connections. This is separate from the scai CLI.

Configuration:

# Add a Snowflake connection
snow connection add

# Set default Snowflake connection
snow connection set-default <connection-name>

Usage in scai:

# Deploy with specific connection
scai code deploy -c my-snowflake

# AI convert with specific connection
scai ai-convert start -c my-snowflake

Connection precedence (highest to lowest):

  1. -c/--connection option on the scai command

  2. Project connection (set by scai project set-default-connection or scai init -c)

  3. Default TOML connection (set by snow connection set-default)
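This fallback order can be expressed as a small shell sketch (the function and connection names are illustrative, not part of scai):

```shell
# Sketch of the documented lookup order: explicit -c flag first,
# then the project connection, then the snow CLI default.
resolve_connection() {
  local flag_conn="$1" project_conn="$2" snow_default="$3"
  # ${a:-b} falls through to the next level when a level is unset/empty
  echo "${flag_conn:-${project_conn:-$snow_default}}"
}

resolve_connection "" "proj-conn" "default-conn"         # project connection wins
resolve_connection "cli-conn" "proj-conn" "default-conn" # explicit -c wins
```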

For more details on configuring Snowflake connections, see the Snowflake CLI connection documentation.