SnowConvert AI: Data validation [Preview]¶
Data Validation uses the data-validation skill from the migration plugin (https://docs.snowflake.com/en/migrations/snowconvert-docs/general/user-guide/snowconvert/migration-skill/skill). For more details, contact snowconvert-support@snowflake.com.
The Data Validation feature of SnowConvert AI provides a fault-tolerant, scalable way to verify that data migrated into Snowflake matches the data in the original source system. It runs as a Cloud Data Validation workflow on the same infrastructure used by Data Migration, so you can migrate and validate with the same Orchestrator and Workers.
Cloud Data Validation is designed for migration scenarios where you are moving data from a system that you plan to decommission and need confidence that the migrated data is correct before cutting over. Supported source platforms are SQL Server, Amazon Redshift, and Teradata.
Architecture overview¶
Cloud Data Validation uses the same two components as Data Migration: an Orchestrator and one or more Workers.
The Orchestrator connects to the Snowflake account. It requires privileges to create and operate the SNOWCONVERT_AI database, where workflow, task, and validation metadata is stored.
One or more Workers connect to both the source system and the Snowflake account. Workers run validation queries on both sides, export intermediate results, upload them to Snowflake, and write their outcomes to shared results tables. Workers pick up tasks created by the Orchestrator and process them in parallel.
Validation results (schema, metrics, and row-level outcomes) are ingested into the shared results tables via Snowpipe by default and then evaluated by the Orchestrator.
Each Worker needs the optional snowflake-data-validation package installed in its Python environment to execute validation tasks. Workers without it can still run Data Migration tasks but skip any data_validation tasks queued for them.
Deployment options¶
The Orchestrator and Workers can be deployed in multiple ways:
Both on Snowpark Container Services (in the Snowflake account).
Both in the customer’s environment, including custom hardware, virtual machines, or containers.
Orchestrator on Snowpark Container Services and Workers in the customer’s environment, or the other way around.
The following requirements apply to the environment where the Orchestrator and Workers run:
The Orchestrator and Workers are Python packages, so Python 3.11 or higher must be installed.
Workers typically require an ODBC driver to connect to the source system. For Teradata, the pure-Python teradatasql driver is supported and preferred when installed.
The Orchestrator must be able to connect to the Snowflake account using a role that has privileges to create the SNOWCONVERT_AI database and create schemas and objects within it.
Validation levels¶
Cloud Data Validation performs comparisons between each source table (or view) and its corresponding target in Snowflake at three increasingly detailed levels. Each level can be enabled or disabled independently in the workflow configuration.
Schema validation (L1)¶
Schema validation confirms that the structure of each migrated table is preserved in Snowflake. It compares the following attributes between source and target:
Table name
Column names
Ordinal position of each column
Data types
Character maximum length for text columns
Numeric precision and scale for numeric columns
Row count
Metrics validation (L2)¶
Metrics validation confirms that aggregate statistics of the migrated data match the original source. Specific metrics vary by column data type, but metrics validation typically compares:
Minimum value
Maximum value
Average
Null count
Distinct count
Standard deviation
Variance
Row validation (L3)¶
Row validation performs row-level or cell-level comparison between source and target. It supports two modes:
row: compares entire rows identified by the configured index_column_list.
cell: compares individual cell values and reports the specific columns that differ.
A hybrid mode (two-phase MD5 plus cell drilldown) is also available and is used automatically when the source and target support it, so that only rows flagged by the first phase are re-examined at the cell level. Row validation is disabled by default and is typically applied only to the tables where it is needed because it is the most resource-intensive level.
Prerequisites¶
Before you use Cloud Data Validation, make sure the following are in place:
Python environment: Python 3.11 or higher, with the snowflake-data-migration-orchestrator, snowflake-data-exchange-agent, and snowflake-data-validation packages installed (see Installation).
Snowflake access: Connections for the Orchestrator and Workers in your Snowflake config.toml or connections.toml, using a role that can create the SNOWCONVERT_AI database and its objects. The first time the Orchestrator starts, it creates that database and related resources if they don’t exist yet. On later runs, use a role that can administer SNOWCONVERT_AI and its objects; sticking with the same role you used for the initial creation is the simplest way to avoid permission issues.
Source connectivity: For SQL Server and Redshift sources, an ODBC driver on the machine where Workers run. For Teradata, install the teradatasql Python driver (preferred) or configure pyodbc with the Teradata ODBC driver. Programmatic Access Tokens (PATs) are recommended for Snowflake connections; see Connecting to Snowflake with a PAT.
Target data available: The tables you want to validate must already exist in Snowflake, either because you migrated them using Data Migration or by another means. For accurate validation, don’t alter the migrated data between the migration and the validation run.
Snowpark Container Services (optional): If you deploy the Orchestrator or Workers on Snowflake compute, your account needs SPCS. See the Snowpark Container Services overview. Running both components outside Snowflake doesn’t require SPCS.
Warning
For accurate validation and to avoid false negatives, don’t alter the migrated data during the validation process.
Setup¶
Installation¶
Install the three Python packages (Python 3.11 or higher) on the machines where the Orchestrator and Workers will run:
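The original installation commands are not reproduced in this copy. A minimal sketch, assuming pip and a clean Python 3.11+ virtual environment:

```bash
# Orchestrator host
pip install snowflake-data-migration-orchestrator

# Worker hosts (the third package is required only for validation tasks)
pip install snowflake-data-exchange-agent
pip install snowflake-data-validation
```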
Keep the following in mind:
The Orchestrator host only needs
snowflake-data-migration-orchestrator.Workers that will execute Cloud Data Validation tasks need both
snowflake-data-exchange-agentandsnowflake-data-validation. Workers withoutsnowflake-data-validationcan still process Data Migration tasks but skipdata_validationtasks.On a single host that runs both components, install all three packages in the same Python environment.
Usage¶
To validate migrated data using this solution, complete the following high-level steps:
Start the Orchestrator.
Start the Workers.
Create a Cloud Data Validation Workflow.
Monitor the validation workflow until completion.
A Cloud Data Validation Workflow is a job submitted to the system that describes which tables to validate, at which levels, and with which comparison rules. You can submit multiple workflows simultaneously and monitor them. The Orchestrator breaks each workflow into smaller tasks, one or more per table, and dispatches them to available Workers.
The Orchestrator that runs validation is the same process that runs Data Migration. If you have already set up and started an Orchestrator and Workers for migration, you can reuse them for validation: make sure snowflake-data-validation is installed on the Workers, then create a validation workflow as described in Creating a Cloud Data Validation Workflow.
Starting the Orchestrator¶
Start the Orchestrator by running:
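The exact command is not preserved in this copy. A sketch, assuming the package exposes a data-migration-orchestrator console entry point (the entry-point name is an assumption; the start subcommand is documented below):

```bash
# Assumed entry-point name
data-migration-orchestrator start
```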
When invoked without a subcommand, start is assumed for backward compatibility:
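Under the same assumption, the backward-compatible form without a subcommand would be:

```bash
# 'start' is implied when no subcommand is given
data-migration-orchestrator
```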
The start command accepts optional logging flags:
| Flag | Default | Description |
|---|---|---|
|  |  | Where to send logs: stdout, a log file, or both. |
|  |  | Path to the log file (used when the destination includes a file). |
|  |  | Logging level. |
For example, to run with verbose logging to stdout only:
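The flag names are not preserved in the table above, so the following sketch uses hypothetical flag names (--log-destination, --log-level) purely for illustration, together with the assumed entry-point name:

```bash
# Hypothetical flag names shown for illustration only
data-migration-orchestrator start --log-destination stdout --log-level DEBUG
```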
Before running, make sure the SNOWFLAKE_CONNECTION_NAME environment variable is set to a value that matches one of the connection names in your Snowflake config.toml or connections.toml. That is the name of the connection used to connect to the target Snowflake account.
By default, workflow and task metadata objects are created under SNOWCONVERT_AI.DATA_MIGRATION, and data validation metadata is created under SNOWCONVERT_AI.DATA_VALIDATION. To use a different database or schema for that metadata, set the following environment variables before starting the Orchestrator:
CUSTOM_SNOWFLAKE_DATABASE_FOR_METADATA (default SNOWCONVERT_AI)
CUSTOM_SNOWFLAKE_SCHEMA_FOR_DATA_MIGRATION_METADATA (default DATA_MIGRATION)
CUSTOM_SNOWFLAKE_SCHEMA_FOR_DATA_VALIDATION_METADATA (default DATA_VALIDATION)
If you override these values, set the same database and migration schema in each Worker’s configuration using snowflake_database_for_metadata and snowflake_schema_for_data_migration_metadata under [application] (see Worker configuration).
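For example, a sketch of the environment variables you might export in the Orchestrator's shell session (the connection and object names are placeholders):

```bash
# Connection name must match an entry in config.toml or connections.toml
export SNOWFLAKE_CONNECTION_NAME=my_snowflake_conn

# Optional overrides for the metadata location (defaults shown in the list above)
export CUSTOM_SNOWFLAKE_DATABASE_FOR_METADATA=MY_MIGRATION_DB
export CUSTOM_SNOWFLAKE_SCHEMA_FOR_DATA_MIGRATION_METADATA=DATA_MIGRATION
export CUSTOM_SNOWFLAKE_SCHEMA_FOR_DATA_VALIDATION_METADATA=DATA_VALIDATION
```

If you override the database or migration schema, mirror the same values in each Worker's TOML configuration:

```toml
[application]
snowflake_database_for_metadata = "MY_MIGRATION_DB"
snowflake_schema_for_data_migration_metadata = "DATA_MIGRATION"
```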
The Orchestrator runs until you stop it, and Cloud Data Validation Workflows require an active Orchestrator to complete. You can, however, safely stop the Orchestrator at any point and restart it later; ongoing workflows resume from where they left off.
Starting the Workers¶
The Worker (the data-exchange-agent) provides two subcommands: run (default) and test.
Run¶
Start a Worker that processes tasks:
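A sketch using the data-exchange-agent entry point named above. The configuration-file flag name is an assumption, since the flag column in the table below was not preserved; --max-parallel-tasks is referenced later on this page:

```bash
# --config-file is an assumed flag name
data-exchange-agent run --config-file ./worker-config.toml --max-parallel-tasks 4
```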
The run subcommand accepts the following flags:
| Flag | Short | Default | Description |
|---|---|---|---|
|  |  |  | Path to the TOML configuration file. |
| --max-parallel-tasks |  | from config | Maximum number of parallel tasks. |
|  |  | from config | Interval (seconds) between task fetch attempts. |
|  |  |  | Host to bind the HTTP server to. |
|  |  |  | Port to bind the HTTP server to. |
|  |  | off | Run task handling only, without starting the HTTP server. |
|  |  | from config | Base directory for exported files before upload. |
|  |  | off | Enable debug mode. |
See Worker configuration for the configuration file specification.
Test¶
Before starting a Worker for real, verify that every configured source and target connection works:
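A sketch using the test subcommand named above (the configuration-file flag name is an assumption):

```bash
# Uses the same configuration file as 'run'
data-exchange-agent test --config-file ./worker-config.toml
```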
This runs SELECT 1 on every configured connection and reports the results.
Workers run until you stop them. Cloud Data Validation Workflows require at least one active Worker with snowflake-data-validation installed. You can safely stop Workers at any point and restart them later; ongoing workflows resume from where they left off.
Creating a Cloud Data Validation Workflow¶
After installing the Orchestrator, create a validation workflow by running the following command:
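The exact command is not preserved in this copy. A skeleton sketch, assuming the same data-migration-orchestrator entry point; the subcommand name and the way the JSON configuration file is passed are also assumptions:

```bash
# Subcommand name and configuration-file argument are assumptions
data-migration-orchestrator create-data-validation-workflow \
  --source-platform <sqlserver|redshift|teradata> \
  [--name <WORKFLOW_NAME>] \
  [--connection-name <CONNECTION_NAME>] \
  <path-to-validation-config.json>
```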
Keep the following in mind:
The validation configuration specification can be found in Validation workflow configuration reference.
--source-platform is required. Supported values are sqlserver, redshift, and teradata. The value must match the source_platform field in the JSON configuration file.
--name is optional. The workflow name must be alphanumeric and cannot start with a digit. Defaults to MY_VALIDATION_WORKFLOW when omitted.
--connection-name is optional. When omitted, the Orchestrator uses the default Snowflake connection from environment variables. When provided, it should match a named connection in your config.toml or connections.toml file.
For example, to create a named SQL Server validation workflow using a specific Snowflake connection:
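Under the same assumptions as the skeleton above:

```bash
data-migration-orchestrator create-data-validation-workflow \
  --source-platform sqlserver \
  --name SALES_VALIDATION \
  --connection-name my_snowflake_conn \
  ./validation-config.json
```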
New workflow rows are inserted into the WORKFLOW table in the data migration metadata schema (default SNOWCONVERT_AI.DATA_MIGRATION) with WORKFLOW_TYPE set to data-validation. Validation results and related objects are stored under the data validation metadata schema (default SNOWCONVERT_AI.DATA_VALIDATION).
Monitoring a Cloud Data Validation Workflow¶
Each workflow goes through different stages throughout its lifecycle:
Pending: No tasks have been created for this workflow yet.
Executing: Tasks have been created for this workflow and there are still tasks that haven’t reached a terminal state (COMPLETED or FAILED).
Completed: All tasks have reached a terminal state (COMPLETED or FAILED).
In the data validation metadata schema (default SNOWCONVERT_AI.DATA_VALIDATION), the following views can be queried to understand the status of validation workflows:
| View | Description |
|---|---|
|  | One row per validated table. Summarizes overall validation status. Can be filtered by workflow. |
|  | Per-table breakdown with partition-level L2/L3 status. |
|  | Errors encountered during validation. Can be filtered by workflow. |
|  | Non-fatal warnings, for example, unsupported column types or metric exclusions. Can be filtered by workflow. |
In the same schema, the DATA_VALIDATION_DASHBOARD Streamlit dashboard provides a visual overview of validation progress and results, including a Table Progress tab that aggregates the views above.
You can also inspect validation queries in QUERY_HISTORY using the QUERY_TAG values set by the Orchestrator and Workers. See Query tagging.
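For example, a sketch of filtering QUERY_HISTORY by the workflow identifier embedded in the JSON query tag; the tag key names are not reproduced in the Query tagging table on this page, so a simple substring match is used:

```sql
-- Substring match on the workflow ID inside the JSON QUERY_TAG
SELECT query_id, start_time, query_tag, query_text
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE query_tag ILIKE '%<your_workflow_id>%'
ORDER BY start_time DESC;
```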
Validation outcomes are classified into three categories:
| Category | Description |
|---|---|
| OK | Values match exactly between the source database and Snowflake. |
| Warning | The Snowflake table has minor differences that don’t affect the data (for example, higher numeric precision). |
| Error | Values don’t match between the original database and the Snowflake database. |
Considerations and recommendations¶
Connecting to Snowflake with a PAT¶
Use Programmatic Access Tokens (PATs) for connections used by the Orchestrator and Workers. This avoids the need to constantly authenticate through the browser or with an authenticator app. You need to establish a Network Policy or temporarily bypass the requirement for a Network Policy (which you can do from Snowsight).
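For example, a sketch of a connections.toml entry that uses a PAT. This relies on the general Snowflake behavior that a PAT can be supplied wherever a password is accepted; account, user, and token values are placeholders:

```toml
[my_snowflake_conn]
account = "<account_identifier>"
user = "<user_name>"
password = "<programmatic_access_token>"  # PAT supplied in place of a password
warehouse = "<warehouse_name>"
role = "<role_name>"
```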
Running Orchestrator and Workers on SPCS¶
To leverage Snowflake compute for these tasks:
Prepare Docker images that use the Python modules with the appropriate configuration.
Push those Docker images to an Image Repository in Snowflake.
Execute the Orchestrator and/or Worker images using Snowpark Container Services.
Keep the following in mind:
Run them as Services, not Jobs.
You can run only one component (Orchestrator or Workers) in SPCS and the other on a different platform.
Monitor the SPCS service and suspend it when it isn’t being used.
Depending on the network configuration of the source system, you might need to configure an External Access Integration so that these services can connect to your source system.
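For example, a sketch of suspending an idle SPCS service and allowing outbound access to a source host; the service, network rule, and integration names are placeholders:

```sql
-- Suspend the service while it is idle
ALTER SERVICE my_validation_worker_service SUSPEND;

-- Allow the service to reach the source system
CREATE OR REPLACE NETWORK RULE source_db_rule
  MODE = EGRESS
  TYPE = HOST_PORT
  VALUE_LIST = ('my-sqlserver-host.example.com:1433');

CREATE OR REPLACE EXTERNAL ACCESS INTEGRATION source_db_eai
  ALLOWED_NETWORK_RULES = (source_db_rule)
  ENABLED = TRUE;
```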
Initial testing¶
For an early test run, use a separate validation configuration whose tables array lists only the table or small set of tables you want to validate. On each of those entries:
Set where_clause (and optionally target_where_clause) to an SQL-like predicate so only a subset of rows is validated, for example, a bounded primary-key range or a narrow date range in the source dialect.
Keep validation_configuration.row_validation disabled for the first run, and enable it later on a smaller subset of tables.
Use a small target_partition_size_mb or target_partition_size_rows to keep partitions tiny during the test (see the sketch after this list).
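A sketch of a single restricted table entry following these guidelines. The table-name key follows the quoting note in the configuration reference below; property names not named elsewhere on this page are assumptions:

```json
{
  "source_platform": "sqlserver",
  "tables": [
    {
      "tableName": "dbo.ORDERS",
      "where_clause": "ORDER_ID BETWEEN 1 AND 100000",
      "index_column_list": ["ORDER_ID"],
      "target_partition_size_mb": 16
    }
  ]
}
```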
After you confirm connectivity and results, create your full workflow: remove or relax where_clause, adjust partition sizes, and enable row validation only on the tables where it is needed.
Managing Workers¶
The time it takes to complete a validation workflow depends on many variables, but the number of Workers (and threads per Worker) has the greatest impact, as it determines how many validation tasks can be executed in parallel. Consider the following:
You don’t need to run two Workers on the same machine. If you want more parallelism on a single machine, increase the thread count instead (--max-parallel-tasks on data-exchange-agent run).
Network bandwidth greatly affects Worker speed and is shared between threads of a Worker.
Even with many Workers and threads running in parallel, the source system might not have enough resources to handle the load.
Keep a low Worker count to avoid overloading your source system.
Consider stopping some or all Workers when the source system is already under heavy load from other operations.
Query tagging¶
Both the Orchestrator and the Worker automatically set Snowflake’s QUERY_TAG session parameter on every query they submit. Tags are compact JSON strings containing identifiers such as the workflow ID, task ID, and component version. You can use these tags to filter and attribute validation queries in QUERY_HISTORY:
| Tag key | Present on | Description |
|---|---|---|
|  | Infrastructure queries | Component package version. |
|  | Task-processing queries | Workflow that originated the task. |
|  | Task-processing queries | Individual task identifier. |
|  | Orchestrator task-processing queries | Orchestrator package version. |
|  | Worker task-processing queries | Worker package version. |
Configuration reference¶
Validation workflow configuration reference¶
The validation workflow configuration file is a JSON object. The following sections describe its structure and properties.
Note
Names that require quoting (or brackets) must be manually quoted as they would normally be in JSON. For example: "tableName": "\"MyCaseSensitiveTable\"".
Top-level object¶
| Property | Type | Required | Description |
|---|---|---|---|
| source_platform | string | Yes | Source dialect identifier: sqlserver, redshift, or teradata. |
|  |  |  | Defaults to |
|  |  |  | Default target database name for tables that don’t specify one. |
| validation_configuration | object |  | Global validation levels and options. See Validation configuration. |
| comparison_configuration | object |  | Numeric tolerance and optional type mapping file. See Comparison configuration. |
|  |  |  | Map of source database names to Snowflake database names. |
|  |  |  | Map of source schema names to Snowflake schema names. |
| tables | array | Yes | At least one table to validate. See Per-table and per-view entry. |
| views | array |  | Additional view entries. Uses the same shape as tables. |
|  |  |  | When |
| target_partition_size_rows | number |  | Desired rows per partition. Mutually exclusive with target_partition_size_mb. |
| target_partition_size_mb | number |  | Desired MB per partition. Mutually exclusive with target_partition_size_rows. |
|  |  |  | When |
Validation configuration¶
The validation_configuration object sets global defaults for which validation levels to run. Any field set here can be overridden per table by nesting a validation_configuration object on that table entry.
When validation_configuration is omitted, the Orchestrator applies these defaults: schema validation and metrics validation are enabled; row validation is disabled; row_validation_mode defaults to row; continue_on_failure defaults to false; max_failed_rows_number defaults to 100; exclude_metrics defaults to false; apply_metric_column_modifier defaults to true.
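As a sketch, those defaults correspond to a configuration fragment like the following. The property names for the schema and metrics levels are not preserved in the table below, so schema_validation and metrics_validation are assumptions; the remaining names come from the text above:

```json
"validation_configuration": {
  "schema_validation": true,
  "metrics_validation": true,
  "row_validation": false,
  "row_validation_mode": "row",
  "continue_on_failure": false,
  "max_failed_rows_number": 100,
  "exclude_metrics": false,
  "apply_metric_column_modifier": true
}
```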
| Property | Type | Description |
|---|---|---|
|  | boolean | Level 1: schema and column consistency checks. |
|  | boolean | Level 2: statistical metrics comparison. |
| row_validation | boolean | Level 3: row-level or cell-level data comparison. |
| row_validation_mode | string | For row validation: row (default) or cell. |
| continue_on_failure | boolean | Whether to continue to the next validation level after a failure. |
| max_failed_rows_number | number | Cap on failed rows reported for L3 validation. Must be greater than 0 when set. |
| exclude_metrics | boolean | Whether to exclude unsupported metric columns. |
| apply_metric_column_modifier | boolean | Whether to apply metric column modifiers. |
Comparison configuration¶
The comparison_configuration object controls numeric tolerance and optional type-mapping overrides used during comparisons.
| Property | Type | Description |
|---|---|---|
|  | number | Numeric comparison tolerance for metrics. Must be greater than 0 when set; a default applies when omitted. |
|  | string | Optional path to a custom type mapping file for comparisons. |
Per-table and per-view entry¶
Each object in tables (or views) describes one database object to validate.
| Property | Type | Required | Description |
|---|---|---|---|
|  |  | Yes | Source object name. The exact format depends on the source platform. |
|  |  |  | Default |
|  |  |  | Columns to include or exclude, depending on the selected mode. |
|  |  |  | Target object name override. |
|  |  |  | Per-table target database override. |
|  |  |  | Per-table target schema override. |
| where_clause | string |  | Filter on the source side. |
| target_where_clause | string |  | Filter on the target side. |
| index_column_list | array |  | Columns used to align rows on the source. |
|  | array |  | Columns used to align rows on the target. |
|  |  |  | Map of source column name to target column name. |
|  |  |  | Case sensitivity for identifiers. |
|  |  |  | Chunking hint for validation. Must be greater than 0 when set. |
| max_failed_rows_number | number |  | Overrides the global cap for this object. |
| exclude_metrics | boolean |  | Per-object metrics exclusion override. |
| apply_metric_column_modifier | boolean |  | Per-object modifier override. |
|  |  |  | Typically |
| column_names_to_partition_by | array |  | Columns used for range-based partitioning. |
| target_partition_size_rows | number |  | Per-table override for desired rows per partition. Mutually exclusive with target_partition_size_mb. |
| target_partition_size_mb | number |  | Per-table override for desired MB per partition. Mutually exclusive with target_partition_size_rows. |
| validation_configuration | object |  | Nested object with the same fields as the global validation_configuration. |
Partitioning¶
When column_names_to_partition_by is set, the Orchestrator splits the table into range-based partitions. Cloud Data Validation and Data Migration share the same sizing logic:
Compute a target rows-per-partition from whichever user setting is provided. The two settings are mutually exclusive:
target_partition_size_rows is used as-is.
target_partition_size_mb is converted to rows using target_mb / avg_row_mb.
If neither is set, Cloud Data Validation defaults to 200 MB per partition.
Apply an internal cap. System-imposed maximums (not user-configurable) limit partition size to safe infrastructure bounds.
Derive the partition count as ceil(row_count / effective_rows_per_partition), or 1 when the entire table fits in a single partition.
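A small sketch of the sizing arithmetic described above; the internal cap is not documented here, so it appears as an illustrative parameter:

```python
import math

def partition_count(row_count: int,
                    avg_row_mb: float,
                    target_rows: int | None = None,
                    target_mb: float | None = None,
                    internal_cap_rows: int = 10_000_000) -> int:
    """Derive the number of range-based partitions for one table."""
    if target_rows is not None:
        rows_per_partition = target_rows                  # used as-is
    elif target_mb is not None:
        rows_per_partition = int(target_mb / avg_row_mb)  # MB converted to rows
    else:
        rows_per_partition = int(200 / avg_row_mb)        # 200 MB default
    # System-imposed cap (illustrative value only)
    effective = min(rows_per_partition, internal_cap_rows)
    return max(1, math.ceil(row_count / effective))

# Example: 50M rows at ~0.001 MB per row with the 200 MB default
# -> 200,000 rows per partition -> 250 partitions
print(partition_count(50_000_000, 0.001))  # 250
```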
Example: SQL Server validation¶
The following configuration runs schema and metrics validation on two SQL Server tables, overrides defaults on the second table, and enables row-level validation there:
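The original configuration listing is not reproduced in this copy; the following sketch illustrates the description above. The table-name key follows the quoting note earlier on this page; target_database and the schema/metrics level keys are assumptions:

```json
{
  "source_platform": "sqlserver",
  "target_database": "ANALYTICS",
  "validation_configuration": {
    "schema_validation": true,
    "metrics_validation": true,
    "row_validation": false
  },
  "tables": [
    {
      "tableName": "dbo.CUSTOMERS"
    },
    {
      "tableName": "dbo.ORDERS",
      "index_column_list": ["ORDER_ID"],
      "where_clause": "ORDER_DATE >= '2024-01-01'",
      "validation_configuration": {
        "row_validation": true,
        "max_failed_rows_number": 50
      }
    }
  ]
}
```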
Example: Redshift validation after UNLOAD migration¶
The following configuration validates Redshift tables that were migrated using the UNLOAD extraction strategy. Schema and metrics validation are enabled globally, and row validation is enabled on a single high-value table:
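A shortened sketch under the same assumptions as the SQL Server example above:

```json
{
  "source_platform": "redshift",
  "tables": [
    { "tableName": "public.dim_customer" },
    {
      "tableName": "public.fact_sales",
      "index_column_list": ["sale_id"],
      "validation_configuration": { "row_validation": true }
    }
  ]
}
```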
For the corresponding migration workflow, see Redshift UNLOAD in the Data Migration page.
Example: Teradata validation¶
The following configuration validates Teradata tables using schema and metrics validation, with a custom tolerance and per-table partition sizing:
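A sketch under the same assumptions; the tolerance property name inside comparison_configuration is not preserved in the reference table above and is assumed here:

```json
{
  "source_platform": "teradata",
  "comparison_configuration": { "tolerance": 0.001 },
  "tables": [
    {
      "tableName": "SALES_DB.DAILY_SALES",
      "column_names_to_partition_by": ["SALE_DATE"],
      "target_partition_size_rows": 1000000
    }
  ]
}
```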
Worker configuration¶
The Worker configuration file uses TOML format and is shared between Data Migration and Cloud Data Validation. Workers that execute validation tasks need the snowflake-data-validation package installed in the same Python environment.
| Section | Property | Type | Description |
|---|---|---|---|
| Top level | selected_task_source | string | Required. Must be "snowflake_stored_procedure". |
|  |  |  | Maximum number of tasks the Worker processes in parallel using threads. |
|  |  |  | Interval in seconds between attempts to fetch new tasks from the Orchestrator. |
|  |  |  | Optional. Interval in seconds between task lease renewals. Default |
|  |  |  | Optional. A user-defined affinity for the Worker. |
| [application] | snowflake_database_for_metadata | string | Optional. Database where the Orchestrator deployed the task queue. Default SNOWCONVERT_AI. |
| [application] | snowflake_schema_for_data_migration_metadata | string | Optional. Schema for the task queue. Default DATA_MIGRATION. |
|  |  |  | Optional. Base directory where each task’s exported Parquet or CSV files are written before upload. Defaults to |
|  | N/A |  | Source system connection. Workers typically require an ODBC driver to connect to the source. |
|  |  |  | The name of the connection entry in your config.toml or connections.toml. |
When selected_task_source is "snowflake_stored_procedure", the Worker calls task-queue stored procedures using application.snowflake_database_for_metadata and application.snowflake_schema_for_data_migration_metadata. These values are independent of the Snowflake session defaults (SNOWFLAKE_DATABASE, SNOWFLAKE_SCHEMA) in the connection profile.
An example configuration file looks like this:
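The original example is not reproduced in this copy. A sketch with placeholder values: selected_task_source and the [application] metadata keys appear elsewhere on this page, while the section names for the source and target connections are assumptions:

```toml
selected_task_source = "snowflake_stored_procedure"

[application]
snowflake_database_for_metadata = "SNOWCONVERT_AI"
snowflake_schema_for_data_migration_metadata = "DATA_MIGRATION"

# Source connection section goes here; see the examples below (assumed section name)
[source_connection]
odbc_driver = "ODBC Driver 18 for SQL Server"

# Target Snowflake connection (assumed section and property names)
[target_connection]
connection_name = "my_snowflake_conn"   # entry in config.toml or connections.toml
```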
Note
Only one source connection is needed.
Source connection configuration examples¶
The following examples show the supported source connection types for Cloud Data Validation.
1. SQL Server (standard authentication)
The Worker automatically detects the best available ODBC driver for SQL Server, preferring newer versions (ODBC Driver 18 > 17 > 13 > 11). To manually select a driver, set odbc_driver to the exact name returned by pyodbc.drivers():
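A sketch of a SQL Server source connection section; apart from odbc_driver, encrypt, and trust_server_certificate, the section and property names are assumptions:

```toml
[source_connection]                                # assumed section name
odbc_driver = "ODBC Driver 18 for SQL Server"      # exact name from pyodbc.drivers()
host = "my-sqlserver-host.example.com"             # assumed property names below
port = 1433
database = "SalesDB"
user = "validation_user"
password = "<password>"
encrypt = true
trust_server_certificate = true
```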
The encrypt and trust_server_certificate parameters are optional. When omitted, the ODBC driver uses its default behavior:
ODBC Driver 17 and below: encryption is disabled by default.
ODBC Driver 18 and above: encryption is mandatory by default.
For development environments or SQL Servers without encryption support, either omit the encryption parameters or set encrypt = false.
2. Amazon Redshift (IAM authentication)
3. Amazon Redshift (standard authentication)
4. Teradata
The Worker supports two Teradata drivers and automatically selects the best one available:
teradatasql (preferred). Pure-Python driver, no OS-level ODBC installation required. Install with pip install teradatasql.
ODBC fallback. If teradatasql isn’t installed, the Worker falls back to pyodbc with the Teradata ODBC driver. Set driver_name to the exact name returned by pyodbc.drivers().
When teradatasql is available, driver_name is ignored and no ODBC driver needs to be installed on the host. Use dbc_name when your Teradata COP or TDPID alias differs from host.
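A sketch of a Teradata source connection; only driver_name, dbc_name, and host are named on this page, so the remaining names are assumptions:

```toml
[source_connection]                  # assumed section name
host = "teradata-prod.example.com"
dbc_name = "tdprod"                  # optional COP/TDPID alias when it differs from host
user = "validation_user"             # assumed property names
password = "<password>"
# driver_name is only needed for the pyodbc fallback; ignored when teradatasql is installed
# driver_name = "Teradata Database ODBC Driver 17.20"
```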
Command reference¶
The following table lists every CLI command used to set up and operate Cloud Data Validation.
| Command | Purpose |
|---|---|
| pip install snowflake-data-migration-orchestrator | Install the Orchestrator package. |
| pip install snowflake-data-exchange-agent | Install the Worker package. |
| pip install snowflake-data-validation | Install the optional validation dependency required on Workers to process validation tasks. |
|  | Start the Orchestrator. |
|  | Start the Orchestrator with custom logging settings. |
|  | Start the Orchestrator (backward-compatible; start is assumed). |
|  | Create a Cloud Data Validation Workflow. |
|  | Create a Cloud Data Migration Workflow on the same Orchestrator. |
|  | Start a Worker with a specific configuration file. |
|  | Start a Worker using the default configuration. |
|  | Start a Worker (backward-compatible; run is assumed). |
|  | Start a Worker with a custom thread count and HTTP server port. |
|  | Start a Worker that processes tasks only (no HTTP server). |
|  | Start a Worker with a custom export directory. |
|  | Start a Worker in debug mode. |
| data-exchange-agent test | Test every configured source and target connection (runs SELECT 1 on each). |