SnowConvert AI: Data migration [Preview]¶
The Data Migration feature of SnowConvert provides a fault-tolerant, scalable solution for moving data from external sources into Snowflake. This feature is specifically designed for cases where you are moving data from a system you plan to decommission. For replication purposes, other solutions are available that might better fit your use case.
Architecture overview¶
The Data Migration feature uses two main components: an Orchestrator and one or more Workers.
The Orchestrator connects to the Snowflake account. It requires privileges to create and operate the SNOWCONVERT_AI database, where metadata is stored.
One or more Workers connect to both the source system and the Snowflake account. Workers read data from the source system and upload it to a Snowflake stage. Workers pick up tasks created by the Orchestrator and process them in parallel.
Files uploaded to the Snowflake stage are copied into the target tables using a COPY INTO statement submitted and monitored by the Orchestrator.
Deployment options¶
The Orchestrator and Workers can be deployed in multiple ways:
Both on Snowpark Container Services (in the Snowflake account).
Both in the customer’s environment, including custom hardware, virtual machines, or containers.
Orchestrator on Snowpark Container Services and Workers in the customer’s environment, or the other way around.
The following requirements apply to the environment where the Orchestrator and Workers run:
The Orchestrator and Workers are Python packages, so Python 3.11 or higher must be installed.
Workers typically require an ODBC driver to connect to the source system.
The Orchestrator must be able to connect to the Snowflake account using a role that has privileges to create the SNOWCONVERT_AI database and create schemas and objects within it.
Prerequisites¶
Before you use Data Migration, make sure the following are in place:
Python environment: Python 3.11 or higher, with the snowflake-data-migration-orchestrator and snowflake-data-exchange-agent packages installed (see Installation).
Snowflake access: Connections for the Orchestrator and Workers in your Snowflake config.toml or connections.toml, using a role that can create the SNOWCONVERT_AI database and its objects. The first time the Orchestrator starts, it creates that database and related resources if they do not exist yet. On later runs, use a role that can administer SNOWCONVERT_AI and its objects; sticking with the same role you used for the initial creation is the simplest way to avoid permission issues.
Source connectivity: For typical source databases, an ODBC driver on the machine where Workers run. Programmatic Access Tokens (PATs) are recommended for Snowflake connections; see Connecting to Snowflake with a PAT.
Hybrid Tables: Hybrid Tables must be enabled and available in your Snowflake account and region for this feature. Review Hybrid tables and Hybrid tables limitations so you understand relevant platform requirements.
Snowpark Container Services (optional): If you deploy the Orchestrator or Workers on Snowflake compute, your account needs SPCS. See the Snowpark Container Services overview. Running both components outside Snowflake does not require SPCS.
Setup¶
Installation¶
Two Python packages are used for installation (Python 3.11 or higher):
snowflake-data-migration-orchestrator: For the Orchestrator.
snowflake-data-exchange-agent: For Workers.
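Assuming both packages are available from your package index under the names above, installation is a standard pip invocation:

```shell
# Requires Python 3.11 or higher
pip install snowflake-data-migration-orchestrator   # Orchestrator
pip install snowflake-data-exchange-agent           # Workers
```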
Alternatively, you can install the SnowConvert AI (SCAI) CLI and use its commands directly. This requires Snowpark Container Services (SPCS) to be enabled on the Snowflake account.
Usage¶
To migrate data using this solution, complete the following high-level steps:
Start the Orchestrator.
Start the Workers.
Create a Data Migration Workflow.
Monitor the Data Migration Workflow until completion.
A Data Migration Workflow is essentially an action or goal for the system to complete, such as migrating a specific set of tables with a given configuration. You can submit multiple workflows simultaneously and monitor them. The Orchestrator breaks Data Migration Workflows into smaller tasks, which typically involves splitting a table into partitions before extracting its data and loading it to Snowflake.
Using Python packages¶
Starting the Orchestrator¶
After installing the Orchestrator, start it by running the following command:
Before running, make sure the SNOWFLAKE_CONNECTION_NAME environment variable is set to a value that matches one of the connection names in your Snowflake config.toml or connections.toml. That is the name of the connection used to connect to the target Snowflake account.
By default, workflow and task metadata objects are created under SNOWCONVERT_AI.DATA_MIGRATION. To use a different database or schema for that metadata, set the environment variables CUSTOM_SNOWFLAKE_DATABASE_FOR_METADATA (default SNOWCONVERT_AI) and CUSTOM_SNOWFLAKE_SCHEMA_FOR_DATA_MIGRATION_METADATA (default DATA_MIGRATION) before starting the Orchestrator. If you override these values, set the same database and schema in each Worker’s configuration using snowflake_database_for_metadata and snowflake_schema_for_data_migration_metadata under [application] (see Worker configuration).
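As a sketch, preparing the environment and starting the Orchestrator might look like the following. The environment variable names are documented above; the snowflake-data-migration-orchestrator entry-point name is an assumption here, so check the package's documentation for the exact command:

```shell
# Must match a connection name in config.toml / connections.toml
export SNOWFLAKE_CONNECTION_NAME="my_snowflake_connection"

# Optional: override where workflow and task metadata is stored
export CUSTOM_SNOWFLAKE_DATABASE_FOR_METADATA="SNOWCONVERT_AI"
export CUSTOM_SNOWFLAKE_SCHEMA_FOR_DATA_MIGRATION_METADATA="DATA_MIGRATION"

# Hypothetical entry point installed by the Orchestrator package
snowflake-data-migration-orchestrator
```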
The Orchestrator runs until you stop it. Data Migration Workflows require an active Orchestrator to complete. However, the Orchestrator can be safely stopped at any point and restarted later; ongoing Data Migration Workflows resume from where they left off.
Starting the Workers¶
After installing the Worker, start it by running the following command:
The path to the configuration file can be omitted. In that case, the Worker looks for a file called configuration.toml in your current directory. For the Worker configuration specification, see Worker configuration.
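A sketch of starting a Worker (the snowflake-data-exchange-agent entry-point name is an assumption; check the package's documentation for the exact command):

```shell
# The path argument is optional; it defaults to ./configuration.toml
snowflake-data-exchange-agent ./configuration.toml
```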
Workers run until you stop them. Data Migration Workflows require at least one active Worker to complete. However, Workers can be safely stopped at any point and restarted later; ongoing Data Migration Workflows resume from where they left off.
Creating a Data Migration Workflow¶
After installing the Orchestrator, create workflows by running the following command:
Keep the following in mind:
The Workflow Configuration specification can be found in Workflow configuration reference.
The workflow name must be composed of alphanumeric characters and cannot start with a digit.
You must specify the name of the Snowflake connection you want to use, as it appears in your config.toml or connections.toml file.
Supported source platforms are sqlserver and redshift.
Monitoring a Data Migration Workflow¶
Each workflow goes through different stages throughout its lifecycle:
Pending: No tasks have been created for this workflow yet.
Executing: Tasks have been created for this workflow and there are still tasks that have not reached a terminal state (COMPLETED or FAILED).
Completed: All tasks have reached a terminal state (COMPLETED or FAILED).
In the SNOWCONVERT_AI.DATA_MIGRATION schema, the following tables and views can be queried to understand the status of one or more workflows:
WORKFLOW: Contains one row per workflow, including start/end time, status, and configuration.
TABLE_PROGRESS_WITH_EXAMPLE_ERROR: Contains one row per table being migrated as part of a workflow. Includes information about how many partitions are in each stage (extraction, loading, completed, or failed), as well as related errors. Can be filtered by WORKFLOW_ID.
DATA_MIGRATION_ERROR: For each partition of a table being migrated, contains the first known error that affected the migration of that partition. Can be filtered by WORKFLOW_ID.
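For example, the tables and views above can be queried directly (the WORKFLOW_ID value is a placeholder):

```sql
-- Overall status of every workflow
SELECT * FROM SNOWCONVERT_AI.DATA_MIGRATION.WORKFLOW;

-- Per-table progress for one workflow
SELECT *
FROM SNOWCONVERT_AI.DATA_MIGRATION.TABLE_PROGRESS_WITH_EXAMPLE_ERROR
WHERE WORKFLOW_ID = '<workflow_id>';

-- First known error per partition for that workflow
SELECT *
FROM SNOWCONVERT_AI.DATA_MIGRATION.DATA_MIGRATION_ERROR
WHERE WORKFLOW_ID = '<workflow_id>';
```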
In the same schema, the DATA_MIGRATION_DASHBOARD Streamlit dashboard can be used to monitor workflows. This dashboard presents data from those tables and views.
Using SCAI CLI¶
If you want to get access to Data Migration through the SCAI CLI, please contact us at snowconvert-support@snowflake.com.
Redshift UNLOAD¶
For Redshift, it is recommended to use the unload extraction strategy. This works as follows:
Large query results are written directly to an S3 bucket instead of being downloaded to the machine running the Worker.
On the Snowflake side, an external stage is configured to reference the corresponding S3 bucket, so that COPY INTO statements can be executed directly from that stage.
For configuration details, see ExtractionStrategy model.
Incremental synchronization¶
You can migrate tables and then re-migrate them in the future, moving only the data that has changed. For more details, see SynchronizationStrategy model.
Considerations and recommendations¶
Connecting to Snowflake with a PAT¶
It is recommended to use Programmatic Access Tokens (PAT) for connections used by the Orchestrator and Workers. This ensures there is no need to constantly authenticate through the browser or with an authenticator app. You will need to establish a Network Policy or temporarily bypass the requirement for a Network Policy (this can be done from Snowsight).
Running Orchestrator and Workers on SPCS¶
To leverage Snowflake compute for these tasks:
Prepare Docker images that use the Python modules with the appropriate configuration.
Push those Docker images to an Image Repository in Snowflake.
Execute the Orchestrator and/or Worker images using Snowpark Container Services.
Keep the following in mind:
It is recommended to execute them as Services, not Jobs.
It is possible to run only one component (Orchestrator or Workers) in SPCS and the other on a different platform.
It is a good practice to monitor the SPCS service and suspend it when it is not being used.
Depending on the network configuration of the source system, you may need to configure an External Access Integration so that these services can connect to your source system.
Initial testing¶
It is recommended to deploy the DDL for the tables you want to migrate before starting data migration. This ensures the target type matches the behavior you want to see in the table and its related views and procedures. You can convert DDL from your source dialect into Snowflake SQL using the code conversion capabilities of SnowConvert AI and/or Cortex Code.
Note
If you don’t deploy the DDL for the tables before starting data migration, the types will be inferred, which may not be as accurate as required.
For an early test run, use a separate workflow configuration whose tables array lists only the table or small set of tables you want to validate. On each of those entries, set whereClauseCriteria to an SQL-like predicate (as you would in a WHERE clause) so only a subset of rows is migrated, for example a bounded primary-key range or a narrow date range in the source dialect. You can also set a small partitionSize (for example maxRowsPerPartition) to keep partitions tiny during the test. After you confirm connectivity, performance, and results, create your full workflow: remove or relax whereClauseCriteria and use "auto" or your production partitionSize settings.
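A sketch of such a test configuration follows. The whereClauseCriteria, partitionSize, and maxRowsPerPartition fields are documented in the configuration reference below; the key names inside source and target are illustrative assumptions:

```json
{
  "tables": [
    {
      "source": { "databaseName": "sales_db", "schemaName": "dbo", "tableName": "Orders" },
      "target": { "databaseName": "SALES_DB", "schemaName": "PUBLIC", "tableName": "ORDERS" },
      "whereClauseCriteria": "ORDER_DATE >= '2024-01-01' AND ORDER_DATE < '2024-02-01'",
      "partitionSize": { "maxRowsPerPartition": 100000 }
    }
  ]
}
```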
Managing Workers¶
The time it takes to complete a workflow depends on many variables, but the number of Workers (and threads per Worker) has the greatest impact, as it determines how many extraction tasks can be executed in parallel. Consider the following:
There is no benefit to running two Workers on the same machine. If you want more parallelism on a single machine, increase the thread count instead.
Network bandwidth greatly affects Worker speed and is shared between threads of a Worker.
Even with many Workers and threads running in parallel, the source system might not have enough resources to handle the load.
Keep a low Worker count to avoid overloading your source system.
Consider stopping some or all Workers when the source system is already under heavy load from other operations.
Configuration reference¶
Workflow configuration reference¶
The Data Migration Workflow configuration file is a JSON object. The following sections describe its structure and properties.
Note
Names that require quoting (or brackets) must be manually quoted as they would normally be in JSON. For example: tableName: "\"MyCaseSensitiveTable\"".
Top-level object¶
| Property | Type | Required | Description |
|---|---|---|---|
| tables | array | Yes | An array of table-specific configurations defining which tables to migrate and how. |
| defaultTableConfiguration | object | No | Shared settings that are inherited by all tables in the tables array. |
| affinity | string | No | An affinity group string. Ensures that only Orchestrator and Worker instances with a matching affinity process this workflow. |
When defaultTableConfiguration is present, each object in tables is merged with those defaults: shared fields apply to every table unless the same field is set again on a specific table entry, in which case the table-level value wins.
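For example, in this sketch both tables inherit partitionSize from defaultTableConfiguration, and the second entry overrides it (the identifier key names inside source and target are abbreviated assumptions):

```json
{
  "defaultTableConfiguration": {
    "partitionSize": "auto"
  },
  "tables": [
    { "source": { "tableName": "CUSTOMERS" }, "target": { "tableName": "CUSTOMERS" } },
    {
      "source": { "tableName": "EVENTS" },
      "target": { "tableName": "EVENTS" },
      "partitionSize": { "maxRowsPerPartition": 500000 }
    }
  ]
}
```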
TableConfiguration model¶
Defines the settings for migrating a single table.
| Property | Type | Required | Description |
|---|---|---|---|
| source | object | Yes | Identifies the source table. |
| target | object | Yes | Identifies the target table in Snowflake. |
|  | array |  | A list of columns used to partition data during the extraction phase. |
|  | object |  | Settings to configure how data is extracted from the source database (see ExtractionStrategy model). |
|  | object |  | Settings for incremental synchronization (see SynchronizationStrategy model). |
|  | array |  | Type conversions applied during migration (see ColumnTypeMapping model). |
|  | array |  | Column renaming mappings (see ColumnNameMapping model). |
|  | array |  | Primary key columns for the source table. Required for … |
| partitionSize | string or object | No | Configures the target size of each partition during extraction. Defaults to "auto". |
| whereClauseCriteria | string | No | An SQL-like filter to select a subset of rows for migration (as in a WHERE clause). |
SourceTargetIdentifier model¶
A nested object used within TableConfiguration to specify a database object. For source, use only the properties in the following table. For target, you can also set the optional properties in Additional target properties.
| Property | Type | Required | Description |
|---|---|---|---|
|  | string |  | The name of the source or target database. |
|  | string |  | The name of the schema containing the table. |
| tableName | string |  | The name of the table to be migrated. |
Additional target properties¶
The following optional fields apply only to the target object (not to source).
| Property | Type | Required | Description |
|---|---|---|---|
| tableType | string | No | The type of the target table. Set to "iceberg" for Iceberg tables. |
| icebergConfig | object | For Iceberg targets | Iceberg-specific settings. Required when tableType is "iceberg". |
Iceberg configuration (target.icebergConfig)¶
Used when target.tableType is "iceberg". Account setup (external volumes, catalog integrations, stages, and privileges) follows Snowflake’s Iceberg documentation; see Apache Iceberg™ tables, Create an Iceberg table, and Configure an external volume.
| Property | Type | Required | Description |
|---|---|---|---|
|  |  |  | Default … |
|  |  |  | Snowflake external volume for Iceberg data and metadata. |
|  |  |  | Optional path prefix for … |
|  |  | For external catalogs | Fully qualified name of the table in the external catalog (for example, …). |
|  |  |  | Optional catalog integration used to sync Snowflake-managed metadata back to an external catalog. |
|  |  |  | Stage path starting with @. |
|  |  |  | One of … |
Partition size (partitionSize)¶
Controls how large each partition should be during extraction. You can use a string or an object.
| Form | Description |
|---|---|
| "auto" (string) | The system chooses partition sizes from the source platform, extraction strategy, and table size. Auto mode uses larger partitions for Redshift UNLOAD (S3-friendly large files) and smaller partitions for ODBC-based extraction (SQL Server, Redshift regular), where data flows through the Worker. For very large tables (100+ GB), the maximum number of partitions can increase to allow more parallelism. |
| Object with targetSizeMb | Each partition targets about targetSizeMb megabytes of data. |
| Object with maxRowsPerPartition | Each partition contains at most maxRowsPerPartition rows. |
When you use the object form, specify only one of targetSizeMb or maxRowsPerPartition.
auto (default):
Target size in MB:
Maximum rows per partition:
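The three forms above can be sketched as follows (the numeric values are illustrative):

```json
{ "partitionSize": "auto" }

{ "partitionSize": { "targetSizeMb": 250 } }

{ "partitionSize": { "maxRowsPerPartition": 1000000 } }
```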
ColumnTypeMapping model¶
A nested object used within TableConfiguration to specify type mappings for a column.
| Property | Type | Required | Description |
|---|---|---|---|
|  | string |  | The name of the type in the source system. |
|  | string |  | The name of the target type in Snowflake. |
ColumnNameMapping model¶
A nested object used within TableConfiguration to specify column name mappings.
| Property | Type | Required | Description |
|---|---|---|---|
|  | string |  | The name of the column in the source system. |
|  | string |  | The name of the target column in Snowflake. |
ExtractionStrategy model¶
Configures the method for data extraction.
| Field | Type | Required | Description |
|---|---|---|---|
|  | string |  | The extraction method. |
|  | string | UNLOAD only | The name of the Snowflake external stage to use when the unload method is selected. |
Extraction: regular (Default)
Extraction: unload (Redshift only)
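Illustrative sketches of the two methods follow; the field names name and stageName are assumptions rather than the documented schema:

```json
{ "extractionStrategy": { "name": "regular" } }

{ "extractionStrategy": { "name": "unload", "stageName": "MY_REDSHIFT_STAGE" } }
```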
SynchronizationStrategy model¶
Configures the approach for incremental data syncing on subsequent runs.
| Field | Type | Required | Description |
|---|---|---|---|
|  | string |  | The synchronization method. |
|  | string |  | Column name to track. Must be monotonically increasing. |
|  |  |  | If … |
Strategy: none (Default)
Performs a full extraction of all partition data on every run. No synchronization metadata is stored.
Use when data is small, changes are unpredictable, or guaranteed consistency is needed.
Strategy: checksum
Computes a hash of all column values for each partition on the source. Only changed partitions are cleared and re-extracted in the target.
Use when you need to detect any change in a partition but lack a reliable monotonic column (for example, dimension tables). Note that this requires a checksum computation on the source for every partition on every run.
Strategy: watermark
Tracks a monotonic column (timestamp, ID, or version) to sync only rows where the watermark value is greater than the maximum observed in the previous sync.
Use when your table has a reliable monotonic column that increases on insert/update (for example, fact tables or event logs).
Note
Watermark alone can’t currently track deletions. Support for this will be added in the future.
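Illustrative sketches of the three strategies described above (the field names name and columnName are assumptions; see the model table for the documented schema):

```json
{ "synchronizationStrategy": { "name": "none" } }

{ "synchronizationStrategy": { "name": "checksum" } }

{ "synchronizationStrategy": { "name": "watermark", "columnName": "UPDATED_AT" } }
```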
Example workflow: Redshift UNLOAD with Iceberg targets¶
The following workflow excerpt combines Redshift UNLOAD with Iceberg table targets, including Snowflake-managed Iceberg defaults and a per-table external catalog override:
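A rough sketch of such a configuration follows. Apart from tableType, the key names and nesting here are assumptions and should be checked against the configuration reference above:

```json
{
  "defaultTableConfiguration": {
    "extractionStrategy": { "name": "unload", "stageName": "MY_S3_STAGE" },
    "target": { "tableType": "iceberg" }
  },
  "tables": [
    {
      "source": { "tableName": "LINEITEM" },
      "target": { "tableName": "LINEITEM" }
    }
  ]
}
```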
Affinity¶
By specifying an affinity for a Workflow, you are indicating that you want specific workers to help with the execution of that Workflow. This can be particularly useful in cases in which you want to have some workers extract from one source and other workers extract from a different source. The rules for matching workers with tasks are:
A task without affinity will be picked up by any worker, independently of the worker’s affinity.
A worker without affinity will pick up any task, independently of the task’s affinity.
A task with a given affinity will not be picked up by a worker with different affinity.
Affinity is simply a string; its format is defined by the user. For example, all of these are valid: sql-server, DEV_SERVER, my_custom_server, ::blue::.
Worker configuration¶
This file configures the behavior and connections for the Workers (data_exchange_agent). You must set selected_task_source to "snowflake_stored_procedure" as shown below.
| Section | Property | Type | Description |
|---|---|---|---|
| Top level | selected_task_source | string | Required. Must be "snowflake_stored_procedure". |
|  |  | integer | The maximum number of tasks the Worker will process in parallel using threads. |
|  |  | integer | The interval in seconds between attempts to fetch new tasks from the Orchestrator. |
|  |  | string | A user-defined affinity for the worker. |
| [application] | snowflake_database_for_metadata | string | Optional. Database where the Orchestrator created the task metadata objects (default SNOWCONVERT_AI). |
| [application] | snowflake_schema_for_data_migration_metadata | string | Optional. Schema for data migration task metadata (default DATA_MIGRATION). |
|  | N/A |  | Configuration for source systems. Workers typically require an ODBC driver to connect to the source system. |
|  |  | string | The name of the connection entry in the … |
An example configuration file looks like this:
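A sketch of such a file follows. Only selected_task_source and the [application] metadata keys are documented above; the affinity key name is an assumption:

```toml
selected_task_source = "snowflake_stored_procedure"

[application]
snowflake_database_for_metadata = "SNOWCONVERT_AI"
snowflake_schema_for_data_migration_metadata = "DATA_MIGRATION"
affinity = "sql-server"
```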
Note
Only one source connection is needed.
Source connection configuration examples¶
The following examples show the three main source connection configurations:
1. SQL Server (Standard Authentication)
2. Amazon Redshift (IAM Authentication)
3. Amazon Redshift (Standard Authentication)
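As an illustrative sketch only (the section and key names below are assumptions, not the documented schema; see the Worker configuration reference for exact names), a SQL Server source connection might look like:

```toml
# Hypothetical section/key names; the ODBC connection string format is standard
[sources.sqlserver]
connection_string = "Driver={ODBC Driver 18 for SQL Server};Server=myserver,1433;Database=sales_db;UID=migration_user;PWD=<password>"
```

The Redshift variants differ mainly in the driver name and in whether IAM or standard credentials are supplied in the connection string.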
Platform-specific Details¶
Migrate Amazon Redshift data¶
To use the UNLOAD strategy for extracting Amazon Redshift data, you must set up several resources. This strategy lets data flow directly from Amazon Redshift into an S3 bucket, and Snowflake executes COPY INTO operations directly from there (through an external stage mapped to that S3 bucket). This is faster than having the Workers download the data and then upload it to a Snowflake stage.
Create a stage integration to S3¶
If you don’t have an existing stage configured, you need to create a Snowflake external stage that integrates with your S3 bucket. You can create the stage using the following SQL command in Snowflake:
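A minimal example using a storage integration (the recommended method):

```sql
CREATE STAGE <stage_name>
  URL = 's3://<your_bucket_name>/<path>/'
  STORAGE_INTEGRATION = <your_storage_integration>;
```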
Alternatively, if you’re using AWS credentials directly:
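With direct AWS credentials, the stage definition supplies them inline instead of referencing a storage integration:

```sql
CREATE STAGE <stage_name>
  URL = 's3://<your_bucket_name>/<path>/'
  CREDENTIALS = (AWS_KEY_ID = '<your_aws_key_id>' AWS_SECRET_KEY = '<your_aws_secret_key>');
```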
Replace the placeholders:
<stage_name>: Your desired stage name (for example, my_redshift_stage)
<your_bucket_name>: Your S3 bucket name
<path>: Optional path within the bucket
<your_storage_integration>: Your Snowflake storage integration name (recommended method)
<your_aws_key_id> and <your_aws_secret_key>: Your AWS IAM user credentials (if not using storage integration)
Note
Using a Snowflake storage integration is the recommended approach for better security and credential management. For more information about creating storage integrations, see the Snowflake documentation.
Verify stage integration¶
After setting up your stage, verify that the integration is working correctly before proceeding with data migration. You can verify the stage integration by running the following command in Snowflake:
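For example:

```sql
LIST @<stage_name>;
```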
This command should execute successfully without errors. If the stage is newly created and empty, it may return no results, which is expected.
To perform a more thorough verification, you can test the stage by uploading a test file:
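Because PUT applies to internal stages, a test file for an external stage is typically uploaded to the underlying bucket directly (for example, with the AWS CLI) and then listed from Snowflake:

```sql
-- After uploading a small file to s3://<your_bucket_name>/<path>/
-- (for example with `aws s3 cp`), confirm Snowflake can see it:
LIST @<stage_name>;
```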
If the commands execute successfully and you can see the uploaded file, your stage integration is configured correctly.
Note
You can also verify stage permissions by checking the stage description:
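For example:

```sql
DESCRIBE STAGE <stage_name>;
```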
This displays the stage configuration, including the URL, credentials type, and file format settings.