SnowConvert AI: Data migration [Preview]

The Data Migration feature of SnowConvert provides a fault-tolerant, scalable solution for moving data from external sources into Snowflake. This feature is specifically designed for cases where you are moving data from a system you plan to decommission. For replication purposes, other solutions are available that might better fit your use case.

Architecture overview

The Data Migration feature uses two main components: an Orchestrator and one or more Workers.

  • The Orchestrator connects to the Snowflake account. It requires privileges to create and operate the SNOWCONVERT_AI database, where metadata is stored.

  • One or more Workers connect to both the source system and the Snowflake account. Workers read data from the source system and upload it to a Snowflake stage. Workers pick up tasks created by the Orchestrator and process them in parallel.

  • Files uploaded to the Snowflake stage are copied into the target tables using a COPY INTO statement submitted and monitored by the Orchestrator.
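Conceptually, the load step resembles a statement like the following. The actual statement, stage name, path, and file format are generated and managed by the Orchestrator, so everything below is a placeholder sketch:

```sql
COPY INTO TARGET_DB.PUBLIC.CUSTOMERS
  FROM @SNOWCONVERT_AI_STAGE/customers/
  FILE_FORMAT = (TYPE = 'PARQUET')
  MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;
```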

Deployment options

The Orchestrator and Workers can be deployed in multiple ways:

  • Both on Snowpark Container Services (in the Snowflake account).

  • Both in the customer’s environment, including custom hardware, virtual machines, or containers.

  • Orchestrator on Snowpark Container Services and Workers in the customer’s environment, or the other way around.

The following requirements apply to the environment where the Orchestrator and Workers run:

  • The Orchestrator and Workers are Python packages, so Python 3.11 or higher must be installed.

  • Workers typically require an ODBC driver to connect to the source system.

  • The Orchestrator must be able to connect to the Snowflake account using a role that has privileges to create the SNOWCONVERT_AI database and create schemas and objects within it.

Prerequisites

Before you use Data Migration, make sure the following are in place:

  • Python environment: Python 3.11 or higher, with the snowflake-data-migration-orchestrator and snowflake-data-exchange-agent packages installed (see Installation).

  • Snowflake access: Connections for the Orchestrator and Workers in your Snowflake config.toml or connections.toml, using a role that can create the SNOWCONVERT_AI database and its objects. The first time the Orchestrator starts, it creates that database and related resources if they do not exist yet. On later runs, use a role that can administer SNOWCONVERT_AI and its objects; sticking with the same role you used for the initial creation is the simplest way to avoid permission issues.

  • Source connectivity: For typical source databases, an ODBC driver on the machine where Workers run. Programmatic Access Tokens (PATs) are recommended for Snowflake connections; see Connecting to Snowflake with a PAT.

  • Hybrid Tables: Hybrid Tables must be enabled and available in your Snowflake account and region for this feature. Review Hybrid tables and Hybrid tables limitations so you understand relevant platform requirements.

  • Snowpark Container Services (optional): If you deploy the Orchestrator or Workers on Snowflake compute, your account needs SPCS. See the Snowpark Container Services overview. Running both components outside Snowflake does not require SPCS.

Setup

Installation

Installation uses two Python packages (Python 3.11 or higher is required):

  • snowflake-data-migration-orchestrator: For the Orchestrator.

  • snowflake-data-exchange-agent: For Workers.

Alternatively, you can install the SnowConvert AI (SCAI) CLI and use its commands directly. This requires Snowpark Container Services (SPCS) to be enabled on the Snowflake account.

Usage

To migrate data using this solution, complete the following high-level steps:

  1. Start the Orchestrator.

  2. Start the Workers.

  3. Create a Data Migration Workflow.

  4. Monitor the Data Migration Workflow until completion.

A Data Migration Workflow is essentially an action or goal for the system to complete, such as migrating a specific set of tables with a given configuration. You can submit multiple workflows simultaneously and monitor them. The Orchestrator breaks Data Migration Workflows into smaller tasks, which typically involves splitting a table into partitions before extracting its data and loading it to Snowflake.
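As an illustrative sketch (not the Orchestrator's actual algorithm), splitting a table into fixed-size row-range partitions could look like this:

```python
def split_into_partitions(total_rows: int, max_rows_per_partition: int) -> list[tuple[int, int]]:
    """Split a table of total_rows into contiguous [start, end) row ranges,
    each holding at most max_rows_per_partition rows."""
    partitions = []
    start = 0
    while start < total_rows:
        end = min(start + max_rows_per_partition, total_rows)
        partitions.append((start, end))
        start = end
    return partitions
```

Each resulting range would then become one extraction task that a Worker can pick up independently.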

Using Python packages

Starting the Orchestrator

After installing the Orchestrator, start it by running the following command:

python -m data_migration_orchestrator

Before running, make sure the SNOWFLAKE_CONNECTION_NAME environment variable is set to a value that matches one of the connection names in your Snowflake config.toml or connections.toml. That is the name of the connection used to connect to the target Snowflake account.

By default, workflow and task metadata objects are created under SNOWCONVERT_AI.DATA_MIGRATION. To use a different database or schema for that metadata, set the environment variables CUSTOM_SNOWFLAKE_DATABASE_FOR_METADATA (default SNOWCONVERT_AI) and CUSTOM_SNOWFLAKE_SCHEMA_FOR_DATA_MIGRATION_METADATA (default DATA_MIGRATION) before starting the Orchestrator. If you override these values, set the same database and schema in each Worker’s configuration using snowflake_database_for_metadata and snowflake_schema_for_data_migration_metadata under [application] (see Worker configuration).
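For example, the environment could be prepared like this before starting the Orchestrator (the connection name and metadata object names below are hypothetical; omit the CUSTOM_* overrides to use the defaults):

```shell
# Name of a connection defined in your Snowflake config.toml / connections.toml.
export SNOWFLAKE_CONNECTION_NAME=my_connection

# Optional overrides for the metadata location (hypothetical values).
export CUSTOM_SNOWFLAKE_DATABASE_FOR_METADATA=MIGRATION_META
export CUSTOM_SNOWFLAKE_SCHEMA_FOR_DATA_MIGRATION_METADATA=DM_METADATA

# Then start the Orchestrator:
# python -m data_migration_orchestrator
```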

The Orchestrator runs until you stop it. Data Migration Workflows require an active Orchestrator to complete. However, the Orchestrator can be safely stopped at any point and restarted later; ongoing Data Migration Workflows resume from where they left off.

Starting the Workers

After installing the Worker, start it by running the following command:

python -m data_exchange_agent -c <configuration-file-path>

The path to the configuration file can be omitted. In that case, the Worker looks for a file called configuration.toml in your current directory. For the Worker configuration specification, see Worker configuration.

Workers run until you stop them. Data Migration Workflows require at least one active Worker to complete. However, Workers can be safely stopped at any point and restarted later; ongoing Data Migration Workflows resume from where they left off.

Creating a Data Migration Workflow

After installing the Orchestrator, create workflows by running the following command:

python -m data_migration_orchestrator create-data-migration-workflow <workflow-config-file-path> \
  --name <workflow-name> \
  --connection-name <connection-name> \
  --source-platform <source-platform>

Keep the following in mind:

  • The Workflow Configuration specification can be found in Workflow configuration reference.

  • The workflow name must contain only alphanumeric characters and cannot start with a digit.

  • You must specify the name of the Snowflake connection you want to use, as it appears in your config.toml or connections.toml file.

  • Supported source platforms are sqlserver and redshift.
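The naming rule can be sketched as a simple check. This is an illustration of the documented constraint, not code from the tool:

```python
import re

def is_valid_workflow_name(name: str) -> bool:
    """Alphanumeric characters only; the first character cannot be a digit."""
    return re.fullmatch(r"[A-Za-z][A-Za-z0-9]*", name) is not None
```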

Monitoring a Data Migration Workflow

Each workflow passes through the following stages during its lifecycle:

  • Pending: No tasks have been created for this workflow yet.

  • Executing: Tasks have been created for this workflow and there are still tasks that have not reached a terminal state (COMPLETED or FAILED).

  • Completed: All tasks have reached a terminal state (COMPLETED or FAILED).

In the SNOWCONVERT_AI.DATA_MIGRATION schema, the following tables and views can be queried to understand the status of one or more workflows:

  • WORKFLOW: Contains one row per workflow, including start/end time, status, and configuration.

  • TABLE_PROGRESS_WITH_EXAMPLE_ERROR: Contains one row per table being migrated as part of a workflow. Includes information about how many partitions are in each stage (extraction, loading, completed, or failed), as well as related errors. Can be filtered by WORKFLOW_ID.

  • DATA_MIGRATION_ERROR: For each partition of a table being migrated, contains the first known error that affected the migration of that partition. Can be filtered by WORKFLOW_ID.

In the same schema, the DATA_MIGRATION_DASHBOARD Streamlit dashboard can be used to monitor workflows. This dashboard presents data from those tables and views.
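As a sketch, workflow status could be checked with queries like the following. The <workflow-id> value is a placeholder; only the table names and the WORKFLOW_ID filter column are documented above, so adjust as needed:

```sql
-- All workflows and their status.
SELECT * FROM SNOWCONVERT_AI.DATA_MIGRATION.WORKFLOW;

-- Per-table progress for a single workflow.
SELECT *
FROM SNOWCONVERT_AI.DATA_MIGRATION.TABLE_PROGRESS_WITH_EXAMPLE_ERROR
WHERE WORKFLOW_ID = '<workflow-id>';

-- First known error per partition for that workflow.
SELECT *
FROM SNOWCONVERT_AI.DATA_MIGRATION.DATA_MIGRATION_ERROR
WHERE WORKFLOW_ID = '<workflow-id>';
```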

Using SCAI CLI

If you want to get access to Data Migration through the SCAI CLI, please contact us at snowconvert-support@snowflake.com.

Redshift UNLOAD

For Redshift, the unload extraction strategy is recommended. It works as follows:

  • Large query results are written directly to an S3 bucket instead of being downloaded to the machine running the Worker.

  • On the Snowflake side, an external stage is configured to reference the corresponding S3 bucket, so that COPY INTO statements can be executed directly from that stage.

For configuration details, see ExtractionStrategy model.

Incremental synchronization

You can migrate tables and then re-migrate them in the future, moving only the data that has changed. For more details, see SynchronizationStrategy model.

Considerations and recommendations

Connecting to Snowflake with a PAT

It is recommended to use Programmatic Access Tokens (PATs) for the connections used by the Orchestrator and Workers. This avoids repeated interactive authentication through the browser or an authenticator app. You will need to establish a Network Policy or temporarily bypass the Network Policy requirement (this can be done from Snowsight).
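For example, a connections.toml entry can supply the PAT where a password would normally go (the names below are placeholders; the exact fields depend on your account setup):

```toml
[my_connection]
account = "<account-identifier>"
user = "<user>"
password = "<programmatic-access-token>"  # the PAT is used in place of a password
role = "<role>"
warehouse = "<warehouse>"
```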

Running Orchestrator and Workers on SPCS

To leverage Snowflake compute for these tasks:

  1. Prepare Docker images that use the Python modules with the appropriate configuration.

  2. Push those Docker images to an Image Repository in Snowflake.

  3. Execute the Orchestrator and/or Worker images using Snowpark Container Services.

Keep the following in mind:

  • It is recommended to execute them as Services, not Jobs.

  • It is possible to run only one component (Orchestrator or Workers) in SPCS and the other on a different platform.

  • It is a good practice to monitor the SPCS service and suspend it when it is not being used.

  • Depending on the network configuration of the source system, you may need to configure an External Access Integration so that these services can connect to your source system.

Initial testing

It is recommended to deploy the DDL for the tables you want to migrate before starting data migration. This ensures the target type matches the behavior you want to see in the table and its related views and procedures. You can convert DDL from your source dialect into Snowflake SQL using the code conversion capabilities of SnowConvert AI and/or Cortex Code.

Note

If you don’t deploy the DDL for the tables before starting data migration, column types are inferred from the data, which might not be as accurate as you require.

For an early test run, use a separate workflow configuration whose tables array lists only the table or small set of tables you want to validate. On each of those entries, set whereClauseCriteria to an SQL-like predicate (as you would in a WHERE clause) so only a subset of rows is migrated, for example a bounded primary-key range or a narrow date range in the source dialect. You can also set a small partitionSize (for example maxRowsPerPartition) to keep partitions tiny during the test. After you confirm connectivity, performance, and results, create your full workflow: remove or relax whereClauseCriteria and use "auto" or your production partitionSize settings.
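A minimal test-run configuration could look like the following (the database, schema, table, predicate, and partition-size values are hypothetical):

```json
{
  "tables": [
    {
      "source": {
        "databaseName": "analytics_db",
        "schemaName": "public",
        "tableName": "customers"
      },
      "target": {
        "databaseName": "TARGET_DB",
        "schemaName": "PUBLIC",
        "tableName": "customers"
      },
      "whereClauseCriteria": "customer_id BETWEEN 1 AND 10000",
      "partitionSize": { "maxRowsPerPartition": 1000 }
    }
  ]
}
```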

Managing Workers

The time it takes to complete a workflow depends on many variables, but the number of Workers (and threads per Worker) has the greatest impact, as it determines how many extraction tasks can be executed in parallel. Consider the following:

  • There is no benefit to running two Workers on the same machine. If you want more parallelism on a single machine, increase the thread count instead.

  • Network bandwidth greatly affects Worker speed and is shared between threads of a Worker.

  • Even with many Workers and threads running in parallel, the source system might not have enough resources to handle the load.

  • Keep a low Worker count to avoid overloading your source system.

  • Consider stopping some or all Workers when the source system is already under heavy load from other operations.

Configuration reference

Workflow configuration reference

The Data Migration Workflow configuration file is a JSON object. The following sections describe its structure and properties.

Note

Names that require quoting (or brackets) must be manually quoted as they would normally be in JSON. For example: tableName: "\"MyCaseSensitiveTable\"".

Top-level object

  • tables (TableConfiguration[]): An array of table-specific configurations defining which tables to migrate and how.

  • defaultTableConfiguration (TableConfiguration): Shared settings that are inherited by all tables in the tables array. Table-specific values override these defaults.

  • affinity (String): An affinity group string. Ensures that only Orchestrator and Worker instances with a matching affinity process this workflow.

When defaultTableConfiguration is present, each object in tables is merged with those defaults: shared fields apply to every table unless the same field is set again on a specific table entry, in which case the table-level value wins.
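The merge can be pictured as a recursive dictionary merge in which table-level values win. This is a sketch of the documented behavior, not the Orchestrator's actual code:

```python
def merge_table_config(defaults: dict, table: dict) -> dict:
    """Merge a table entry over defaultTableConfiguration.
    Nested objects (such as target.icebergConfig) merge key by key;
    scalar and array values from the table entry replace the defaults."""
    merged = dict(defaults)
    for key, value in table.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_table_config(merged[key], value)
        else:
            merged[key] = value
    return merged
```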

TableConfiguration model

Defines the settings for migrating a single table.

  • source (SourceTargetIdentifier): Identifies the source table.

  • target (SourceTargetIdentifier): Identifies the target table in Snowflake.

  • columnNamesToPartitionBy (String[]): A list of columns used to partition data during the extraction phase.

  • extraction (ExtractionStrategy): Settings to configure how data is extracted from the source database.

  • synchronization (SynchronizationStrategy): Settings for incremental synchronization.

  • columnTypeMappings (ColumnTypeMapping): Type conversions applied during migration.

  • columnNameMappings (ColumnNameMapping): Column renaming mappings.

  • primaryKeyColumns (String[]): Primary key columns for the source table. Required for trackModifications under the watermark synchronization strategy.

  • partitionSize (PartitionSize): Configures the target size of each partition during extraction. Defaults to "auto". See Partition size (partitionSize).

  • whereClauseCriteria (String): An SQL-like filter to select a subset of rows for migration (for example, "is_deleted = 0").

SourceTargetIdentifier model

A nested object used within TableConfiguration to specify a database object. For source, use only the properties in the following table. For target, you can also set the optional properties in Additional target properties.

  • databaseName (String): The name of the source or target database.

  • schemaName (String): The name of the schema containing the table.

  • tableName (String): The name of the table to be migrated.

Additional target properties

The following optional fields apply only to the target object (not to source).

  • tableType (String): "native" for a standard Snowflake table (default if omitted) or "iceberg" for an Apache Iceberg™ table.

  • icebergConfig (Object, required when tableType is "iceberg"): Merged with defaultTableConfiguration.target.icebergConfig if present; table-level keys override defaults. See Iceberg configuration (target.icebergConfig).

Iceberg configuration (target.icebergConfig)

Used when target.tableType is "iceberg". Account setup (external volumes, catalog integrations, stages, and privileges) follows Snowflake’s Iceberg documentation; see Apache Iceberg™ tables, Create an Iceberg table, and Configure an external volume.

  • catalog (String): Default SNOWFLAKE for Snowflake-managed Iceberg. Use a catalog integration name for externally cataloged tables (for example AWS Glue).

  • externalVolume (String, required when catalog is SNOWFLAKE): Snowflake external volume for Iceberg data and metadata.

  • baseLocationPrefix (String): Optional path prefix for BASE_LOCATION when using Snowflake-managed Iceberg (catalog SNOWFLAKE).

  • catalogTableName (String, required for external catalogs): Fully qualified name of the table in the external catalog (for example glue_db.my_table).

  • catalogSync (String): Optional catalog integration used to sync Snowflake-managed metadata back to an external catalog.

  • sourceDataStage (String): Stage path starting with @ pointing at existing Parquet files; used for copy_files-style loads with Snowflake-managed Iceberg.

  • migrationStrategy (String): One of catalog_link, convert_to_managed, or copy_files. When omitted, the Orchestrator infers a strategy from catalog and sourceDataStage.

Partition size (partitionSize)

Controls how large each partition should be during extraction. You can use a string or an object.

  • "auto" (default): The system chooses partition sizes from the source platform, extraction strategy, and table size. Auto mode uses larger partitions for Redshift UNLOAD (S3-friendly large files) and smaller partitions for ODBC-based extraction (SQL Server, Redshift regular), where data flows through the Worker. For very large tables (100+ GB), the maximum number of partitions can increase to allow more parallelism.

  • { "targetSizeMb": N }: Each partition targets about N megabytes of data.

  • { "maxRowsPerPartition": N }: Each partition contains at most N rows, regardless of data size.

When you use the object form, specify only one of targetSizeMb or maxRowsPerPartition.

auto (default):

"partitionSize": "auto"

Target size in MB:

"partitionSize": { "targetSizeMb": 2048 }

Maximum rows per partition:

"partitionSize": { "maxRowsPerPartition": 500000 }

ColumnTypeMapping model

A nested object used within TableConfiguration to specify type mappings for a column.

  • sourceType (String): The name of the type in the source system.

  • targetType (String): The name of the target type in Snowflake.

ColumnNameMapping model

A nested object used within TableConfiguration to specify column name mappings.

  • sourceName (String): The name of the column in the source system.

  • targetName (String): The name of the target column in Snowflake.

ExtractionStrategy model

Configures the method for data extraction.

  • strategy (String, "regular" or "unload"): The extraction method. "regular" is the default. "unload" is available for Redshift sources only.

  • externalStage (String, required when strategy is "unload"): The name of the Snowflake external stage to use when strategy is "unload".

Extraction: regular (Default)

"extraction": {
  "strategy": "regular"
}

Extraction: unload (Redshift only)

"extraction": {
  "strategy": "unload",
  "externalStage": "MY_DB.MY_SCHEMA.S3_EXTERNAL_STAGE"
}

SynchronizationStrategy model

Configures the approach for incremental data syncing on subsequent runs.

Field

Type

Required

Description

strategy

String ("none", "checksum", "watermark")

The synchronization method.

watermarkColumn

String

watermark only

Column name to track. Must be monotonically increasing.

trackModifications

Boolean

If true, the system uses the primary key to identify and deduplicate modified rows. Requires primaryKeyColumns to be specified in TableConfiguration.

Strategy: none (Default)

Performs a full extraction of all partition data on every run. No synchronization metadata is stored.

"synchronization": {
  "strategy": "none"
}

Use when data is small, changes are unpredictable, or guaranteed consistency is needed.

Strategy: checksum

Computes a hash of all column values for each partition on the source. Only changed partitions are cleared and re-extracted in the target.

"synchronization": {
  "strategy": "checksum"
}

Use when you need to detect any change in a partition but lack a reliable monotonic column (for example, dimension tables). Note that this requires a checksum computation on the source for every partition on every run.
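Conceptually, a partition-level checksum hashes every column value of every row; if the digest differs from the one recorded on a previous run, the partition is re-extracted. The following is a sketch of that idea, not the actual hash the tool computes:

```python
import hashlib

def partition_checksum(rows: list[tuple]) -> str:
    """Return a digest that changes whenever any column value in the partition changes."""
    digest = hashlib.sha256()
    for row in rows:
        for value in row:
            digest.update(repr(value).encode("utf-8"))
            digest.update(b"\x1f")  # unit separator between columns
        digest.update(b"\x1e")  # record separator between rows
    return digest.hexdigest()
```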

Strategy: watermark

Tracks a monotonic column (timestamp, ID, or version) to sync only rows where the watermark value is greater than the maximum observed in the previous sync.

"synchronization": {
  "strategy": "watermark",
  "watermarkColumn": "UPDATED_AT"
}

Use when your table has a reliable monotonic column that increases on insert/update (for example, fact tables or event logs).

Note

Watermark alone can’t currently track deletions. Support for this will be added in the future.
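The watermark logic amounts to filtering on the last value seen and remembering the new maximum for the next run. This is an illustrative sketch; the names are hypothetical:

```python
def rows_to_sync(rows: list[tuple], watermark_index: int, last_watermark):
    """Return the rows whose watermark column exceeds the previous maximum,
    plus the new maximum to store for the next sync."""
    changed = [row for row in rows if row[watermark_index] > last_watermark]
    new_watermark = max((row[watermark_index] for row in changed), default=last_watermark)
    return changed, new_watermark
```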

Example workflow: Redshift UNLOAD with Iceberg targets

The following workflow excerpt combines Redshift UNLOAD with Iceberg table targets, including Snowflake-managed Iceberg defaults and a per-table external catalog override:

{
  "defaultTableConfiguration": {
    "source": {
      "schemaName": "public",
      "databaseName": "analytics_db"
    },
    "target": {
      "schemaName": "public",
      "databaseName": "TARGET_DB",
      "tableType": "iceberg",
      "icebergConfig": {
        "catalog": "SNOWFLAKE",
        "externalVolume": "my_iceberg_ext_vol",
        "baseLocationPrefix": "migrations/redshift",
        "sourceDataStage": "@TARGET_DB.PUBLIC.ICEBERG_SOURCE_STAGE"
      }
    },
    "extraction": {
      "strategy": "unload",
      "externalStage": "TARGET_DB.PUBLIC.S3_EXTERNAL_STAGE"
    },
    "partitionSize": "auto"
  },
  "tables": [
    {
      "source": { "tableName": "customers" },
      "target": { "tableName": "customers" },
      "columnNamesToPartitionBy": ["customer_id"]
    },
    {
      "source": { "tableName": "events" },
      "target": {
        "tableName": "events",
        "tableType": "iceberg",
        "icebergConfig": {
          "catalog": "my_glue_catalog_integration",
          "externalVolume": "my_iceberg_ext_vol",
          "catalogTableName": "glue_db.events"
        }
      },
      "columnNamesToPartitionBy": ["event_id"]
    }
  ]
}

Affinity

By specifying an affinity for a Workflow, you indicate that only specific Workers should help execute it. This is particularly useful when you want some Workers to extract from one source and other Workers to extract from a different source. The rules for matching Workers with tasks are:

  • A task without affinity will be picked up by any Worker, regardless of the Worker’s affinity.

  • A Worker without affinity will pick up any task, regardless of the task’s affinity.

  • A task with a given affinity will not be picked up by a Worker with a different affinity.

An affinity is simply a String whose format is defined by the user. For example, all of these are valid: sql-server, DEV_SERVER, my_custom_server, ::blue::.

Worker configuration

This file configures the behavior and connections for the Workers (data_exchange_agent). You must set selected_task_source to "snowflake_stored_procedure" as shown below.

  • selected_task_source (String, top level): Required. Must be "snowflake_stored_procedure".

  • max_parallel_tasks (Integer, [application]): The maximum number of tasks the Worker will process in parallel using threads.

  • task_fetch_interval (Integer, [application]): The interval in seconds between attempts to fetch new tasks from the Orchestrator.

  • affinity (String, [application]): A user-defined affinity for the Worker.

  • snowflake_database_for_metadata (String, [application]): Optional. Database where the Orchestrator created the task metadata objects (default SNOWCONVERT_AI). Must match the Orchestrator’s CUSTOM_SNOWFLAKE_DATABASE_FOR_METADATA if you override it.

  • snowflake_schema_for_data_migration_metadata (String, [application]): Optional. Schema for data migration task metadata (default DATA_MIGRATION). Must match the Orchestrator’s CUSTOM_SNOWFLAKE_SCHEMA_FOR_DATA_MIGRATION_METADATA if you override it.

  • [connections.source.*] (Object): Configuration for source systems. Workers typically require an ODBC driver to connect to the source system.

  • connection_name (String, [connections.target.snowflake_connection_name]): The name of the connection entry in the ~/.snowflake/config.toml file to use.

An example configuration file looks like this:

selected_task_source = "snowflake_stored_procedure"

[application]
max_parallel_tasks = 4
task_fetch_interval = 30
# Optional: only if the Orchestrator uses CUSTOM_SNOWFLAKE_* overrides for metadata location
# snowflake_database_for_metadata = "SNOWCONVERT_AI"
# snowflake_schema_for_data_migration_metadata = "DATA_MIGRATION"

# SQL Server connection (standard authentication)
[connections.source.sqlserver]
username = "username"
password = "password"
database = "database_name"
host = "127.0.0.1"
port = 1433

# Amazon Redshift connection (IAM authentication for provisioned cluster)
[connections.source.redshift]
username = "demo-user"
database = "snowconvert_demo"
auth_method = "iam-provisioned-cluster"
cluster_id = "migrations-aws"
region = "us-west-2"
access_key_id = "your-access-key-id"
secret_access_key = "your-secret-access-key"

# Amazon Redshift connection (standard authentication)
# [connections.source.redshift]
# username = "myuser"
# password = "mypassword"
# database = "mydatabase"
# host = "my-cluster.abcdef123456.us-west-2.redshift.amazonaws.com"
# port = 5439
# auth_method = "standard"

# Snowflake target connection
[connections.target.snowflake_connection_name]
connection_name = "connection_name"

Note

Only one source connection is needed.

Source connection configuration examples

The following examples show the three main source connection configurations:

1. SQL Server (Standard Authentication)

[connections.source.sqlserver]
username = "username"
password = "password"
database = "database_name"
host = "127.0.0.1"
port = 1433

2. Amazon Redshift (IAM Authentication)

[connections.source.redshift-iam]
username = "demo-user"
database = "demo_db"
auth_method = "iam-provisioned-cluster"
cluster_id = "my-aws-cluster"
region = "us-west-2"
access_key_id = "your-access-key-id"
secret_access_key = "your-secret-access-key"
# Optional fields for UNLOAD strategy
# unload_s3_bucket = "my-migrations-bucket"
# unload_iam_role_arn = "arn:aws:iam::123456789012:role/MyRole"

3. Amazon Redshift (Standard Authentication)

[connections.source.redshift-standard]
username = "myuser"
password = "mypassword"
database = "mydatabase"
host = "my-cluster.abcdef123456.us-west-2.redshift.amazonaws.com"
port = 5439
auth_method = "standard"
# Optional fields for UNLOAD strategy
# unload_s3_bucket = "my-migrations-bucket"
# unload_iam_role_arn = "arn:aws:iam::123456789012:role/MyRole"

Platform-specific details

Migrate Amazon Redshift data

To use the UNLOAD strategy for extracting Amazon Redshift data, you must set up several resources. This strategy lets data flow directly from Amazon Redshift into an S3 bucket and lets Snowflake execute COPY INTO operations directly from there (through an external stage mapped to that S3 bucket). This is faster than having the Workers download the data and then upload it to a Snowflake stage.

Create a stage integration to S3

If you don’t have an existing stage configured, you need to create a Snowflake external stage that integrates with your S3 bucket. You can create the stage using the following SQL command in Snowflake:

CREATE OR REPLACE STAGE <stage_name>
  URL = 's3://<your_bucket_name>/<path>/'
  STORAGE_INTEGRATION = <your_storage_integration>
  FILE_FORMAT = (TYPE = 'PARQUET');

Alternatively, if you’re using AWS credentials directly:

CREATE OR REPLACE STAGE <stage_name>
  URL = 's3://<your_bucket_name>/<path>/'
  CREDENTIALS = (AWS_KEY_ID = '<your_aws_key_id>' AWS_SECRET_KEY = '<your_aws_secret_key>')
  FILE_FORMAT = (TYPE = 'PARQUET');

Replace the placeholders:

  • <stage_name>: Your desired stage name (for example, my_redshift_stage)

  • <your_bucket_name>: Your S3 bucket name

  • <path>: Optional path within the bucket

  • <your_storage_integration>: Your Snowflake storage integration name (recommended method)

  • <your_aws_key_id> and <your_aws_secret_key>: Your AWS IAM user credentials (if not using storage integration)

Note

Using a Snowflake storage integration is the recommended approach for better security and credential management. For more information about creating storage integrations, see the Snowflake documentation.

Verify stage integration

After setting up your stage, verify that the integration is working correctly before proceeding with data migration. You can verify the stage integration by running the following command in Snowflake:

LIST @<stage_name>;

This command should execute successfully without errors. If the stage is newly created and empty, it may return no results, which is expected.

To perform a more thorough verification, you can test the stage by uploading a test file:

PUT file:///<local_test_file_path> @<stage_name>;
LIST @<stage_name>;

If the commands execute successfully and you can see the uploaded file, your stage integration is configured correctly.

Note

You can also verify stage permissions by checking the stage description:

DESCRIBE STAGE <stage_name>;

This displays the stage configuration, including the URL, credentials type, and file format settings.