Teradata Commands Reference¶
Overview¶
This page provides comprehensive reference documentation for Teradata-specific commands in the Snowflake Data Validation CLI. For SQL Server commands, see SQL Server Commands Reference. For Amazon Redshift commands, see Redshift Commands Reference. For Snowflake-to-Snowflake commands, see Snowflake Commands Reference.
Command Structure¶
All Teradata commands follow this consistent structure:
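A sketch of the general invocation, assuming the CLI entry point is `sdv` (substitute the command name installed in your environment):

```shell
sdv teradata <command> [OPTIONS]
```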
Where <command> is one of:
run-validation - Run synchronous validation
run-async-validation - Run asynchronous validation
generate-validation-scripts - Generate validation scripts
get-configuration-files - Get configuration templates
auto-generated-configuration-file - Interactive config generation
row-partitioning-helper - Interactive row partitioning configuration
column-partitioning-helper - Interactive column partitioning configuration
Run Synchronous Validation¶
Validates data between Teradata and Snowflake in real time.
Syntax¶
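A hedged sketch of the invocation, assuming the `sdv` entry point; the option names are those documented below:

```shell
sdv teradata run-validation \
  --data-validation-config-file <path> \
  [--teradata-host <host>] \
  [--teradata-username <user>] \
  [--teradata-password <password>] \
  [--teradata-database <database>] \
  [--snowflake-connection-name <name>] \
  [--output-directory <path>] \
  [--log-level <level>]
```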
Options¶
--data-validation-config-file, -dvf (required)
Type: String (path)
Description: Path to YAML configuration file containing validation settings
Example:
--data-validation-config-file ./configs/teradata_validation.yaml
--teradata-host (optional)
Type: String
Description: Teradata server hostname (overrides config file)
Example:
--teradata-host teradata.company.com
--teradata-username (optional)
Type: String
Description: Teradata username (overrides config file)
Example:
--teradata-username my_user
--teradata-password (optional)
Type: String
Description: Teradata password (overrides config file)
Example:
--teradata-password my_password
--teradata-database (optional)
Type: String
Description: Teradata database name (overrides config file)
Example:
--teradata-database prod_db
--snowflake-connection-name (optional)
Type: String
Description: Snowflake connection name
Example:
--snowflake-connection-name prod_connection
--output-directory (optional)
Type: String (path)
Description: Directory for validation results
Example:
--output-directory ./validation_results
--log-level, -ll (optional)
Type: String
Valid Values: DEBUG, INFO, WARNING, ERROR, CRITICAL
Default: INFO
Description: Logging level for validation execution
Example:
--log-level DEBUG
Example Usage¶
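An illustrative run, assuming the `sdv` entry point; paths and the connection name are placeholders:

```shell
sdv teradata run-validation \
  --data-validation-config-file ./configs/teradata_validation.yaml \
  --snowflake-connection-name prod_connection \
  --output-directory ./validation_results \
  --log-level DEBUG
```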
Use Cases¶
Real-time validation during Teradata migration
Pre-cutover validation checks
Post-migration verification
Continuous validation in CI/CD pipelines
Testing with temporary credentials
Generate Validation Scripts¶
Generates SQL scripts for Teradata and Snowflake metadata extraction.
Syntax¶
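A hedged sketch of the invocation, assuming the `sdv` entry point; `config_file` is the positional argument documented below:

```shell
sdv teradata generate-validation-scripts <config_file> \
  [--teradata-host <host>] \
  [--teradata-username <user>] \
  [--teradata-password <password>] \
  [--teradata-database <database>] \
  [--output-directory <path>]
```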
Positional Arguments¶
config_file (required)
Type: String (path)
Description: Path to YAML configuration file
Example:
./configs/validation.yaml
Options¶
--teradata-host (optional)
Type: String
Description: Teradata server hostname (overrides config file)
Example:
--teradata-host teradata.company.com
--teradata-username (optional)
Type: String
Description: Teradata username (overrides config file)
Example:
--teradata-username script_generator
--teradata-password (optional)
Type: String
Description: Teradata password (overrides config file)
Example:
--teradata-password secure_password
--teradata-database (optional)
Type: String
Description: Teradata database name (overrides config file)
Example:
--teradata-database analytics_db
--output-directory (optional)
Type: String (path)
Description: Directory for generated scripts
Example:
--output-directory ./generated_scripts
Example Usage¶
Output¶
The command generates SQL scripts in the specified output directory:
Use Cases¶
Generating scripts for execution by DBAs
Compliance requirements for query review
Environments where direct CLI database access is restricted
Manual execution and validation workflows
Separating metadata extraction from validation
Run Asynchronous Validation¶
Performs validation using pre-generated metadata files without connecting to databases.
Syntax¶
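A hedged sketch of the invocation, assuming the `sdv` entry point:

```shell
sdv teradata run-async-validation <config_file> \
  [--output-directory <path>]
```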
Positional Arguments¶
config_file (required)
Type: String (path)
Description: Path to YAML configuration file
Example:
./configs/async_validation.yaml
Options¶
--output-directory (optional)
Type: String (path)
Description: Directory containing metadata files generated from scripts
Example:
--output-directory ./metadata_files
Example Usage¶
Prerequisites¶
Before running async validation:
Generate validation scripts using generate-validation-scripts
Execute the generated scripts on Teradata and Snowflake databases
Save results to CSV/metadata files
Ensure metadata files are available in the configured paths
Use Cases¶
Validating in environments with restricted database access
Separating metadata extraction from validation
Batch validation workflows
Scheduled validation jobs
When database connections are intermittent
Get Configuration Templates¶
Retrieves Teradata configuration templates.
Syntax¶
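A hedged sketch of the invocation, assuming the `sdv` entry point:

```shell
sdv teradata get-configuration-files \
  [--templates-directory <path>] \
  [--query-templates]
```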
Options¶
--templates-directory, -td (optional)
Type: String (path)
Default: Current directory
Description: Directory to save template files
Example:
--templates-directory ./templates
--query-templates (optional)
Type: Flag (no value required)
Description: Include J2 (Jinja2) query template files for advanced customization
Example:
--query-templates
Example Usage¶
Output Files¶
Without --query-templates flag:
With --query-templates flag:
Use Cases¶
Starting a new Teradata validation project
Learning Teradata-specific configuration options
Customizing validation queries for Teradata
Creating organization-specific templates
Auto-Generate Configuration File¶
Interactive command for Teradata configuration generation.
Syntax¶
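A hedged sketch of the invocation, assuming the `sdv` entry point; the command takes no options and prompts interactively:

```shell
sdv teradata auto-generated-configuration-file
```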
Options¶
This command has no command-line options. All input is provided through interactive prompts.
Interactive Prompts¶
The command will prompt for the following information:
Teradata host
Hostname or IP address of Teradata server
Example:
teradata.company.com
Teradata username
Authentication username
Example:
migration_user
Teradata password
Authentication password (hidden input)
Not displayed on screen for security
Teradata database
Name of the database to validate
Example:
production_db
Output directory path
Where to save validation results
Example:
./validation_results
Example Session¶
Generated Configuration¶
The command generates a basic YAML configuration file:
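A sketch of the general shape of the generated file; the exact key names may differ in your version, and all values here are placeholders taken from the prompts above:

```yaml
source_connection:
  mode: credentials
  host: teradata.company.com
  username: migration_user
  password: "********"
  database: production_db
target_database: PROD_DB
output_directory_path: ./validation_results
tables: []   # add table configurations here (see Next Steps)
```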
Next Steps After Generation¶
Edit the configuration file to add:
Target connection details (if not using default)
Tables to validate
Validation options
Column selections and mappings
Add table configurations:
Specify fully qualified table names
Configure column selections
Set up filtering where clauses
Review Teradata-specific settings:
Verify target_database is correctly set
Check schema mappings if needed
Test the configuration:
Use Cases¶
Quick setup for new Teradata users
Generating baseline configurations
Testing connectivity during setup
Creating template configurations for teams
Row Partitioning Helper¶
Interactive command to generate partitioned table configurations for large tables. This helper divides tables into smaller row partitions based on a specified column, enabling more efficient validation of large datasets.
Syntax¶
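A hedged sketch of the invocation, assuming the `sdv` entry point; the command takes no options and prompts interactively:

```shell
sdv teradata row-partitioning-helper
```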
Options¶
This command has no command-line options. All input is provided through interactive prompts.
How It Works¶
The table partitioning helper:
Reads an existing configuration file with table definitions
For each table, prompts whether to apply partitioning
If partitioning is enabled, collects partition parameters
Queries the source Teradata database to determine partition boundaries
Generates new table configurations with WHERE clauses for each partition
Saves the partitioned configuration to a new file
Interactive Prompts¶
The command will prompt for the following information:
Configuration file path
Path to existing YAML configuration file
Example:
./configs/teradata_validation.yaml
For each table in the configuration:
a. Apply partitioning? (yes/no)
Whether to partition this specific table
Default: yes
b. Partition column (if partitioning)
Column name used to divide the table
Should be indexed for performance
Example:
transaction_id, created_date
c. Is partition column a string type? (yes/no)
Determines quoting in generated WHERE clauses
Default: no (numeric)
d. Number of partitions
How many partitions to create
Example:
10, 50, 100
Example Session¶
Generated Output¶
The command generates partitioned table configurations with WHERE clauses:
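An illustrative sketch of the resulting shape (the key names here are hypothetical; the partition boundaries come from querying the source table):

```yaml
tables:
  - fully_qualified_name: prod_db.transactions
    where_clause: "transaction_id >= 1 AND transaction_id < 1000000"
  - fully_qualified_name: prod_db.transactions
    where_clause: "transaction_id >= 1000000 AND transaction_id < 2000000"
```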
Use Cases¶
Large table validation: Break multi-billion row tables into manageable chunks
Parallel processing: Enable concurrent validation of different partitions
Memory optimization: Reduce memory footprint by processing smaller data segments
Incremental validation: Validate specific data ranges independently
Performance tuning: Optimize validation for tables with uneven data distribution
Best Practices¶
Choose appropriate partition columns:
Use indexed columns for better query performance
Prefer columns with sequential values (IDs, timestamps)
Avoid columns with highly skewed distributions
Determine optimal partition count:
Consider table size and available resources
Start with 10-20 partitions for tables with 10M+ rows
Increase partitions for very large tables (100M+ rows)
String vs numeric columns:
Numeric columns are generally more efficient
String columns work but may have uneven distribution
After partitioning:
Review generated WHERE clauses
Adjust partition boundaries if needed
Test with a subset before full validation
Column Partitioning Helper¶
Interactive command to generate partitioned table configurations for wide tables with many columns. This helper divides tables into smaller column partitions, enabling more efficient validation of tables with a large number of columns.
Syntax¶
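A hedged sketch of the invocation, assuming the `sdv` entry point; the command takes no options and prompts interactively:

```shell
sdv teradata column-partitioning-helper
```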
Options¶
This command has no command-line options. All input is provided through interactive prompts.
How It Works¶
The column partitioning helper:
Reads an existing configuration file with table definitions
For each table, prompts whether to apply column partitioning
If partitioning is enabled, collects the number of partitions
Queries the source Teradata database to retrieve all column names for the table
Divides the columns into the specified number of partitions
Generates new table configurations where each partition validates only a subset of columns
Saves the partitioned configuration to a new file
Interactive Prompts¶
The command will prompt for the following information:
Configuration file path
Path to existing YAML configuration file
Example:
./configs/teradata_validation.yaml
For each table in the configuration:
a. Apply column partitioning? (yes/no)
Whether to partition this specific table by columns
Default: yes
b. Number of partitions (if partitioning)
How many column partitions to create
Example:
3, 5, 10
Example Session¶
Generated Output¶
The command generates partitioned table configurations with column subsets:
Use Cases¶
Wide table validation: Break tables with hundreds of columns into manageable chunks
Memory optimization: Reduce memory footprint by validating fewer columns at a time
Parallel processing: Enable concurrent validation of different column groups
Targeted validation: Validate specific column groups independently
Performance tuning: Optimize validation for tables with many LOB or complex columns
Best Practices¶
Determine optimal partition count:
Consider the total number of columns in the table
For tables with 50+ columns, start with 3-5 partitions
For tables with 100+ columns, consider 5-10 partitions
Column ordering:
Columns are divided alphabetically
Related columns may end up in different partitions
After partitioning:
Review generated column lists
Verify all required columns are included
Test with a subset before full validation
Combine with row partitioning:
For very large, wide tables, consider using both row and column partitioning
First partition by columns, then apply row partitioning to each column partition if needed
Teradata Connection Configuration¶
Teradata connections require specific configuration in the YAML file.
Connection Example¶
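A minimal sketch built from the fields documented below; the surrounding key name (`source_connection`) is an assumption, and all values are placeholders:

```yaml
source_connection:
  mode: credentials
  host: "teradata.company.com"
  username: "migration_admin"
  password: "${TERADATA_PASSWORD}"   # prefer an environment variable over a literal
  database: "production_database"
```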
Connection Fields¶
mode (required)
Type: String
Valid Values: credentials
Description: Connection mode for Teradata
host (required)
Type: String
Description: Teradata hostname or IP address
Examples:
"teradata.company.com", "td-prod.internal.company.net", "192.168.1.50"
username (required)
Type: String
Description: Teradata authentication username
Example:
"migration_admin"
password (required)
Type: String
Description: Teradata authentication password
Security Note: Consider using environment variables
database (required)
Type: String
Description: Teradata database name
Example:
"production_database"
Teradata-Specific Global Configuration¶
target_database (required for Teradata)
Type: String
Description: Target database name in Snowflake for Teradata validations
Example:
target_database: PROD_DB
Note: This is required in the global configuration section, not the connection section
Connection Examples¶
Production Connection:
Development Connection:
Multi-Database Setup:
Complete Teradata Examples¶
Example 1: Basic Teradata Configuration¶
Example 2: Teradata Large-Scale Migration¶
Example 3: Teradata Multi-Schema Validation¶
Example 4: Teradata View Validation¶
Validate Teradata views alongside tables for comprehensive migration verification.
Note: View validation creates temporary tables internally to materialize view data for comparison between Teradata and Snowflake.
Troubleshooting Teradata Connections¶
Issue: Connection Timeout¶
Symptom:
Solutions:
Verify the host and network connectivity:
Check firewall rules allow Teradata connections
Verify Teradata server is running
Test connection with Teradata SQL Assistant or other client tools
Issue: Authentication Failed¶
Symptom:
Solutions:
Verify credentials are correct
Check user has necessary permissions:
Verify user account is not locked
Check password hasn’t expired
Issue: Database Not Found¶
Symptom:
Solutions:
Verify database name is correct (case-sensitive)
Check user has access to the database:
Ensure database exists and is accessible
Issue: Target Database Configuration Missing¶
Symptom:
Solution:
Add target_database to global configuration:
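For example (the database name is a placeholder):

```yaml
# Global configuration section (not inside the connection block)
target_database: PROD_DB
```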
Issue: Schema Mapping Errors¶
Symptom:
Solution:
Add schema mappings in configuration:
Best Practices for Teradata¶
Configuration¶
Always specify target_database:
Use schema mappings:
Handle case sensitivity:
Security¶
Use environment variables for passwords:
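One way to do this, using the documented `--teradata-password` option and an exported shell variable (assuming the `sdv` entry point; the value is a placeholder):

```shell
export TERADATA_PASSWORD='my_secure_password'
sdv teradata run-validation \
  --data-validation-config-file ./configs/teradata_validation.yaml \
  --teradata-password "$TERADATA_PASSWORD"
```

This keeps the password out of the YAML file and out of shell history tied to the command itself.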
Use read-only accounts:
Restrict column access for sensitive data:
Performance¶
Enable chunking for large tables:
Use WHERE clauses to filter data:
Optimize thread count:
Exclude unnecessary metrics for very large tables:
Data Quality¶
Start with schema validation:
Progress to metrics validation:
Enable row validation for critical tables: