DCM Projects for data pipelines

DCM Projects provide a full-lifecycle developer experience that includes capabilities tailored to managing data pipelines.

The pipeline-specific commands don’t apply to all object types; they extend the core commands for the following use cases:

REFRESH command for dynamic tables

After you deploy a pipeline definition change, you can refresh the dynamic tables inside the pipeline project before testing data quality expectations, so that any new transformation logic is applied end to end.

You can refresh all dynamic tables managed by the DCM project and their required upstream dynamic tables with one command. This command applies only to dynamic tables that are deployed and managed by the referenced project, independent of any definition files. Other object types, such as tasks, are not affected.

See TEST command for data quality expectations for usage examples that combine REFRESH and TEST.

The command runs until all dynamic table refreshes are complete and returns a summary of the row changes or errors for each dynamic table.

To run the REFRESH command:

EXECUTE DCM PROJECT DCM_DEMO.PROJECTS.DCM_PROJECT_STG
  REFRESH ALL;

The JSON output contains the results of the dynamic table refresh operation in the following format:

dts_refresh_result
    Contains the results of the dynamic table refresh operation.

refreshed_tables[]
    An array of entries, one for each dynamic table that was refreshed.

table_name
    Fully qualified name of the dynamic table that was refreshed.

statistics
    Refresh statistics for the table.

inserted_rows
    Number of rows inserted during the refresh.

deleted_rows
    Number of rows deleted during the refresh.

data_timestamp
    ISO 8601 timestamp representing the point-in-time freshness of the data after the refresh.

An example of the JSON output for a dynamic table refresh:

{
  "dts_refresh_result": {
    "refreshed_tables": [
      {
        "table_name": "db.schema.my_dynamic_table",
        "statistics": {
          "inserted_rows": 150,
          "deleted_rows": 30
        },
        "data_timestamp": "2026-03-16T12:00:00.000Z"
      }
    ]
  }
}
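As a sketch of how this output might be consumed in automation, the following Python snippet parses the refresh result and computes the net row change per dynamic table. It assumes only the field names documented above; the sample payload mirrors the example.

```python
import json

# Example REFRESH output in the shape documented above (assumed field names).
raw = """
{
  "dts_refresh_result": {
    "refreshed_tables": [
      {
        "table_name": "db.schema.my_dynamic_table",
        "statistics": {"inserted_rows": 150, "deleted_rows": 30},
        "data_timestamp": "2026-03-16T12:00:00.000Z"
      }
    ]
  }
}
"""

def summarize_refresh(output: str) -> dict:
    """Map each refreshed table to its net row change (inserted - deleted)."""
    result = json.loads(output)
    tables = result["dts_refresh_result"]["refreshed_tables"]
    return {
        t["table_name"]: t["statistics"]["inserted_rows"] - t["statistics"]["deleted_rows"]
        for t in tables
    }

print(summarize_refresh(raw))  # -> {'db.schema.my_dynamic_table': 120}
```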

TEST command for data quality expectations

You can set data quality expectations as quality gates on all stages of your data transformation:

  • Attach expectations to raw data in your bronze layer landing tables to ensure your raw input meets expectations and does not cause errors during transformation.

  • Attach expectations as quality gates to your silver layer to make it easier to debug data issues by having checkpoints at different transformation stages.

  • Attach expectations to your gold layer to ensure the output quality of your data product.

  • Attach expectations from downstream consumers of your data product to your gold layer so you can validate those expectations before deploying breaking changes.

See Data metric function for how to attach expectations in DCM projects.

With one command, you can test all data quality expectations attached to tables, dynamic tables, or views that are managed by the DCM project.

Data metric functions that are attached without expectations are not checked.

You can use the CLI commands to set up automated testing as part of your CI/CD workflow. For example, if you have production-like data on a QA, test, or staging environment, you can follow these steps:

  1. PLAN against QA to verify the expected project definition changes.

  2. DEPLOY to QA.

  3. REFRESH ALL dynamic tables on QA to update data based on any new transformation logic and updated definitions, so that expectations are not tested against outdated data.

  4. TEST ALL data quality expectations attached to table objects on the QA environment to verify that the newly deployed logic works as expected and has no negative side effects on the expected shape of your data output.

  5. If all expectations are met on QA, continue with PLAN and DEPLOY to your production environment.
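The steps above can be sketched as a generated statement sequence. This is a minimal illustration only: it assumes that PLAN and DEPLOY follow the same EXECUTE DCM PROJECT <name> <action> shape as the REFRESH ALL and TEST ALL commands shown in this topic, which you should verify against the command reference before use.

```python
# Sketch of the QA gating sequence as generated SQL statements.
# Assumption: PLAN and DEPLOY use the same EXECUTE DCM PROJECT <name> <action>
# shape as REFRESH ALL and TEST ALL; verify the exact clauses before use.

def qa_gate_statements(project: str) -> list[str]:
    """Return the EXECUTE statements for the QA gate, in order."""
    actions = ["PLAN", "DEPLOY", "REFRESH ALL", "TEST ALL"]
    return [f"EXECUTE DCM PROJECT {project} {action};" for action in actions]

# Hypothetical project name; run each statement via your SQL client or CI step.
for stmt in qa_gate_statements("DCM_DEMO.PROJECTS.DCM_PROJECT_STG"):
    print(stmt)
```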

To run the TEST command:

EXECUTE DCM PROJECT DCM_DEMO.PROJECTS.DCM_PROJECT_STG
  TEST ALL;

The TEST output contains the overall status and expectations with their values in the following format:

Important

During the preview phase, the exact output format might change.

{
  "status": <status>,
  "expectations": [
    {
      "table_name": <table_name>,
      "metric_database": <metric_database>,
      "metric_schema": <metric_schema>,
      "metric_name": <metric_name>,
      "expectation_name": <expectation_name>,
      "expectation_expression": <expectation_expression>,
      "value": <value>,
      "expectation_violated": <expectation_violated>,
      "column_names": <column_names>
    }
  ]
}

status
    Overall result of the test run. Possible values: SUCCESSFUL (all expectations met), FAILED (one or more expectations violated).

expectations[]
    An array of expectation results, one for each data quality expectation evaluated.

table_name
    Fully qualified name of the table or view on which the expectation was evaluated.

metric_database
    Database that contains the data metric function.

metric_schema
    Schema that contains the data metric function.

metric_name
    Name of the data metric function (for example, NULL_COUNT, MIN, UNIQUE_COUNT).

expectation_name
    Name of the expectation as defined in the project.

expectation_expression
    Boolean expression that the metric value is evaluated against (for example, value = 0, value >= 0).

value
    The result of the data metric function evaluation. Present only when expectation_violated is false.

expectation_violated
    Whether the expectation was violated. true if the metric value did not satisfy the expectation expression; false otherwise.

column_names
    An array of column names on which the data metric function was evaluated.

An example of the JSON output for a data quality test:

{
  "status": "FAILED",
  "expectations": [
    {
      "table_name": "db.schema.my_table",
      "metric_database": "SNOWFLAKE",
      "metric_schema": "CORE",
      "metric_name": "NULL_COUNT",
      "expectation_name": "no_nulls_in_id",
      "expectation_expression": "value = 0",
      "value": 0,
      "expectation_violated": false,
      "column_names": ["ID"]
    },
    {
      "table_name": "db.schema.my_table",
      "metric_database": "SNOWFLAKE",
      "metric_schema": "CORE",
      "metric_name": "UNIQUE_COUNT",
      "expectation_name": "unique_id_check",
      "expectation_expression": "value >= 100",
      "value": null,
      "expectation_violated": true,
      "column_names": ["ID"]
    }
  ]
}
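In a CI/CD workflow, output like this can gate promotion to production. The following Python sketch, assuming only the field names documented above, returns a process exit code and reports each violated expectation; the sample payload mirrors the failing entry from the example.

```python
import json
import sys

# Example TEST output with one violated expectation (field names as documented above).
sample = """
{
  "status": "FAILED",
  "expectations": [
    {
      "table_name": "db.schema.my_table",
      "metric_database": "SNOWFLAKE",
      "metric_schema": "CORE",
      "metric_name": "UNIQUE_COUNT",
      "expectation_name": "unique_id_check",
      "expectation_expression": "value >= 100",
      "value": null,
      "expectation_violated": true,
      "column_names": ["ID"]
    }
  ]
}
"""

def gate(test_output: str) -> int:
    """CI exit code: 0 when all expectations are met, 1 otherwise."""
    result = json.loads(test_output)
    if result.get("status") == "SUCCESSFUL":
        return 0
    for e in result.get("expectations", []):
        if e["expectation_violated"]:
            print(
                f"violated: {e['expectation_name']} on {e['table_name']} "
                f"({e['metric_name']} expected {e['expectation_expression']})",
                file=sys.stderr,
            )
    return 1

print(gate(sample))  # -> 1
```

Returning the exit code (rather than calling sys.exit directly) keeps the gate easy to unit test; the CI step can pass the result to sys.exit.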

PREVIEW command

When you write or alter the SELECT statement of a dynamic table or view, a sample output helps validate the shape of the data. For complex lineage graphs with multiple transformation steps, you can check the output of a downstream view or dynamic table when making changes further upstream.

To validate that the transformation in your code results in the expected data output before deploying, run the PREVIEW command.

The PREVIEW command runs PLAN to compile the current definitions, independent of any deployed state, and then returns a data sample for a specified dynamic table, view, or regular table.

Keep the following requirements and considerations in mind:

  • The PREVIEW command must always reference a fully qualified name of a table object, without Jinja variables.

  • To see sample data in the output, you must ensure that data is already available in the source tables.

  • PREVIEW queries all SELECT statements of referenced dynamic tables and views, but it does not run tasks or CREATE TABLE AS SELECT statements.

To run the PREVIEW command:

EXECUTE DCM PROJECT DCM_DEMO.PROJECTS.DCM_PROJECT_DEV
  PREVIEW DCM_PROJECT_DEV.SERVE.V_DASHBOARD_KPI_SUMMARY
  USING CONFIGURATION DEV
  FROM 'snow://workspace/USER$.PUBLIC.DEFAULT$/versions/live/DCM_Project_Quickstart_1'
  LIMIT 100;