Snowflake Data Clean Rooms: Create a custom analysis template with a user interface

This topic describes how a provider can programmatically create and share a clean room with a consumer while providing a user interface that the consumer can use to execute an analysis in the web app. It is similar to the Machine Learning flow, but demonstrates how to provide the consumer with a user interface to an otherwise complex machine learning workflow.

This example generates a clean room with a lookalike modeling template. Lookalike modeling is a type of analysis in which a consumer tries to find "high-value" customers in a provider's data by training a statistical model on the consumer's own high-value customers. The consumer flags high-value users in their dataset, such as those with expenditures above a certain threshold, and those flags are used to train the model. The trained model is then used to infer which customers in the provider's data could potentially be "high value" to the consumer.

You can apply multiple custom templates to a clean room, each of which will appear in the web app for the consumer.

This is the general flow for creating and sharing a clean room with a UI:

  • Provider:

    1. Set up your coding environment, including specifying the proper role and warehouse to use.

    2. Create the clean room and enable any cross-region sharing needed for the clean room.

    3. Add your data to the clean room, and set join policies on the tables.

    4. Add any custom scripting you need.

    5. Set the column policies on the template.

    6. Create the UI for your template and apply it to the web app.

    7. Share your clean room with consumers.

  • Consumer:

    1. Use the web app to install the clean room and run analyses on it.

Prerequisites

You need two separate Snowflake accounts to complete this flow. Use the first account to execute the provider’s commands, then switch to the second account to execute the consumer’s commands.

Provider

Use a Snowflake worksheet in the provider account to execute the commands in this section.

Set up the environment

Before using the Developer APIs, you need to assume the SAMOOHA_APP_ROLE role and specify the warehouse used to execute the APIs. If you don’t have the SAMOOHA_APP_ROLE role, contact your account administrator.

Execute the following commands to set up your environment:

USE ROLE samooha_app_role;
USE WAREHOUSE app_wh;

Create the clean room

Before creating a clean room, you need to specify a name for the clean room. The name can contain only alphanumeric characters, spaces, and underscores. Make sure that the name of the clean room does not duplicate the name of an existing clean room.

To specify the name of the clean room, execute the following command:

SET cleanroom_name = 'UI Registration ML Clean room';

You are now ready to execute the provider.cleanroom_init command to create the clean room. The second argument defines the distribution of the clean room. For this example, you’ll set this to INTERNAL to bypass the automated security scan that takes place before the clean room can be released to collaborators. The provider and consumer accounts must be in the same organization to use the INTERNAL distribution method. For clean rooms being distributed to an account in a different organization, you must use an EXTERNAL distribution.

To create the clean room, execute the following command:

CALL samooha_by_snowflake_local_db.provider.cleanroom_init($cleanroom_name, 'INTERNAL');

After you create a clean room, you must set its release directive before it can be shared with a collaborator.

Note

When you set the distribution of a clean room to EXTERNAL, you must wait for the security scan to complete before setting the release directive. You can view the status of the security scan by executing:

CALL samooha_by_snowflake_local_db.provider.view_cleanroom_scan_status($cleanroom_name);

You can continue to execute commands during the security scan as long as you set the release directive before trying to execute the provider.create_or_update_cleanroom_listing command.

To set the release directive for the clean room, execute the following command:

CALL samooha_by_snowflake_local_db.provider.set_default_release_directive($cleanroom_name, 'V1_0', '0');

Cross-region sharing

In order to share a clean room with a Snowflake customer whose account is in a different region than your account, you must enable Cross-Cloud Auto-Fulfillment. For information about the additional costs associated with collaborating with consumers in other regions, see Cross-Cloud Auto-Fulfillment costs.

When using developer APIs, enabling cross-region sharing is a two-step process:

  1. A Snowflake administrator with the ACCOUNTADMIN role enables Cross-Cloud Auto-Fulfillment for your Snowflake account. For instructions, see Collaborate with accounts in different regions.

  2. You execute the provider.enable_laf_for_cleanroom command to enable Cross-Cloud Auto-Fulfillment for the clean room. For example:

    CALL samooha_by_snowflake_local_db.provider.enable_laf_for_cleanroom($cleanroom_name);
    
After you have enabled Cross-Cloud Auto-Fulfillment for the clean room, you can add consumers to your listing as usual using the provider.create_or_update_cleanroom_listing command. The listing is automatically replicated to remote clouds and regions as needed.

Add your data and set join policies

To link Snowflake tables into the clean room, browse the list of tables in your Snowflake account and enter the fully qualified table names (Database.Schema.Table) as an array. The procedure automatically makes the tables accessible to the clean room by creating a secure view of each table from within the clean room, thereby avoiding any need to make a copy of your table.

CALL samooha_by_snowflake_local_db.provider.link_datasets($cleanroom_name, ['samooha_provider_sample_database.lookalike_modeling.customers']);

Note

If this step doesn’t work even though your table exists, it is likely the SAMOOHA_APP_ROLE role has not yet been given access to it. If so, switch to the ACCOUNTADMIN role, call the following procedure on the database, and then switch back for the rest of the flow:

USE ROLE accountadmin;
CALL samooha_by_snowflake_local_db.provider.register_db('<DATABASE_NAME>');
USE ROLE samooha_app_role;

You can view the dataset names linked to the clean room by calling the following procedure.

CALL samooha_by_snowflake_local_db.provider.view_provider_datasets($cleanroom_name);

You can preview the contents of a linked table using the following query:

SELECT * FROM <PROVIDER_TABLE> LIMIT 10;

Specify which columns the consumer is allowed to join on when running templates within the clean room. This procedure should be called on identity columns like email. The join policy is “replace only”, so if the function is called again, then the previously set join policy is completely replaced by the new one.

CALL samooha_by_snowflake_local_db.provider.set_join_policy($cleanroom_name, ['samooha_provider_sample_database.lookalike_modeling.customers:hashed_email']);

If you want to view the join policy that has been added to the clean room, call the following procedure.

CALL samooha_by_snowflake_local_db.provider.view_join_policy($cleanroom_name);

Add your custom code

Add confidential machine learning Python code to the clean room

Load some Python functions into the clean room for the lookalike ML work. All Python functions installed in the clean room remain completely confidential; they cannot be seen by the consumer.

The following API allows you to define your Python functions directly inline in the clean room. Alternatively, you can load Python from staged files that you've uploaded to the clean room stage. See the API reference guide for an example.

Note

This implementation is limited by the Snowflake size constraint on the amount of data that can be aggregated by ARRAY_AGG (16 MB). An implementation that leverages batching and streaming models to scale to arbitrarily sized datasets is available upon request.

CALL samooha_by_snowflake_local_db.provider.load_python_into_cleanroom(
    $cleanroom_name,
    'lookalike_train',
    ['input_data variant', 'labels variant'],
    ['pandas', 'numpy', 'xgboost', 'scikit-learn'],
    'variant',
    'train',
$$
import numpy as np
import pandas as pd
import xgboost
from sklearn import preprocessing
import sys
import os
import pickle
import codecs
import threading


class TrainXGBoostClassifier(object):
    def __init__(self):
        self.model = None
        self._params = {
            "objective": "binary:logistic",
            "max_depth": 3,
            "nthread": 1,
            "eval_metric": "auc",
        }
        self.num_boosting_rounds = 10

    def get_params(self):
        if self.model is not None and "updater" not in self._params:
            self._params.update(
                {"process_type": "update", "updater": "refresh", "refresh_leaf": True}
            )
        return self._params

    def train(self, X, y):
        # Train the model in a threadsafe way

        # pick only the categorical attributes
        categorical = X.select_dtypes(include=[object])

        # fit a one-hot-encoder to convert categorical features to binary features (required by XGBoost)
        ohe = preprocessing.OneHotEncoder()
        categorical_ohe = ohe.fit_transform(categorical)
        self.ohe = ohe

        # get the rest of the features and add them to the binary features
        non_categorical = X.select_dtypes(exclude=[object])
        train_x = np.concatenate((categorical_ohe.toarray(), non_categorical.to_numpy()), axis=1)

        xg_train = xgboost.DMatrix(train_x, label=y)

        params = self.get_params()
        params["eval_metric"] = "auc"
        evallist = [(xg_train, "train")]
        evals_result = {}

        self.model = xgboost.train(
            params, xg_train, self.num_boosting_rounds, evallist, evals_result=evals_result
        )

        self.evals_result = evals_result

    def __dump_model(self, model):
        # Serialize the model and the fitted one-hot-encoder to a base64-encoded pickle string for later scoring/inference

        pickle_jar = codecs.encode(pickle.dumps([model, self.ohe]), "base64").decode()
        return pickle_jar

    def dump_model(self):
        # Serialize the trained model for scoring/inference
        if self.model is not None:
            return self.__dump_model(self.model)
        else:
            raise ValueError("Model needs to be trained first")


def train(d1, l1):
    # put the training features in a pandas dataframe
    X = pd.DataFrame(d1)

    # get the labels into a Numpy array
    y = np.array(l1)

    trainer = TrainXGBoostClassifier()
    trainer.train(X, y)

    # return training stats, accuracy, and the pickled model and pickled one-hot-encoder
    return {
        "total_rows": len(d1),
        "total_bytes_in": sys.getsizeof(d1),
        "model": trainer.dump_model(),
        "iteration": trainer.num_boosting_rounds,
        "auc": np.max(trainer.evals_result["train"]["auc"]),
        "error": 1 - np.max(trainer.evals_result["train"]["auc"])
    }   
$$);

Now install a scoring function into the clean room:

CALL samooha_by_snowflake_local_db.provider.load_python_into_cleanroom(
    $cleanroom_name,
    'lookalike_score',
    ['pickle_jar variant', 'emails variant', 'features variant'],
    ['pandas', 'numpy', 'xgboost', 'scikit-learn'],
    'string',
    'score',
$$
import numpy as np
import pandas as pd
import xgboost as xgb
import pickle
import codecs
import json


def score(model, emails, features):
    # load model
    model = model[0] if not isinstance(model, str) else model
    model = pickle.loads(codecs.decode(model.encode(), "base64"))

    # retrieve the XGBoost trainer from the pickle jar
    bst = model[0]

    # retrieve the fitted one-hot-encoder from the pickle jar
    ohe2 = model[1]

    # create pandas dataframe from the inference features
    Y = pd.DataFrame(features)

    # select the categorical attributes and one-hot-encode them
    Y1 = Y.select_dtypes(include=[object])
    Y2 = ohe2.transform(Y1)

    # select the non-categorical attributes
    Y3 = Y.select_dtypes(exclude=[object])

    # join the results of the one-hot encoding to the rest of the attributes
    Y_pred = np.concatenate((Y2.toarray(), Y3.to_numpy()), axis=1)

    # inference
    dscore = xgb.DMatrix(Y_pred)
    pred = bst.predict(dscore)

    retval = list(zip(np.array(emails), list(map(str, pred))))
    retval = [{"email": r[0], "score": r[1]} for r in retval]
    return json.dumps(retval)  
$$);

Note

Loading Python into the clean room creates a new patch for the clean room. If your clean room distribution is set to EXTERNAL, you need to wait for the security scan to complete, then update the default release directive using:

-- See the versions available inside the cleanroom
SHOW VERSIONS IN APPLICATION PACKAGE samooha_cleanroom_UI_Registration_ML_clean_room;

-- Once the security scan is approved, update the release directive to the latest version
CALL samooha_by_snowflake_local_db.provider.set_default_release_directive($cleanroom_name, 'V1_0', '2');

Add a custom lookalike modeling template

To add a custom analysis template to the clean room, you need a placeholder for table names on both the provider and consumer sides, along with join columns from the provider side. In SQL Jinja templates, these placeholders must always be:

  • source_table: an array of table names from the provider

  • my_table: an array of table names from the consumer

Table names can be made dynamic using these variables, or they can be hardcoded into the template using the name of the view linked to the clean room. Column names can likewise either be hardcoded into the template or set dynamically through parameters. If they are set through parameters, you must name the parameters dimensions or measure_columns, and they must be arrays, in order for them to be checked against the column policy. You add these as SQL Jinja parameters in the template; the consumer passes in their values later when querying. The join policies ensure that the consumer cannot join on columns other than the authorized ones.
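
For illustration, the following is a minimal sketch of a custom SQL Jinja template that uses these placeholders. It counts the overlap between the provider and consumer tables per dimension value; the hardcoded hashed_email join column matches the join policy set earlier in this flow, and the overlap_count output name is purely illustrative:

SELECT
    {{ dimensions[0] | sqlsafe }} AS dimension,
    COUNT(*) AS overlap_count
FROM identifier({{ source_table[0] }}) AS p
JOIN identifier({{ my_table[0] }}) AS c
    ON p.hashed_email = c.hashed_email
GROUP BY dimension;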

Alternatively, any argument in a custom SQL Jinja template can be checked for compliance with the join and column policies using the following filters:

  • join_policy: checks if a string value or filter clause is compliant with the join policy

  • column_policy: checks if a string value or filter clause is compliant with the column policy

  • join_and_column_policy: checks if columns used for a join in a filter clause are compliant with the join policy, and that columns used as a filter are compliant with the column policy

For example, in the clause {{ provider_id | sqlsafe | join_policy }}, an input of p.HEM will be parsed to check whether p.HEM is in the join policy. Note: Use the sqlsafe filter with caution, as it allows collaborators to put raw SQL into the template.

Note

All provider/consumer tables must be referenced using these arguments, since the name of the secure view actually linked to the clean room differs from the table name. Critically, provider table aliases MUST be p (or p1), p2, p3, p4, etc., and consumer table aliases MUST be c (or c1), c2, c3, etc. This is required in order to enforce security policies in the clean room.

Note that this function overrides any existing template with the same name. If you want to update any existing template, you can simply call this function again with the updated template.

A set of features is selected from the provider dataset, and a set of labels is selected from the consumer dataset, along with a "high value" flag (called label_value). These two tables are then inner-joined on email and passed to the XGBoost training function. Lastly, the output of the model training step is passed to an inference function, which uses the trained model to infer which of the provider customers not in the consumer dataset could be "high value". The count of such individuals is then returned, along with the model error.

The threshold for determining the score beyond which a customer is "likely high value" is hardcoded in the template as 0.5. This can easily be changed when adding the template to the clean room, for example by replacing the hardcoded 0.5 with a Jinja parameter such as {{ score_threshold | default(0.5) | sqlsafe }} (score_threshold is a hypothetical parameter name).

CALL samooha_by_snowflake_local_db.provider.add_custom_sql_template($cleanroom_name, 'prod_custom_lookalike_template',
$$
WITH
features AS (
    SELECT
        identifier({{ provider_join_col | join_policy }}) AS joincol,
        array_construct({{ dimensions[0] | sqlsafe }} {% for feat in dimensions[1:] %} , {{ feat | sqlsafe }} {% endfor %}) AS features
    FROM
        identifier({{ source_table[0] }}) AS p
),
labels AS (
    SELECT
        c.{{ consumer_join_col | sqlsafe }} AS joincol,
        c.{{ filter_column | default('SALES_DLR') | sqlsafe }} {{ operator | default('>=') | sqlsafe }} {{ filter_value | default(2000) | sqlsafe }} AS label_value
    FROM
        identifier({{ my_table[0] }}) AS c
),
trained_model AS (
    SELECT
        train_out:model::varchar AS model,
        train_out:error::float AS error
    FROM (
      SELECT
        cleanroom.{{ lookalike_train_function | default('lookalike_train') | sqlsafe }}(array_agg(f.features), array_agg(l.label_value)) AS train_out
      FROM features f, labels l
      WHERE f.joincol = l.joincol
    )
),
inference_output AS (
    SELECT
        MOD(seq4(), 100) AS batch,
        cleanroom.{{ lookalike_score_function | default('lookalike_score') | sqlsafe }}(
            array_agg(distinct t.model), 
            array_agg(identifier({{ provider_join_col | join_policy }})), 
            array_agg(array_construct( identifier({{ dimensions[0] }}) {% for feat in dimensions[1:] %} , identifier({{ feat }}) {% endfor %}) )
        ) AS scores
    FROM trained_model t, identifier({{ source_table[0] }}) p
    WHERE identifier({{ provider_join_col | join_policy }}) NOT IN (SELECT c.{{ consumer_join_col | sqlsafe }} FROM identifier({{ my_table[0] }}) c)
    GROUP BY batch
),
processed_output AS (
    SELECT value:email::string as id, value:score::float AS score FROM (select scores from inference_output), lateral flatten(input => parse_json(scores))
), train_results AS (
    SELECT {{ num_boosting_rounds | sqlsafe }} AS num_boosting_rounds, {{ trim_extremes | sqlsafe }} AS trim_extremes, p.audience_size AS audience_size, t.error AS error FROM (SELECT count(distinct id) AS audience_size FROM processed_output WHERE score > 0.5) p, trained_model t
), seed_size AS (
    select count(*) AS seed_audience_size FROM features f, labels l WHERE f.joincol = l.joincol
)
SELECT s.seed_audience_size, t.audience_size AS num_lookalikes_found, t.error FROM train_results t, seed_size s
$$);

-- Last parameter (Differential Privacy) is optional and omitted here. Default is 1

If you want to view the templates that are currently active in the clean room, call the following procedure. You can also modify a template to enable Differential Privacy guarantees on your analysis; a similar pattern can be incorporated into any custom template that you choose to write.

CALL samooha_by_snowflake_local_db.provider.view_added_templates($cleanroom_name);

Set the column policies on each template

Once you’ve added your custom template, you can apply column policies to all table columns defined by that template.

To see the top 10 rows of the table available to your consumers, run the following query:

SELECT * FROM <PROVIDER_TABLE> LIMIT 10;

For every table and template combination, set the columns on which you want the consumer to group, aggregate (e.g., SUM/AVG), and generally use in an analysis. This gives flexibility, so that the same table can allow different column selections depending on the underlying template. Set the column policy only after adding the template.

Note that set_column_policy is not additive; if the function is called again, then the previous column policy is completely replaced by the new one.

A column policy should not be set on identity columns like email, HEM, or RampID, since you don't want the consumer to be able to group by these columns. In the production environment, the system intelligently infers PII columns and blocks this operation, but this feature is not available in the sandbox environment. Set a column policy only on columns that you want the consumer to be able to aggregate and group by, like Status, Age Band, Region Code, or Days Active.

Note that for the “column_policy” and “join_policy” to carry out checks on the consumer analysis requests, all column names MUST be referred to as dimensions or measure_columns in the SQL Jinja template. Make sure you use these tags to refer to columns you want to be checked in custom SQL Jinja templates.

CALL samooha_by_snowflake_local_db.provider.set_column_policy($cleanroom_name, [
    'prod_custom_lookalike_template:samooha_provider_sample_database.lookalike_modeling.customers:status', 
    'prod_custom_lookalike_template:samooha_provider_sample_database.lookalike_modeling.customers:age', 
    'prod_custom_lookalike_template:samooha_provider_sample_database.lookalike_modeling.customers:region_code', 
    'prod_custom_lookalike_template:samooha_provider_sample_database.lookalike_modeling.customers:days_active', 
    'prod_custom_lookalike_template:samooha_provider_sample_database.lookalike_modeling.customers:income_bracket'
]);

If you want to view the column policy that has been added to the clean room, call the following procedure.

CALL samooha_by_snowflake_local_db.provider.view_column_policy($cleanroom_name);

Create and apply your template UI

Next you will define the UI for your template so that consumers can use it in the browser:

  1. Call the provider.add_ui_form_customizations command to define how the template appears in the web app.

  2. Sign in to the web app as the provider.

Define the UI for the template

The provider.add_ui_form_customizations command defines how the template in the clean room appears in the web app. For a complete description of the customizations that you can use to design the user interface, see the Clean room UI update commands section of the Provider API Reference.

Note

You can optionally use the warehouse_hints customization to control which kind of warehouse the consumer account will use to run analyses. For example, for most machine learning use cases, Snowflake recommends specifying a Snowpark-optimized warehouse. For valid values for warehouse_size, see CREATE WAREHOUSE.

For this example, execute the provider.add_ui_form_customizations command with the following customizations:

CALL samooha_by_snowflake_local_db.provider.add_ui_form_customizations(
    $cleanroom_name,
    'prod_custom_lookalike_template',
    {
        'display_name': 'Custom Lookalike Template',
        'description': 'Use our customized ML techniques to find lookalike audiences.',
        'methodology': 'Specify your own seed audience, while matching against our users. Then customize the lookalike model across number of boosting rounds and removing outliers.',
        'warehouse_hints': {
            'warehouse_size': 'medium',
            'snowpark_optimized': TRUE
        }
    },
    {
    'num_boosting_rounds': {
        'display_name': 'Number of Boosting Rounds',
        'type': 'integer',
        'default': 10,
        'order': 7,
        'description': 'How many rounds of boosting should the model do?',
        'size': 'M',
        'group': 'Training & Inference Configuration'
    },
    'trim_extremes': {
        'display_name': 'Trim Extremes',
        'type': 'boolean',
        'default': False,
        'order': 8,
        'description': 'Remove outliers by default?',
        'size': 'M',
        'group': 'Training & Inference Configuration'
    },
    'lookalike_train_function': {
        'display_name': 'Training Function',
        'choices': ['lookalike_train'],
        'type': 'dropdown',
        'order': 9,
        'description': 'Which function do you want to use for training?',
        'size': 'M',
        'group': 'Training & Inference Configuration'
    },
    'lookalike_score_function': {
        'display_name': 'Scoring Function',
        'choices': ['lookalike_score'],
        'type': 'dropdown',
        'order': 10,
        'description': 'Which function do you want to use for scoring?',
        'size': 'M',
        'group': 'Training & Inference Configuration'
    },    
    'provider_join_col': {
        'display_name': 'Provider Join Column',
        'choices': ['p.HASHED_EMAIL'],
        'type': 'dropdown',
        'order': 4,
        'description': 'Select the provider column to join users on.',
        'infoMessage': 'We recommend using HASHED_EMAIL.',
        'size': 'M',
        'group': 'Enable Provider Features'
    },
    'consumer_join_col': {
        'display_name': 'Consumer Join Column',
        'description': 'Specify the column in your table that matches the provider join column (e.g., HASHED_EMAIL).',
        'size': 'M',
        'default': 'HASHED_EMAIL',
        'infoMessage': 'We recommend using HASHED_EMAIL.',
        'order': 5,
        'group': 'Enable Provider Features'
    },
    'dimensions': {
        'display_name': 'Feature Selection',
        'choices': ['p.STATUS', 'p.AGE', 'p.REGION_CODE', 'p.DAYS_ACTIVE'],
        'type': 'multiselect',
        'order': 6,
        'description': 'What features do you want to train on?',
        'infoMessage': 'We recommend selecting all features for maximum signal.',
        'size': 'M',
        'group': 'Enable Provider Features'
    },
    'filter_column': {
        'display_name': 'Filter Column',
        'type': 'any',
        'order': 1,
        'description': 'Specify the column in your table to filter for high-value users (e.g., SALES_DLR)',
        'size': 'S',
        'default': 'SALES_DLR',
        'infoMessage': 'We recommend you enter SALES_DLR here.',
        'group': 'Seed Audience Selection'
    },
    'operator': {
        'display_name': 'Operator',
        'choices': ['>=', '=', '<='],
        'type': 'dropdown',
        'order': 2,
        'description': 'What is the operator you want to use for the filter?',
        'size': 'S',
        'group': 'Seed Audience Selection'
    },
    'filter_value': {
        'display_name': 'Filter Value',
        'order': 3,
        'description': 'What value do you want to filter to?',
        'default': 2000,        
        'size': 'S',
        'group': 'Seed Audience Selection'
    }
}, {
    'measure_columns': ['seed_audience_size', 'audience_size', 'num_lookalikes_found', 'error'],
    'default_output_type': 'BAR'
});

Share with a consumer

Finally, add a data consumer to the clean room by supplying their Snowflake account locator and account name, as shown below. The Snowflake account name must be of the form <ORGANIZATION>.<ACCOUNT_NAME>.

Note

In order to call the following procedures, make sure you have first set the release directive using provider.set_default_release_directive. You can see the latest available version and patches using:

SHOW VERSIONS IN APPLICATION PACKAGE samooha_cleanroom_UI_Registration_ML_clean_room;

CALL samooha_by_snowflake_local_db.provider.add_consumers($cleanroom_name, '<CONSUMER_ACCOUNT_LOCATOR>', '<CONSUMER_ACCOUNT_NAME>');
CALL samooha_by_snowflake_local_db.provider.create_or_update_cleanroom_listing($cleanroom_name);

If you want to enable provider-run analysis, call enable_provider_run_analysis after calling add_consumers and before calling create_or_update_cleanroom_listing.
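
For example, a sketch of the call order (the signature shown here for provider.enable_provider_run_analysis is an assumption; check the Provider API Reference for the exact form):

CALL samooha_by_snowflake_local_db.provider.add_consumers($cleanroom_name, '<CONSUMER_ACCOUNT_LOCATOR>', '<CONSUMER_ACCOUNT_NAME>');
-- Assumed signature: clean room name plus an array of consumer account locators
CALL samooha_by_snowflake_local_db.provider.enable_provider_run_analysis($cleanroom_name, ['<CONSUMER_ACCOUNT_LOCATOR>']);
CALL samooha_by_snowflake_local_db.provider.create_or_update_cleanroom_listing($cleanroom_name);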

Important

Whenever you change important information in a clean room, such as the UI, policies, or datasets, be sure to call provider.create_or_update_cleanroom_listing to update the clean room in production. Changes can take up to 10 minutes to appear in the UI for both the provider and the consumer.

Helper methods

Here are some useful helper functions for creating a clean room.

Multiple consumer account locators can be passed into the provider.add_consumers function as a comma-separated string, or as separate calls to provider.add_consumers, as shown in the sketch below.
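
For example, a sketch with placeholder accounts (assuming the account names are likewise passed as a comma-separated string):

CALL samooha_by_snowflake_local_db.provider.add_consumers($cleanroom_name, '<LOCATOR_1>,<LOCATOR_2>', '<ORG_1>.<NAME_1>,<ORG_2>.<NAME_2>');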

If you want to view the consumers who have been added to this clean room, call the following procedure.

CALL samooha_by_snowflake_local_db.provider.view_consumers($cleanroom_name);

If you want to view the clean rooms that have been created recently, use the following procedure.

CALL samooha_by_snowflake_local_db.provider.view_cleanrooms();

If you want to get more insights about the clean room that you have created, use the following procedure.

CALL samooha_by_snowflake_local_db.provider.describe_cleanroom($cleanroom_name);

Any clean room created can also be deleted. The following command drops the clean room entirely, so any consumers who previously had access to the clean room will no longer be able to use it. If a clean room with the same name is desired in the future, it must be re-initialized using the above flow.

CALL samooha_by_snowflake_local_db.provider.drop_cleanroom($cleanroom_name);

Consumer

The clean room you just created as the provider is now available in the web app in the consumer’s clean room environment.

Note

You can continue to modify the UI form in the provider account, and then see the changes in the consumer’s clean room environment in real time.

Access the consumer account in the web app

To sign in to the clean room as the consumer:

  1. Sign in to the web app of the Snowflake Data Clean Room. For details, see Snowflake Data Clean Room: Web app.

  2. Enter your email address, and select Continue.

  3. Enter your password.

  4. Select the Snowflake account associated with the consumer account.

Install the clean room

On the Clean Rooms page, find the tile that corresponds to the clean room you just created, and select Install.

Run an analysis

To run an analysis in the clean room:

  1. On the Clean Rooms page, find the tile for the new clean room and select Run.

  2. Select the analysis or query to run, then select Proceed.

  3. Configure the required fields for the chosen analysis, optionally choose a warehouse size, then select Run.

  4. Optionally provide a name for the analysis. If you want the analysis to run periodically, choose a time interval; otherwise, choose Off to run your analysis only once.

  5. Select Save to start your analysis running.

  6. Select View all analyses to see your list of scheduled, running, and complete analyses, including the one you just created.

  7. When the analysis completes, select it to see the results.