Editing and running Notebooks in Workspaces¶

Set the execution context¶

Notebooks in Workspaces do not automatically set a database or schema. To query data, you must define the execution context in a cell using the following SQL commands:

USE DATABASE <database>;
USE SCHEMA <schema>;

To ensure notebooks run consistently across environments and clients, use fully qualified names for tables and other objects. For example:

-- Query data objects using a fully qualified name
SELECT * FROM <database_name>.<schema_name>.<table_name>;

-- Create a table using a fully qualified name
CREATE OR REPLACE TABLE <database_name>.<schema_name>.<table_name> AS
WITH filtered_events AS (
    SELECT
        user_id,
        event_type,
        event_timestamp
    FROM raw_events
    WHERE event_timestamp >= '2025-01-01'
)
SELECT *
FROM filtered_events;
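When you assemble SQL strings in Python, a small helper can build the fully qualified name consistently. This is an illustrative sketch (the fully_qualified helper is not part of the product):

```python
def fully_qualified(database: str, schema: str, table: str) -> str:
    """Join database, schema, and table into a fully qualified name,
    quoting each part so mixed-case identifiers survive."""
    return ".".join(f'"{part}"' for part in (database, schema, table))

# Build a query that runs the same way regardless of the session context
query = f"SELECT * FROM {fully_qualified('MY_DB', 'MY_SCHEMA', 'RAW_EVENTS')}"
# query == 'SELECT * FROM "MY_DB"."MY_SCHEMA"."RAW_EVENTS"'
```

Quoting each identifier part is a defensive choice; unquoted names are folded to uppercase by SQL identifier resolution.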

Use the role and warehouse picker¶

You can set the active role and warehouse using the picker at the top left of the Notebooks editor or by running the following SQL commands:

USE ROLE <role>;
USE WAREHOUSE <warehouse>;

The query warehouse is used to run SQL queries and Snowpark pushdown compute invoked by the notebook. It is also used to render the interactive datagrid, but there is no credit charge for this operation.

To learn more about credit usage, see Setting up compute.

Run cells¶

There are four supported execution options:

  • Run all cells

  • Run a single cell

  • Run current cell and all above cells (via the cell’s ellipsis menu)

  • Run current cell and all below cells (via the cell’s ellipsis menu)

Cancel cell execution¶

Use Stop at the top of the notebook or Cancel execution in a cell.

Both actions stop the currently executing cell and any queued cells triggered by Run all.

Note

The Run all button may temporarily change to Stop when the notebook is connecting or reconnecting to the service.

Cell names¶

You can assign names to cells to make navigation easier and provide contextual labels.

If an imported .ipynb file already contains name or title metadata, those values are used automatically.
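In the .ipynb JSON, a cell name conventionally lives under the cell's metadata. The snippet below is a minimal sketch of that layout, shown by parsing it with Python (the fragment is illustrative, not a complete notebook file):

```python
import json

# Minimal .ipynb fragment with a named cell (illustrative only)
notebook_json = """
{
  "cells": [
    {"cell_type": "code",
     "metadata": {"name": "load_data"},
     "source": ["import pandas as pd"]}
  ]
}
"""

nb = json.loads(notebook_json)
cell_names = [cell["metadata"].get("name") for cell in nb["cells"]]
# cell_names == ["load_data"]
```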

Cell referencing¶

Bidirectional SQL to Python cell referencing allows you to reuse results and variables across cells in either language, enabling seamless transitions between SQL and Python workflows.

Referencing SQL cell results¶

Each SQL cell exposes its result as a pandas DataFrame pointer named dataframe_x.

  • In SQL, reference it using double curly braces: {{dataframe_1}}.

  • In Python, reference it directly as a pandas DataFrame: dataframe_1.
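For example, a Python cell can post-process a prior SQL result with ordinary pandas operations. In a notebook, dataframe_1 is supplied automatically by the first SQL cell; the stand-in below is constructed only to illustrate the pattern:

```python
import pandas as pd

# Stand-in for the DataFrame a SQL cell would expose as `dataframe_1`
dataframe_1 = pd.DataFrame({"carat": [0.5, 1.2], "price": [300, 5000]})

# Filter the SQL result in Python like any other pandas DataFrame
large_stones = dataframe_1[dataframe_1["carat"] >= 1.0]
```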

Referencing Python variables¶

To reference Python variables in SQL queries, wrap them in double curly braces. For example:

SELECT * FROM {{uploaded_df}} WHERE "price" > 326;

Python variables that hold DataFrames can also be referenced this way, letting you query a DataFrame directly from SQL.

Example workflow¶

Python cell

import pandas as pd

uploaded_df = pd.read_csv("../data/diamonds.csv")
uploaded_df

SQL cell referencing Python variable

SELECT * FROM {{uploaded_df}} WHERE "price" > 326;

SQL cell referencing SQL cell results

Each SQL cell result provides a DataFrame pointer (dataframe_1, dataframe_2, and so on). You can reference these results in another SQL query:

SELECT * FROM {{dataframe_1}} WHERE "carat" < 1.0
UNION ALL
SELECT * FROM {{dataframe_2}} WHERE "carat" >= 1.0;

Interactive datagrid¶

The datagrid supports:

  • Scrolling

  • Search

  • Filtering

  • Sorting

  • Chart creation without code

Built-in chart builder¶

The built-in chart builder provides a consistent user experience for data manipulation and visualization across editing surfaces in Workspaces.

Minimap and cell status¶

The minimap generates a table of contents from Markdown headers and displays a comprehensive in-session status for each cell (running, succeeded, failed, and modified).

Notebook kernel¶

The notebook kernel remains active as long as the notebook service is in the RUNNING state, allowing uninterrupted execution of critical, long-running processes such as ML training and data engineering jobs.

Actions that do not affect kernel execution:

  • Navigating to other pages

  • Working elsewhere in Snowsight

  • Closing your browser

  • Shutting down your computer

You can shut down or restart the kernel using the Connected dropdown.

If the notebook service is suspended, the notebook kernel is also shut down. For more information, see Setting up compute.

Jupyter magics¶

Notebooks in Workspaces run the IPython (Interactive Python) kernel and provide standard Jupyter cell and line magics. Run %lsmagic to view available magics.
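For example, the standard IPython cell magic %%time reports how long a cell takes to run (the computation below is illustrative; a cell magic must appear on the first line of its cell):

```
%%time
# Time this cell's execution
total = sum(range(1_000_000))
```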

Using the terminal¶

The terminal can be used to:

  • Install dependencies

  • Manage files

  • Run parallel jobs

  • Monitor compute resource usage

You must be connected to a notebook service to use the terminal. Switching to a different service will restart the terminal session.

For example, to install and run htop to monitor compute resource usage in real time:

# Refresh the package index first; `apt install` can fail without it
apt update

# Install `htop`
apt install htop

# Run `htop`
htop