The Activity area of Snowsight, the Snowflake web interface, lets you:
Monitor queries executed by users in your account.
View details about queries.
View performance data.
Explore each step of an executed query.
Monitor the status of data loading using COPY INTO (bulk loading) and pipes (continuous loading).
View details about instances of bulk and continuous data loading.
Navigate to the copy history for individual tables.
This topic describes how to use the Activity area to understand your Snowflake account activity.
Using the Activity Area¶
This section gives a brief overview of how to use the Activity area of Snowsight.
The Activity area includes the following pages:
The Query History page lets you view and drill into the details of queries executed in your Snowflake account in the last 14 days. Due to the data retention policy for sessions, the Query History does not list the user who executed a query if that query ran more than 7 days ago.
If you need to obtain information for older queries, try executing a query against the QUERY_HISTORY view in the Account Usage schema.
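For example, a query along these lines (a sketch; it assumes your role has been granted access to the SNOWFLAKE database) retrieves older query history from the Account Usage view:

```sql
-- Queries executed in the last 90 days, newest first.
-- Requires a role with access to the SNOWFLAKE.ACCOUNT_USAGE schema;
-- note that this view can have up to 45 minutes of latency.
SELECT query_id,
       user_name,
       warehouse_name,
       execution_status,
       total_elapsed_time / 1000 AS elapsed_seconds,
       query_text
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('day', -90, CURRENT_TIMESTAMP())
ORDER BY start_time DESC;
```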
Within a worksheet, you can see the query history for queries that have been run in that worksheet. See View Query History.
This section provides a brief overview of how to navigate and use the Query History page.
Use the filters, located in the upper-right corner of the page, to filter the table. You can filter by:
Use Filters, located in the upper-right corner of the page, to set additional filters for queries. You can filter by:
Use Columns, located in the upper-right corner of the page, to add or remove columns from the table.
In the upper-right corner of the page, turn Auto Refresh on. When Auto Refresh is on, the table refreshes every ten seconds.
The queries table includes a default set of columns; you can add additional columns.
To view additional details about a query, select a query in the table.
When you select a query in the queries table, the query page appears.
This section provides a brief overview of how to navigate and use the query page.
Basic details about the query are located at the top of the page.
The query page includes the following tabs:
The Query Details tab, the default tab for the query page, provides the results of the query along with additional information. It includes the following sections:
The Details section provides additional information about the query execution, including:
The status of the query.
When the query started.
When the query ended.
The duration of the query.
The query ID.
The session ID.
The driver status.
The client driver.
The SQL Text section provides the actual text of the query. Hover over the SQL text to open the text in a worksheet or copy the text.
The Results section provides the results of the query. Select Export Query to export the results as a .csv file.
The Query Profile tab lets you explore the query execution plan and understand granular details about each step of execution.
The query profile is a powerful tool for understanding the mechanics of queries. Use it whenever you need to know more about the performance or behavior of a particular query. It is designed to help you spot typical mistakes in SQL query expressions and identify potential performance bottlenecks and improvement opportunities.
This section provides a brief overview of how to navigate and use the query profile.
The query execution plan appears at the center of the query profile.
The query execution plan is composed of operator nodes, which represent rowset operators.
Arrows between operator nodes indicate the rowsets that flow out of one operator and into another.
Each operator node includes the following:
In the upper-left corner of the query profile, use the buttons to:
Steps only appear if the query was executed in steps.
The query profile provides various information panes alongside the query execution plan. The panes that appear depend on the focus of the query execution plan.
The query profile includes the following information panes:
To learn more about the information provided by the panes, see Query History Reference.
Query History Reference¶
This section describes all items that can appear in each information pane. The exact content of the information panes depends on the context of the query execution plan.
The Profile Overview pane provides information about which processing tasks consumed query time. Execution time provides information about “where the time was spent” during the processing of a query. Time spent can be broken down into the following categories:
Processing — time spent on data processing by the CPU.
Local Disk IO — time when the processing was blocked by local disk access.
Remote Disk IO — time when the processing was blocked by remote disk access.
Network Communication — time when the processing was waiting for the network data transfer.
Synchronization — various synchronization activities between participating processes.
Initialization — time spent setting up the query processing.
A major source of information provided in the detail pane is the various statistics, grouped in the following sections:
IO — information about the input-output operations performed during the query:
Scan progress — the percentage of data scanned for a given table so far.
Bytes scanned — the number of bytes scanned so far.
Percentage scanned from cache — the percentage of data scanned from the local disk cache.
Bytes written — bytes written (e.g. when loading into a table).
Bytes written to result — bytes written to the result object. For example, select * from ... would produce a set of results in tabular format representing each field in the selection. In general, the results object represents whatever is produced as a result of the query, and Bytes written to result represents the size of the returned result.
Bytes read from result — bytes read from the result object.
External bytes scanned — bytes read from an external object, e.g. a stage.
DML — statistics for Data Manipulation Language (DML) queries:
Number of rows inserted — number of rows inserted into a table (or tables).
Number of rows updated — number of rows updated in a table.
Number of rows deleted — number of rows deleted from a table.
Number of rows unloaded — number of rows unloaded during data export.
Pruning — information on the effects of table pruning:
Partitions scanned — number of partitions scanned so far.
Partitions total — total number of partitions in a given table.
Spilling — information about disk usage for operations where intermediate results do not fit in memory:
Bytes spilled to local storage — volume of data spilled to local disk.
Bytes spilled to remote storage — volume of data spilled to remote disk.
Network — network communication:
Bytes sent over the network — amount of data sent over the network.
External Functions — information about calls to external functions:
The following statistics are shown for each external function called by the SQL statement. If the same function was called more than once from the same SQL statement, then the statistics are aggregated.
Total invocations — number of times that an external function was called. (This can be different from the number of external function calls in the text of the SQL statement due to the number of batches that rows are divided into, the number of retries (if there are transient network problems), etc.)
Rows sent — number of rows sent to external functions.
Rows received — number of rows received back from external functions.
Bytes sent (x-region) — number of bytes sent to external functions. If the label includes “(x-region)”, the data was sent across regions (which can impact billing).
Bytes received (x-region) — number of bytes received from external functions. If the label includes “(x-region)”, the data was received across regions (which can impact billing).
Retries due to transient errors — number of retries due to transient errors.
Average latency per call — average amount of time per invocation (call) between the time Snowflake sent the data and received the returned data.
HTTP 4xx errors — total number of HTTP requests that returned a 4xx status code.
HTTP 5xx errors — total number of HTTP requests that returned a 5xx status code.
Latency per successful call (avg) — average latency for successful HTTP requests.
Avg throttle latency overhead — average overhead per successful request due to a slowdown caused by throttling (HTTP 429).
Batches retried due to throttling — number of batches that were retried due to HTTP 429 errors.
Latency per successful call (P50) — 50th percentile latency for successful HTTP requests. 50 percent of all successful requests took less than this time to complete.
Latency per successful call (P90) — 90th percentile latency for successful HTTP requests. 90 percent of all successful requests took less than this time to complete.
Latency per successful call (P95) — 95th percentile latency for successful HTTP requests. 95 percent of all successful requests took less than this time to complete.
Latency per successful call (P99) — 99th percentile latency for successful HTTP requests. 99 percent of all successful requests took less than this time to complete.
Extension Functions — information about calls to extension functions:
Java UDF handler load time — amount of time for the Java UDF handler to load.
Total Java UDF handler invocations — number of times the Java UDF handler is invoked.
Max Java UDF handler execution time — maximum amount of time for the Java UDF handler to execute.
Avg Java UDF handler execution time — average amount of time to execute the Java UDF handler.
Java UDTF process() invocations — number of times the Java UDTF process method was invoked.
Java UDTF process() execution time — amount of time to execute the Java UDTF process.
Avg Java UDTF process() execution time — average amount of time to execute the Java UDTF process.
Java UDTF’s constructor invocations — number of times the Java UDTF constructor was invoked.
Java UDTF’s constructor execution time — amount of time to execute the Java UDTF constructor.
Avg Java UDTF’s constructor execution time — average amount of time to execute the Java UDTF constructor.
Java UDTF endPartition() invocations — number of times the Java UDTF endPartition method was invoked.
Java UDTF endPartition() execution time — amount of time to execute the Java UDTF endPartition method.
Avg Java UDTF endPartition() execution time — average amount of time to execute the Java UDTF endPartition method.
Max Java UDF dependency download time — maximum amount of time to download the Java UDF dependencies.
Max JVM memory usage — peak memory usage as reported by the JVM.
Java UDF inline code compile time in ms — compile time for the Java UDF inline code.
Total Python UDF handler invocations — number of times the Python UDF handler was invoked.
Total Python UDF handler execution time — total execution time for Python UDF handler.
Avg Python UDF handler execution time — average amount of time to execute the Python UDF handler.
Python sandbox max memory usage — peak memory usage by the Python sandbox environment.
Avg Python env creation time: Download and install packages — average amount of time to create the Python environment, including downloading and installing packages.
Conda solver time — amount of time to run the Conda solver to resolve Python packages.
Conda env creation time — amount of time to create the Python environment.
Python UDF initialization time — amount of time to initialize the Python UDF.
Number of external file bytes read for UDFs — number of external file bytes read for UDFs.
Number of external files accessed for UDFs — number of external files accessed for UDFs.
If the value of a field, for example “Retries due to transient errors”, is zero, then the field is not displayed.
Most Expensive Nodes¶
The pane lists all nodes that lasted for 1% or longer of the total execution time of the query (or the execution time for the displayed query step, if the query was executed in multiple processing steps). The pane lists nodes by execution time in descending order, enabling users to quickly locate the costliest operator nodes in terms of execution time.
The following sections provide a list of the most common operator types and their attributes:
Data Access and Generation Operators¶
TableScan — represents access to a single table. Attributes:
Full table name — the name of the accessed table, including database and schema.
Columns — list of scanned columns.
Table alias — used table alias, if present.
Extracted Variant paths — list of paths extracted from VARIANT columns.
List of values provided with the VALUES clause. Attributes:
Number of values — the number of produced values.
Values — the list of produced values.
Generates records using the TABLE(GENERATOR(...)) construct. Attributes:
rowCount — provided rowCount parameter.
timeLimit — provided timeLimit parameter.
Represents access to data stored in stage objects. Can be a part of queries that scan data from stages directly, but also of data loading operations (i.e., COPY statements). Attributes:
Stage name — the name of the stage where the data is read from.
Stage type — the type of the stage (e.g. TABLE STAGE).
Represents access to an internal data object (e.g. an Information Schema table or the result of a previous query). Attributes:
Object Name — the name or type of the accessed object.
Data Processing Operators¶
Filter — represents an operation that filters the records. Attributes:
Filter condition - the condition used to perform filtering.
Join — combines two inputs on a given condition. Attributes:
Join Type — Type of join (e.g. INNER, LEFT OUTER, etc.).
Equality Join Condition — for joins which use equality-based conditions, it lists the expressions used for joining elements.
Additional Join Condition — some joins use conditions containing non-equality based predicates. They are listed here.
Non-equality join predicates might result in significantly slower processing speeds and should be avoided if possible.
Aggregate — groups input and computes aggregate functions. Can represent SQL constructs such as GROUP BY, as well as SELECT DISTINCT. Attributes:
Grouping Keys — if GROUP BY is used, this lists the expressions we group by.
Aggregate Functions — list of functions computed for each aggregate group, e.g. SUM.
Represents constructs such as GROUPING SETS, ROLLUP and CUBE. Attributes:
Grouping Key Sets — list of grouping sets.
Aggregate Functions — list of functions computed for each group, e.g. SUM.
Computes window functions. Attributes:
Window Functions — list of window functions computed.
Orders input on a given expression. Attributes:
Sort keys — expression defining the sorting order.
Produces a part of the input sequence after sorting, typically the result of an ORDER BY ... LIMIT ... OFFSET ... construct in SQL. Attributes:
Sort keys — expression defining the sorting order.
Number of rows — number of rows produced.
Offset — position in the ordered sequence from which produced tuples are emitted.
Processes VARIANT records, possibly flattening them on a specified path. Attributes:
input — the input expression used to flatten the data.
Special filtering operation that removes tuples that can be identified as not possibly matching the condition of a Join further in the query plan. Attributes:
Original join ID — the join used to identify tuples that can be filtered out.
UnionAll — concatenates two inputs. Attributes: none.
Represents processing by an external function.
Adds records to a table either through an INSERT or COPY operation. Attributes:
Input expressions — which expressions are inserted.
Table names — names of tables that records are added to.
Removes records from a table. Attributes:
Table name — the name of the table that records are deleted from.
Updates records in a table. Attributes:
Table name — the name of the updated table.
Performs a MERGE operation on a table. Attributes:
Full table name — the name of the updated table.
Represents a COPY operation that exports data from a table into a file in a stage. Attributes:
Location - the name of the stage where the data is saved.
Some queries include steps that are pure metadata/catalog operations rather than data-processing operations. These steps consist of a single operator. Some examples include:
- DDL and Transaction Commands
Used for creating or modifying objects, sessions, transactions, etc. Typically, these queries are not processed by a virtual warehouse and result in a single-step profile that corresponds to the matching SQL statement. For example:
CREATE DATABASE | SCHEMA | …
ALTER DATABASE | SCHEMA | TABLE | SESSION | …
DROP DATABASE | SCHEMA | TABLE | …
- Table Creation Command
DDL command for creating a table, for example CREATE TABLE.
Similar to other DDL commands, these queries result in a single-step profile; however, they can also be part of a multi-step profile, such as when used in a CTAS statement. For example:
CREATE TABLE … AS SELECT …
- Query Result Reuse
A query that reuses the result of a previous query.
- Metadata-based Result
A query whose result is computed based purely on metadata, without accessing any data. These queries are not processed by a virtual warehouse. For example:
SELECT COUNT(*) FROM …
Returns the query result. Attributes:
List of expressions - the expressions produced.
Common Query Problems Identified by Query Profile¶
This section describes some of the problems you can identify and troubleshoot using Query Profile.
“Exploding” Joins¶
One of the common mistakes SQL users make is joining tables without providing a join condition (resulting in a “Cartesian product”), or providing a condition where records from one table match multiple records from another table. For such queries, the Join operator produces significantly (often by orders of magnitude) more tuples than it consumes.
This can be observed by looking at the number of records produced by a Join operator, and typically is also reflected in Join operator consuming a lot of time.
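As an illustrative sketch (the orders and customers tables are hypothetical), the first query below accidentally produces a Cartesian product, while the second provides a proper join condition:

```sql
-- Hypothetical tables: orders and customers.
-- Missing join condition: every order is paired with every customer.
SELECT o.order_id, c.customer_name
FROM orders o, customers c;

-- Correct: each order is matched to its own customer.
SELECT o.order_id, c.customer_name
FROM orders o
JOIN customers c ON o.customer_id = c.customer_id;
```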
UNION Without ALL¶
In SQL, it is possible to combine two sets of data with either UNION or UNION ALL constructs. The difference between them is that UNION ALL simply concatenates inputs, while UNION does the same, but also performs duplicate elimination.
A common mistake is to use UNION when the UNION ALL semantics are sufficient. These queries appear in Query Profile as a UnionAll operator with an extra Aggregate operator on top (which performs duplicate elimination).
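For example (the tables are hypothetical):

```sql
-- UNION eliminates duplicates, which adds a duplicate-elimination
-- (Aggregate) step on top of the concatenation.
SELECT id FROM table_a
UNION
SELECT id FROM table_b;

-- If duplicates are acceptable (or cannot occur), UNION ALL
-- skips the duplicate-elimination step entirely.
SELECT id FROM table_a
UNION ALL
SELECT id FROM table_b;
```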
Queries Too Large to Fit in Memory¶
For some operations (e.g. duplicate elimination for a huge data set), the amount of memory available for the servers used to execute the operation might not be sufficient to hold intermediate results. As a result, the query processing engine will start spilling the data to local disk. If the local disk space is not sufficient, the spilled data is then saved to remote disks.
This spilling can have a profound effect on query performance (especially if remote disk is used for spilling). To alleviate this, we recommend:
Using a larger warehouse (effectively increasing the available memory/local disk space for the operation), and/or
Processing data in smaller batches.
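As a sketch (the warehouse name is hypothetical, and the Account Usage view requires access to the SNOWFLAKE database), you can locate recent spilling queries and resize a warehouse as follows:

```sql
-- Find recent queries that spilled to local or remote storage.
SELECT query_id,
       bytes_spilled_to_local_storage,
       bytes_spilled_to_remote_storage
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
  AND (bytes_spilled_to_local_storage > 0
       OR bytes_spilled_to_remote_storage > 0)
ORDER BY bytes_spilled_to_remote_storage DESC;

-- Increase available memory by resizing the warehouse
-- (hypothetical warehouse name).
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'LARGE';
```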
Inefficient Pruning¶
Snowflake collects rich statistics on data, allowing it to avoid reading unnecessary parts of a table based on the query filters. However, for this to have an effect, the data storage order needs to be correlated with the query filter attributes.
The efficiency of pruning can be observed by comparing Partitions scanned and Partitions total statistics in the TableScan operators. If the former is a small fraction of the latter, pruning is efficient. If not, the pruning did not have an effect.
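The same comparison can be made in bulk. For example, a query along these lines (a sketch, assuming access to the Account Usage schema) surfaces recent queries with poor pruning:

```sql
-- Queries where pruning was least effective:
-- a ratio near 1 means almost every partition was scanned.
SELECT query_id,
       partitions_scanned,
       partitions_total,
       partitions_scanned / NULLIF(partitions_total, 0) AS scan_ratio
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
  AND partitions_total > 0
ORDER BY scan_ratio DESC;
```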
Of course, pruning can only help for queries that actually filter out a significant amount of data. If the pruning statistics do not show data reduction, but there is a Filter operator above TableScan which filters out a number of records, this might signal that a different data organization might be beneficial for this query.
For more information about pruning, see Understanding Snowflake Table Structures.
The Copy History page lets you view data loading activity that has occurred over the last 365 days for all tables in your account. The Copy History page provides a detailed table of data loads for your tables.
The Copy History page includes the following:
The Copies Over Time graph.
The Copies table.
The activity on the Copy History page includes bulk data loading performed using COPY INTO and continuous data loading using pipes.
The data loading history comes from the COPY_HISTORY Account Usage view. This view requires access to the Snowflake database. For more information, see Enabling Account Usage for Other Roles.
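For example, the following sketch queries the view directly (assuming your role has the required access to the SNOWFLAKE database):

```sql
-- Recent load activity across all tables in the account.
SELECT file_name,
       table_name,
       status,
       row_count,
       first_error_message,
       last_load_time
FROM snowflake.account_usage.copy_history
WHERE last_load_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
ORDER BY last_load_time DESC;
```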
This section provides a brief overview of how to navigate and use the Copy History page.
The Copies Over Time graph provides a visualization of data loading over a given period of time. By default, the graph shows a 28-day history.
The bars on the graph represent days. Select a bar on the graph to filter the Copies table by that day.
In the upper-right corner of the page, filter the Copies Over Time graph and the Copies table. You can filter by:
The Copy History page requires a warehouse. Snowsight uses the default warehouse for your user profile.
If you can’t see the expected content, or want to switch warehouses, in the upper-right corner of the page, select a warehouse.
In the upper-right corner of the Copies table, search the content of the following columns:
In the upper-right corner of the Copies table, open a worksheet that contains the SQL query used to populate the Copy History page.
The SQL query is based on the filters you select.
In each row of the Copies table, copy the path to the stage for the corresponding data load.
The Copies table includes the following columns:
The Copies table provides error information if a data load fails. To understand why a data load failed, in the status column, hover over the Failed status for the data load.
To view the copy history for a specific table, select a data load in the Copies table. You are taken to the copy history for the table that was the target of the data load.