Analyzing the Query History

The Query History page enables users to view and drill into the details of queries executed in the last 14 days.

Accessing the Query History

Navigation:

Monitor » Query History

A user can also access the Query History page from the Query Details pane for a query in a Snowsight worksheet. Click the actions (…) button » View in Query History.

Exploring the Query History

The Query History page has the following features:

Query History page
  1. Count of queries that match the current filters.

  2. Controls to manage the display:

    • Drop-down lists to filter the query history by status, user, and additional filters.

    • Drop-down list to add or remove columns from the query history table.

    • Search box and drop-down list to change the active role for viewing the query history.

    • Refresh button to update the table with queries that were initiated or changed status since the table was last refreshed.

  3. Lists of currently active warehouses, failed queries, and long-running queries that adhere to the current filters and active role.

  4. Table of queries and other SQL statements initiated in the last 14 days.

    Add or remove columns by clicking on the Columns button in the upper-right corner of the page and choosing the columns to display.

    Click any row to view the Query Profile for the query.
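
In addition to the page described above, a similar query-level history is available in SQL through the INFORMATION_SCHEMA.QUERY_HISTORY table function (its retention period and output columns differ from this page). A minimal sketch, assuming the active role has access to the queries:

SELECT query_id, query_text, execution_status, total_elapsed_time
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(
    END_TIME_RANGE_START => DATEADD('hours', -1, CURRENT_TIMESTAMP()),
    RESULT_LIMIT => 100))
ORDER BY start_time DESC;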

Using the Query Profile

Navigation:

Monitor » Query History » Click on the row for the query you are interested in.

Query Profile provides a graphical representation of the main components of the processing plan for a selected query, with statistics for each component, along with details and statistics for the overall query.

Query Profile is a powerful tool for understanding the mechanics of queries. It can be used whenever you need to know more about the performance or behavior of a particular query. It is designed to help you spot typical mistakes in SQL query expressions and identify potential performance bottlenecks and improvement opportunities.

Query Profile Interface

This topic refers to the following SQL query example, which joins two tables:

SELECT sum(j)
FROM x JOIN y USING (i)
WHERE j > 300
  AND i < (SELECT AVG(j) FROM x);

The following screenshot shows the profile for this query:

Step 1

The interface consists of the following main elements:

Steps

If the query was processed in multiple steps, you can toggle between the steps.

Operator tree

The middle pane displays a graphical representation of all the operator nodes for the selected step, including the relationships between each operator node.

Query details and profile overview

The panes on the right side of the page display an overview of the query profile, including a set of execution statistics and metrics. The display changes to operator details when an operator node is selected.

Steps

Queries are often processed in multiple steps. For example, our sample query was processed in 2 steps:

  • Step 1 computed the average of column x.j.

  • Step 2 used this intermediate result to compute the final query result.

Query Profile displays each processing step in a separate pane. You can switch between panes by clicking the respective step. For our sample query, clicking Step 2 changes the view to:

Step 2

Query Execution Plan

The tree provides a graphical representation of the query execution plan. The execution plan is composed of objects that represent rowset operators. Arrows between the operators indicate the rowsets that flow out of one operator and into another.

  • Rowset operators are the functional building blocks of a query. They are responsible for different aspects of data management and processing, including data access, transformations, and updates.

    Each operator node in the tree includes some basic attributes:

    <Type> [#]

    Operator type and ID number. ID can be used to uniquely identify an operator within a query profile (e.g. Aggregate [5] and Join [11] in the screenshot above).

    For descriptions of all the types, see Attributes below.

    Percentage

    Fraction of time that this operator consumed within the query step (e.g. 25% for Aggregate [5]). This information is also reflected in the orange bar at the bottom of the operator node, allowing for easy visual identification of performance-critical operators.

    Label

    Operator-specific additional information (e.g. SUM(X.J) for Aggregate [5]).

  • Arrows represent the data flowing between each operator node. Each arrow provides the number of records that were processed (e.g. 4 from Join [11] to Aggregate [5]).

Query Details

The pane provides detailed information about the query, including its status and duration, the number of rows returned, as well as the executing user and warehouse information.

Most Expensive Nodes

The pane lists all nodes that accounted for 1% or more of the total execution time of the query (or of the displayed query step, if the query was executed in multiple processing steps). Nodes are listed by execution time in descending order, enabling users to quickly locate the costliest operator nodes.

Profile Overview

The pane provides information about which processing tasks consumed query time, showing “where the time was spent” during the processing of a query. Time spent can be broken down into the following categories:

  • Processing — time spent on data processing by the CPU.

  • Local Disk IO — time when the processing was blocked by local disk access.

  • Remote Disk IO — time when the processing was blocked by remote disk access.

  • Network Communication — time when the processing was waiting for network data transfer.

  • Synchronization — various synchronization activities between participating processes.

  • Initialization — time spent setting up the query processing.

Statistics

A major source of information in the detail pane is the various statistics, grouped in the following sections:

  • IO — information about the input-output operations performed during the query:

    • Scan progress — the percentage of data scanned for a given table so far.

    • Bytes scanned — the number of bytes scanned so far.

    • Percentage scanned from cache — the percentage of data scanned from the local disk cache.

    • Bytes written — bytes written (e.g. when loading into a table).

    • Bytes written to result — bytes written to a result object.

    • Bytes read from result — bytes read from a result object.

    • External bytes scanned — bytes read from an external object, e.g. a stage.

  • DML — statistics for Data Manipulation Language (DML) queries:

    • Number of rows inserted — number of rows inserted into a table (or tables).

    • Number of rows updated — number of rows updated in a table.

    • Number of rows deleted — number of rows deleted from a table.

    • Number of rows unloaded — number of rows unloaded during data export.

    • Number of bytes deleted — number of bytes deleted from a table.

  • Pruning — information on the effects of table pruning:

    • Partitions scanned — number of partitions scanned so far.

    • Partitions total — total number of partitions in a given table.

  • Spilling — information about disk usage for operations where intermediate results do not fit in memory:

    • Bytes spilled to local storage — volume of data spilled to local disk.

    • Bytes spilled to remote storage — volume of data spilled to remote disk.

  • Network — network communication:

    • Bytes sent over the network — amount of data sent over the network.

  • External Functions — information about calls to external functions:

    The following statistics are shown for each external function called by the SQL statement. If the same function was called more than once from the same SQL statement, then the statistics are aggregated.

    • Total invocations — The number of times that an external function was called. (This can be different from the number of external function calls in the text of the SQL statement due to the number of batches that rows are divided into, the number of retries (if there are transient network problems), etc.)

    • Rows sent — The number of rows sent to external functions.

    • Rows received — The number of rows received back from external functions.

    • Bytes sent (x-region) — The number of bytes sent to external functions. If the label includes “(x-region)”, the data was sent across regions (which can impact billing).

    • Bytes received (x-region) — The number of bytes received from external functions. If the label includes “(x-region)”, the data was sent across regions (which can impact billing).

    • Retries due to transient errors — The number of retries due to transient errors.

    • Average latency per call — The average amount of time per invocation (call) between the time Snowflake sent the data and received the returned data.

    If the value of a field, for example “Retries due to transient errors”, is zero, then the field is not displayed.

Attributes

The following sections provide a list of the most common operator types and their attributes:

Data Access and Generation Operators

TableScan

Represents access to a single table. Attributes:

  • Full table name — the name of the accessed table, including database and schema.

  • Columns — list of scanned columns.

  • Table alias — used table alias, if present.

  • Extracted Variant paths — list of paths extracted from VARIANT columns.

ValuesClause

List of values provided with the VALUES clause. Attributes:

  • Number of values — the number of produced values.

  • Values — the list of produced values.
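
For illustration, a query whose profile would include a ValuesClause operator (the column names are arbitrary):

SELECT v.i, v.c
FROM (VALUES (1, 'a'), (2, 'b'), (3, 'c')) AS v (i, c);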

Generator

Generates records using the TABLE(GENERATOR(...)) construct. Attributes:

  • rowCount — provided rowCount parameter.

  • timeLimit — provided timeLimit parameter.
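
For illustration, a query that would produce a Generator operator with a rowCount of 10:

SELECT SEQ4() AS n
FROM TABLE(GENERATOR(ROWCOUNT => 10));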

ExternalScan

Represents access to data stored in stage objects. Can be part of queries that scan data directly from stages, but also of data loading operations (i.e. COPY statements).

Attributes:

  • Stage name — the name of the stage where the data is read from.

  • Stage type — the type of the stage (e.g. TABLE STAGE).
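
For illustration, a query that scans staged data directly and would therefore include an ExternalScan operator (the stage and file format names are hypothetical):

SELECT $1, $2
FROM @my_stage/data.csv (FILE_FORMAT => 'my_csv_format');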

InternalObject

Represents access to an internal data object (e.g. an Information Schema table or the result of a previous query). Attributes:

  • Object Name — the name or type of the accessed object.

Data Processing Operators

Filter

Represents an operation that filters the records. Attributes:

  • Filter condition — the condition used to perform filtering.

Join

Combines two inputs on a given condition. Attributes:

  • Join Type — Type of join (e.g. INNER, LEFT OUTER, etc.).

  • Equality Join Condition — for joins which use equality-based conditions, it lists the expressions used for joining elements.

  • Additional Join Condition — some joins use conditions containing non-equality based predicates. They are listed here.

Note

Non-equality join predicates might result in significantly slower processing speeds and should be avoided if possible.
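
For illustration, a join with both kinds of conditions (the table and column names are hypothetical):

SELECT o.id, r.rate
FROM orders o
JOIN rates r
  ON o.currency = r.currency          -- listed under Equality Join Condition
  AND o.order_date >= r.valid_from;   -- listed under Additional Join Condition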

Aggregate

Groups input and computes aggregate functions. Can represent SQL constructs such as GROUP BY, as well as SELECT DISTINCT. Attributes:

  • Grouping Keys — if GROUP BY is used, this lists the expressions we group by.

  • Aggregate Functions — list of functions computed for each aggregate group, e.g. SUM.

GroupingSets

Represents constructs such as GROUPING SETS, ROLLUP and CUBE. Attributes:

  • Grouping Key Sets — list of grouping sets.

  • Aggregate Functions — list of functions computed for each group, e.g. SUM.
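
For illustration, a query whose profile would include a GroupingSets operator (the table and column names are hypothetical):

SELECT region, product, SUM(amount)
FROM sales
GROUP BY ROLLUP (region, product);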

WindowFunction

Computes window functions. Attributes:

  • Window Functions — list of window functions computed.
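
For illustration, a query over the sample table x that would produce a WindowFunction operator:

SELECT i, j,
       SUM(j) OVER (PARTITION BY i ORDER BY j) AS running_sum
FROM x;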

Sort

Orders input on a given expression. Attributes:

  • Sort keys — expression defining the sorting order.

SortWithLimit

Produces a part of the input sequence after sorting, typically a result of an ORDER BY ... LIMIT ... OFFSET ... construct in SQL.

Attributes:

  • Sort keys — expression defining the sorting order.

  • Number of rows — number of rows produced.

  • Offset — position in the ordered sequence from which produced tuples are emitted.
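
For illustration, a query over the sample table x that would produce a SortWithLimit operator with Sort keys J DESC, Number of rows 10, and Offset 5:

SELECT i, j
FROM x
ORDER BY j DESC
LIMIT 10 OFFSET 5;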

Flatten

Processes VARIANT records, possibly flattening them on a specified path. Attributes:

  • input — the input expression used to flatten the data.
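
For illustration, a query that would produce a Flatten operator (the events table and its VARIANT column payload are hypothetical):

SELECT e.id, f.value
FROM events e,
     LATERAL FLATTEN(INPUT => e.payload:items) f;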

JoinFilter

Special filtering operation that removes tuples that can be identified as not possibly matching the condition of a Join further in the query plan. Attributes:

  • Original join ID — the join used to identify tuples that can be filtered out.

UnionAll

Concatenates two inputs. Attributes: none.

ExternalFunction

Represents processing by an external function.

DML Operators

Insert

Adds records to a table either through an INSERT or COPY operation. Attributes:

  • Input expressions — which expressions are inserted.

  • Table names — names of tables that records are added to.

Delete

Removes records from a table. Attributes:

  • Table name — the name of the table that records are deleted from.

Update

Updates records in a table. Attributes:

  • Table name — the name of the updated table.

Merge

Performs a MERGE operation on a table. Attributes:

  • Full table name — the name of the updated table.

Unload

Represents a COPY operation that exports data from a table into a file in a stage. Attributes:

  • Location — the name of the stage where the data is saved.
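
For illustration, a COPY statement that would produce an Unload operator (the stage and table names are hypothetical):

COPY INTO @my_stage/export/
FROM my_table
FILE_FORMAT = (TYPE = CSV);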

Metadata Operators

Some queries include steps that are pure metadata/catalog operations rather than data-processing operations. These steps consist of a single operator. Some examples include:

DDL and Transaction Commands

Used for creating or modifying objects, sessions, transactions, etc. Typically, these queries are not processed by a virtual warehouse and result in a single-step profile that corresponds to the matching SQL statement. For example:

CREATE DATABASE | SCHEMA | …

ALTER DATABASE | SCHEMA | TABLE | SESSION | …

DROP DATABASE | SCHEMA | TABLE | …

COMMIT

Table Creation Command

DDL command for creating a table. For example:

CREATE TABLE

Similar to other DDL commands, these queries result in a single-step profile; however, they can also be part of a multi-step profile, such as when used in a CTAS statement. For example:

CREATE TABLE … AS SELECT …

Query Result Reuse

A query that reuses the result of a previous query.

Metadata-based Result

A query whose result is computed based purely on metadata, without accessing any data. These queries are not processed by a virtual warehouse. For example:

SELECT COUNT(*) FROM …

SELECT CURRENT_DATABASE()

Miscellaneous Operators

Result

Returns the query result. Attributes:

  • List of expressions — the expressions produced.

Common Query Problems Identified by Query Profile

This section describes some of the problems you can identify and troubleshoot using Query Profile.

“Exploding” Joins

One of the common mistakes SQL users make is joining tables without providing a join condition (resulting in a “Cartesian Product”), or providing a condition where records from one table match multiple records from another table. For such queries, the Join operator produces significantly (often by orders of magnitude) more tuples than it consumes.

This can be observed by looking at the number of records produced by a Join operator, and is typically also reflected in the Join operator consuming a lot of time.

The following example shows input in the hundreds of records but output in the hundreds of thousands:

SELECT tt1.c1, tt1.c2
FROM tt1
JOIN tt2 ON tt1.c1 = tt2.c1
 AND tt1.c2 = tt2.c2;

UNION Without ALL

In SQL, it is possible to combine two sets of data with either UNION or UNION ALL constructs. The difference between them is that UNION ALL simply concatenates inputs, while UNION does the same, but also performs duplicate elimination.

A common mistake is to use UNION when the UNION ALL semantics are sufficient. These queries show in Query Profile as a UnionAll operator with an extra Aggregate operator on top (which performs duplicate elimination).
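
For illustration, the two variants over the sample tables, depending on whether duplicates must be removed (UNION) or cannot occur in the first place (UNION ALL):

-- UnionAll operator plus an extra Aggregate operator for duplicate elimination:
SELECT i FROM x
UNION
SELECT i FROM y;

-- UnionAll operator only:
SELECT i FROM x
UNION ALL
SELECT i FROM y;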

Queries Too Large to Fit in Memory

For some operations (e.g. duplicate elimination for a huge data set), the amount of memory available for the servers used to execute the operation might not be sufficient to hold intermediate results. As a result, the query processing engine will start spilling the data to local disk. If the local disk space is not sufficient, the spilled data is then saved to remote disks.

This spilling can have a profound effect on query performance (especially if remote disk is used for spilling). To alleviate this, we recommend:

  • Using a larger warehouse (effectively increasing the available memory/local disk space for the operation), and/or

  • Processing data in smaller batches.
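
For the first recommendation, a minimal sketch (the warehouse name is hypothetical; resizing also affects credit consumption):

ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'LARGE';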

Inefficient Pruning

Snowflake collects rich statistics on data, allowing it to avoid reading unnecessary parts of a table based on the query filters. However, for this to have an effect, the data storage order needs to be correlated with the query filter attributes.

The efficiency of pruning can be observed by comparing Partitions scanned and Partitions total statistics in the TableScan operators. If the former is a small fraction of the latter, pruning is efficient. If not, the pruning did not have an effect.

Of course, pruning can only help for queries that actually filter out a significant amount of data. If the pruning statistics do not show data reduction, but there is a Filter operator above TableScan that filters out a number of records, this might signal that a different data organization would be beneficial for this query, as sketched below.
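
One way to correlate the storage order with the filter attributes is to define a clustering key on the filtered columns. A minimal sketch, assuming a hypothetical sales table that is frequently filtered by sale_date:

ALTER TABLE sales CLUSTER BY (sale_date);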

For more information about pruning, see Understanding Snowflake Table Structures.