Analyzing Queries Using Query Profile

Query Profile, available through the Classic Console, provides execution details for a query. For the selected query, it provides a graphical representation of the main components of the processing plan for the query, with statistics for each component, along with details and statistics for the overall query.

Introduction

When to Use Query Profile

Query Profile is a powerful tool for understanding the mechanics of queries. It can be used whenever you want or need to know more about the performance or behavior of a particular query. It is designed to help you spot typical mistakes in SQL query expressions and identify potential performance bottlenecks and improvement opportunities.

Additional high-level information about individual queries can be viewed in various columns on the Worksheets and History pages.

How to Access Query Profile

Query Profile is accessed from the detail page for a query. As such, you can access Query Profile from any page where the Query ID column is displayed and query IDs can be clicked on, specifically:

  • Worksheets

  • History

Note

If the Query ID column is not displayed on these pages, click the dropdown next to one of the column headers on the page and, in the list of Columns, select Query ID.

To access the profile for a query:

  1. On the History or Worksheets page, click a query ID.

  2. The detail page for the query is displayed:

    [Screenshot: detail page for a query]
  3. Click the Profile tab.

  4. If the query has a profile, it is displayed (see screenshots below).

Query Profile Interface

For the purpose of this topic, we are using a basic sample SQL query that joins two tables:

select sum(j)
from x join y using (i)
where j > 300
and i < (select avg(j) from x);

The following screenshot shows the profile for this query:

[Screenshot: query profile for the sample query, Step 1]

The interface consists of the following main elements:

Steps:

If the query was processed in multiple steps, you can toggle between each step.

Operator tree:

The middle pane displays a graphical representation of all the operator nodes for the selected step, including the relationships between each operator node.

Node list:

The middle pane includes a collapsible list of operator nodes by execution time.

Overview:

The right pane displays an overview of the query profile. The display changes to operator details when an operator node is selected.

Steps

Queries are often processed in multiple steps. For example, our sample query was processed in 2 steps:

  • Step 1 computed the average of column x.j.

  • Step 2 used this intermediate result to compute the final query result.

Query Profile displays each processing step in a separate panel. You can switch between panels by clicking the respective step. For our sample query, clicking Step 2 changes the view to:

[Screenshot: query profile for the sample query, Step 2]

Operator Tree

The tree provides a graphical representation of the operator nodes that comprise a query and the links that connect each operator:

  • Operators are the functional building blocks of a query. They are responsible for different aspects of data management and processing, including data access, transformations and updates. Each operator node in the tree includes some basic attributes:

    <Type> [#]:

    Operator type and ID number. ID can be used to uniquely identify an operator within a query profile (e.g. Aggregate [5] and Join [11] in the screenshot above).

    For descriptions of all the types, see Operator Types below.

    Percentage:

    Fraction of time that this operator consumed within the query step (e.g. 25% for Aggregate [5]). This information is also reflected in the orange bar at the bottom of the operator node, allowing for easy visual identification of performance-critical operators.

    Label:

    Operator-specific additional information (e.g. SUM(X.J) for Aggregate [5]).

  • Links represent the data flowing between each operator node. Each link provides the number of records that were processed (e.g. 41.95M from Join [11] to Aggregate [5]).

Note

If you use a touch screen and the operator tree is not displayed, the touch events interface for your touch screen might be interfering. For instructions to temporarily disable the interface, refer to the discussion in the Snowflake Community forum: Query profile window shows the overview, but does not show the query tree.

Operator Nodes by Execution Time

A collapsible panel in the operator tree pane lists nodes by execution time in descending order, enabling users to quickly locate the costliest operator nodes. The panel lists all nodes that accounted for 1% or more of the total execution time of the query (or of the displayed query step, if the query was executed in multiple processing steps).

Clicking on a node in the list centers the operator tree on the selected node.

The following screenshot shows the panel after clicking the Aggregate [5] operator:

[Screenshot: operator nodes by execution time panel]

Profile Overview / Operator Details

The overview/detail pane on the right provides information about the selected components (operators and links) in the tree on the left. The information displayed depends on whether a node in the operator tree is selected:

  • Initially, no node in the tree is selected, so the panel shows overview information for the current step.

  • When a node is selected by clicking on it, the panel shows information for that component.

Note

After clicking on a node, to return to the step-level overview information, simply deselect the node by clicking on any empty space around the operator tree.

The overview/detail pane is divided into 3 sections:

Execution Time:

Provides information about which processing tasks consumed query time (described in Query/Operator Details below). Additionally, for step-level information, it shows the state of the given step, and its execution time.

Statistics:

Provides detailed information about various statistics (described in Query/Operator Details below).

Attributes:

Provides component-specific information (described in Operator Types below).

The following screenshot shows the details after clicking the Join [11] operator:

[Screenshot: operator details for Join [11]]

Operator Types

The following sections provide a list of the most common operator types and their attributes.

Data Access and Generation Operators

TableScan:

Represents access to a single table. Attributes:

  • Full table name — the name of the accessed table, including database and schema.

  • Columns — list of scanned columns

  • Table alias — used table alias, if present

  • Extracted Variant paths — list of paths extracted from VARIANT columns

ValuesClause:

List of values provided with the VALUES clause. Attributes:

  • Number of values — the number of produced values.

  • Values — the list of produced values.
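
For example, a query over an inline VALUES list (a minimal sketch with made-up values) produces a ValuesClause operator:

-- The VALUES clause below appears as a ValuesClause operator
-- with "Number of values" = 2
SELECT column1, column2
FROM (VALUES (1, 'one'), (2, 'two'));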

Generator:

Generates records using the TABLE(GENERATOR(...)) construct. Attributes:

  • rowCount — provided rowCount parameter.

  • timeLimit — provided timeLimit parameter.
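
For example, the following sketch generates ten rows and appears in the profile as a Generator operator with rowCount = 10:

-- GENERATOR produces rows without reading any table
SELECT SEQ4() AS n
FROM TABLE(GENERATOR(ROWCOUNT => 10));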

ExternalScan:

Represents access to data stored in stage objects. Can be part of queries that scan data directly from stages, as well as data-loading COPY queries. Attributes:

  • Stage name — the name of the stage where the data is read from.

  • Stage type — the type of the stage (e.g. TABLE STAGE).
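
For example, both of the following sketches (assuming a hypothetical stage @my_stage and table my_table) would include an ExternalScan operator:

-- Query staged data directly
SELECT $1, $2 FROM @my_stage/data.csv;

-- Load staged data with COPY
COPY INTO my_table FROM @my_stage;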

InternalObject:

Represents access to an internal data object (e.g. an Information Schema table or the result of a previous query). Attributes:

  • Object Name — the name or type of the accessed object.

Data Processing Operators

Filter:

Represents an operation that filters the records. Attributes:

  • Filter condition - the condition used to perform filtering.

Join:

Combines two inputs on a given condition. Attributes:

  • Join Type — Type of join (e.g. INNER, LEFT OUTER, etc.).

  • Equality Join Condition — for joins which use equality-based conditions, it lists the expressions used for joining elements.

  • Additional Join Condition — some joins use conditions containing non-equality based predicates. They are listed here.

Note

Non-equality join predicates might result in significantly slower processing speeds and should be avoided if possible.

Aggregate:

Groups input and computes aggregate functions. Can represent SQL constructs such as GROUP BY, as well as SELECT DISTINCT. Attributes:

  • Grouping Keys — if GROUP BY is used, this lists the expressions we group by.

  • Aggregate Functions — list of functions computed for each aggregate group, e.g. SUM.
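
For example, the following sketch (hypothetical table t1) produces an Aggregate operator with Grouping Keys C1 and Aggregate Functions SUM(C2):

SELECT c1, SUM(c2)
FROM t1
GROUP BY c1;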

GroupingSets:

Represents constructs such as GROUPING SETS, ROLLUP and CUBE. Attributes:

  • Grouping Key Sets — list of grouping sets

  • Aggregate Functions — list of functions computed for each group, e.g. SUM.
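
For example, a ROLLUP query such as this sketch produces a GroupingSets operator:

-- Computes subtotals for (c1, c2), (c1), and the grand total
SELECT c1, c2, SUM(c3)
FROM t1
GROUP BY ROLLUP (c1, c2);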

WindowFunction:

Computes window functions. Attributes:

  • Window Functions — list of window functions computed.
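
For example, the following sketch produces a WindowFunction operator listing the SUM window function:

-- Running total within each c1 group
SELECT c1, SUM(c2) OVER (PARTITION BY c1 ORDER BY c3) AS running_total
FROM t1;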

Sort:

Orders input on a given expression. Attributes:

  • Sort keys — expression defining the sorting order.

SortWithLimit:

Produces a part of the input sequence after sorting, typically a result of an ORDER BY ... LIMIT ... OFFSET ... construct in SQL. Attributes:

  • Sort keys — expression defining the sorting order.

  • Number of rows — number of rows produced.

  • Offset — position in the ordered sequence from which produced tuples are emitted.
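
For example, this sketch produces a SortWithLimit operator with Sort keys C1, Number of rows 10, and Offset 5:

SELECT c1, c2
FROM t1
ORDER BY c1
LIMIT 10 OFFSET 5;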

Flatten:

Processes VARIANT records, possibly flattening them on a specified path. Attributes:

  • input — the input expression used to flatten the data.
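
For example, the following sketch (assuming a hypothetical VARIANT column v) produces a Flatten operator:

-- LATERAL FLATTEN expands each element of t.v into its own row
SELECT t.id, f.value
FROM t1 AS t,
     LATERAL FLATTEN(input => t.v) AS f;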

JoinFilter:

Special filtering operation that removes tuples that can be identified as not possibly matching the condition of a Join further in the query plan. Attributes:

  • Original join ID — the join used to identify tuples that can be filtered out.

UnionAll:

Concatenates two inputs. Attributes: none.

ExternalFunction:

Represents processing by an external function.

DML Operators

Insert:

Adds records to a table either through an INSERT or COPY operation. Attributes:

  • Input expressions — which expressions are inserted.

  • Full table name — name of single target table that records are added to.

  • Full table names — names of multiple target tables that records are added to.
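
For example, a multi-table insert such as this sketch (hypothetical tables) populates the Full table names attribute:

-- Each row from src is inserted into both t1 and t2
INSERT ALL
  INTO t1
  INTO t2
SELECT * FROM src;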

Delete:

Removes records from a table. Attributes:

  • Full table name — the name of the table that records are deleted from.

Update:

Updates records in a table. Attributes:

  • Full table name — the name of the updated table.

Merge:

Performs a MERGE operation on a table. Attributes:

  • Full table name — the name of the updated table.
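
For example, a MERGE statement such as this sketch (hypothetical tables target and source) is represented by a Merge operator:

MERGE INTO target t
USING source s ON t.id = s.id
WHEN MATCHED THEN UPDATE SET t.val = s.val
WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val);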

Unload:

Represents a COPY operation that exports data from a table into a file in a stage. Attributes:

  • Location - the name of the stage where the data is saved.
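
For example, a data export such as this sketch (hypothetical stage and table) is represented by an Unload operator:

-- Export table rows to files in a stage
COPY INTO @my_stage/export/
FROM my_table
FILE_FORMAT = (TYPE = CSV);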

Metadata Operators

Some queries include steps that are pure metadata/catalog operations rather than data-processing operations. These steps consist of a single operator. Some examples include:

DDL and Transaction Commands:

Used for creating or modifying objects, sessions, transactions, etc. Typically, these queries are not processed by a virtual warehouse and result in a single-step profile that corresponds to the matching SQL statement. For example:

CREATE DATABASE | SCHEMA | …

ALTER DATABASE | SCHEMA | TABLE | SESSION | …

DROP DATABASE | SCHEMA | TABLE | …

COMMIT

Table Creation Command:

DDL command for creating a table. For example:

CREATE TABLE

Similar to other DDL commands, these queries result in a single-step profile; however, they can also be part of a multi-step profile, such as when used in a CTAS statement. For example:

CREATE TABLE … AS SELECT …

Query Result Reuse:

A query that reuses the result of a previous query.

Metadata-based Result:

A query whose result is computed based purely on metadata, without accessing any data. These queries are not processed by a virtual warehouse. For example:

SELECT COUNT(*) FROM …

SELECT CURRENT_DATABASE()

Miscellaneous Operators

Result:

Returns the query result. Attributes:

  • List of expressions - the expressions produced.

Query/Operator Details

To help you analyze query performance, the detail panel provides two classes of profiling information:

  • Execution time, broken down into categories

  • Detailed statistics

In addition, attributes are provided for each operator (described in Operator Types in this topic).

Execution Time

Execution time provides information about “where the time was spent” during the processing of a query. Time spent can be broken down into the following categories, displayed in the following order:

  • Processing — time spent on data processing by the CPU.

  • Local Disk IO — time when the processing was blocked by local disk access.

  • Remote Disk IO — time when the processing was blocked by remote disk access.

  • Network Communication — time when the processing was waiting for the network data transfer.

  • Synchronization — various synchronization activities between participating processes.

  • Initialization — time spent setting up the query processing.

Statistics

A major source of information provided in the detail pane is the various statistics, grouped in the following sections:

  • IO — information about the input-output operations performed during the query:

    • Scan progress — the percentage of data scanned for a given table so far.

    • Bytes scanned — the number of bytes scanned so far.

    • Percentage scanned from cache — the percentage of data scanned from the local disk cache.

    • Bytes written — bytes written (e.g. when loading into a table).

    • Bytes written to result — bytes written to the result object. For example, SELECT * FROM … would produce a set of results in tabular format representing each field in the selection. In general, the result object represents whatever a query returns, and Bytes written to result represents the size of the returned result.

    • Bytes read from result — bytes read from the result object.

    • External bytes scanned — bytes read from an external object, e.g. a stage.

  • DML — statistics for Data Manipulation Language (DML) queries:

    • Number of rows inserted — number of rows inserted into a table (or tables).

    • Number of rows updated — number of rows updated in a table.

    • Number of rows deleted — number of rows deleted from a table.

    • Number of rows unloaded — number of rows unloaded during data export.

  • Pruning — information on the effects of table pruning:

    • Partitions scanned — number of partitions scanned so far.

    • Partitions total — total number of partitions in a given table.

  • Spilling — information about disk usage for operations where intermediate results do not fit in memory:

    • Bytes spilled to local storage — volume of data spilled to local disk.

    • Bytes spilled to remote storage — volume of data spilled to remote disk.

  • Network — network communication:

    • Bytes sent over the network — amount of data sent over the network.

  • External Functions — information about calls to external functions:

    The following statistics are shown for each external function called by the SQL statement. If the same function was called more than once from the same SQL statement, then the statistics are aggregated.

    • Total invocations — number of times that an external function was called. (This can be different from the number of external function calls in the text of the SQL statement due to the number of batches that rows are divided into, the number of retries (if there are transient network problems), etc.)

    • Rows sent — number of rows sent to external functions.

    • Rows received — number of rows received back from external functions.

    • Bytes sent (x-region) — number of bytes sent to external functions. If the label includes “(x-region)”, the data was sent across regions (which can impact billing).

    • Bytes received (x-region) — number of bytes received from external functions. If the label includes “(x-region)”, the data was sent across regions (which can impact billing).

    • Retries due to transient errors — number of retries due to transient errors.

    • Average latency per call — average amount of time per invocation (call) between the time Snowflake sent the data and received the returned data.

    • HTTP 4xx errors — total number of HTTP requests that returned a 4xx status code.

    • HTTP 5xx errors — total number of HTTP requests that returned a 5xx status code.

    • Latency per successful call (avg) — average latency for successful HTTP requests.

    • Avg throttle latency overhead — average overhead per successful request due to a slowdown caused by throttling (HTTP 429).

    • Batches retried due to throttling — number of batches that were retried due to HTTP 429 errors.

    • Latency per successful call (P50) — 50th percentile latency for successful HTTP requests. 50 percent of all successful requests took less than this time to complete.

    • Latency per successful call (P90) — 90th percentile latency for successful HTTP requests. 90 percent of all successful requests took less than this time to complete.

    • Latency per successful call (P95) — 95th percentile latency for successful HTTP requests. 95 percent of all successful requests took less than this time to complete.

    • Latency per successful call (P99) — 99th percentile latency for successful HTTP requests. 99 percent of all successful requests took less than this time to complete.

  • Extension Functions — information about calls to extension functions:

    • Java UDF handler load time — amount of time for the Java UDF handler to load.

    • Total Java UDF handler invocations — number of times the Java UDF handler is invoked.

    • Max Java UDF handler execution time — maximum amount of time for the Java UDF handler to execute.

    • Avg Java UDF handler execution time — average amount of time to execute the Java UDF handler.

    • Java UDTF process() invocations — number of times the Java UDTF process method was invoked.

    • Java UDTF process() execution time — amount of time to execute the Java UDTF process.

    • Avg Java UDTF process() execution time — average amount of time to execute the Java UDTF process.

    • Java UDTF’s constructor invocations — number of times the Java UDTF constructor was invoked.

    • Java UDTF’s constructor execution time — amount of time to execute the Java UDTF constructor.

    • Avg Java UDTF’s constructor execution time — average amount of time to execute the Java UDTF constructor.

    • Java UDTF endPartition() invocations — number of times the Java UDTF endPartition method was invoked.

    • Java UDTF endPartition() execution time — amount of time to execute the Java UDTF endPartition method.

    • Avg Java UDTF endPartition() execution time — average amount of time to execute the Java UDTF endPartition method.

    • Max Java UDF dependency download time — maximum amount of time to download the Java UDF dependencies.

    • Max JVM memory usage — peak memory usage as reported by the JVM.

    • Java UDF inline code compile time in ms — compile time for the Java UDF inline code.

    • Total Python UDF handler invocations — number of times the Python UDF handler was invoked.

    • Total Python UDF handler execution time — total execution time for Python UDF handler.

    • Avg Python UDF handler execution time — average amount of time to execute the Python UDF handler.

    • Python sandbox max memory usage — peak memory usage by the Python sandbox environment.

    • Avg Python env creation time: Download and install packages — average amount of time to create the Python environment, including downloading and installing packages.

    • Conda solver time — amount of time to run the Conda solver to solve Python packages.

    • Conda env creation time — amount of time to create the Python environment.

    • Python UDF initialization time — amount of time to initialize the Python UDF.

    • Number of external file bytes read for UDFs — number of external file bytes read for UDFs.

    • Number of external files accessed for UDFs — number of external files accessed for UDFs.

    If the value of a field, for example “Retries due to transient errors”, is zero, then the field is not displayed.

Information Redacted from Query Profiles in a Snowflake Native App

The Snowflake Native App Framework redacts information from the query profile in the following contexts:

  • Queries that are run when the app is installed or upgraded.

  • Queries that originate from a stored procedure owned by the app.

  • Queries containing a non-secure view or function owned by the app.

For each of these types of queries, Snowsight collapses the query profile data into a single empty node instead of displaying the full query profile tree.

Common Query Problems Identified by Query Profile

This section describes some of the problems you can identify and troubleshoot using Query Profile.

“Exploding” Joins

One of the common mistakes SQL users make is joining tables without providing a join condition (resulting in a “Cartesian product”), or providing a condition where records from one table match multiple records from another table. For such queries, the Join operator produces significantly (often by orders of magnitude) more tuples than it consumes.

This can be observed by looking at the number of records produced by a Join operator, and is typically also reflected in the Join operator consuming a lot of time.

The following example shows input in the hundreds of records but output in the hundreds of thousands:

SELECT tt1.c1, tt1.c2
FROM tt1
JOIN tt2 ON tt1.c1 = tt2.c1
 AND tt1.c2 = tt2.c2;
[Screenshot: exploding join in the query profile]

UNION Without ALL

In SQL, it is possible to combine two sets of data with either UNION or UNION ALL constructs. The difference between them is that UNION ALL simply concatenates inputs, while UNION does the same, but also performs duplicate elimination.

A common mistake is to use UNION when the UNION ALL semantics are sufficient. These queries show in Query Profile as a UnionAll operator with an extra Aggregate operator on top (which performs duplicate elimination).
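
For example, compare the profiles of these two sketches (hypothetical tables): the first shows UnionAll plus an Aggregate operator for duplicate elimination, while the second shows UnionAll only:

-- UNION: concatenation plus duplicate elimination
SELECT c1 FROM t1
UNION
SELECT c1 FROM t2;

-- UNION ALL: concatenation only
SELECT c1 FROM t1
UNION ALL
SELECT c1 FROM t2;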

Queries Too Large to Fit in Memory

For some operations (e.g. duplicate elimination for a huge data set), the amount of memory available for the compute resources used to execute the operation might not be sufficient to hold intermediate results. As a result, the query processing engine will start spilling the data to local disk. If the local disk space is not sufficient, the spilled data is then saved to remote disks.

This spilling can have a profound effect on query performance (especially if remote disk is used for spilling). To alleviate this, we recommend:

  • Using a larger warehouse (effectively increasing the available memory/local disk space for the operation; see the example below), and/or

  • Processing data in smaller batches.
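
For example, assuming a hypothetical warehouse named my_wh, resizing it increases the memory and local disk space available to the operation:

ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'LARGE';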

Inefficient Pruning

Snowflake collects rich statistics on data, allowing it to avoid reading unnecessary parts of a table based on the query filters. However, for this to have an effect, the data storage order needs to be correlated with the query filter attributes.

The efficiency of pruning can be observed by comparing Partitions scanned and Partitions total statistics in the TableScan operators. If the former is a small fraction of the latter, pruning is efficient. If not, the pruning did not have an effect.

Of course, pruning can only help for queries that actually filter out a significant amount of data. If the pruning statistics do not show data reduction, but there is a Filter operator above TableScan which filters out a number of records, this might signal that a different data organization would be beneficial for this query.
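
As an illustration (assuming a hypothetical sales table whose data was loaded in sale_date order), a filter on the load-order column allows effective pruning:

-- Because storage order correlates with sale_date, most partitions
-- can be skipped (Partitions scanned is small relative to Partitions total)
SELECT SUM(amount)
FROM sales
WHERE sale_date >= '2023-06-01';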

For more information about pruning, see Understanding Snowflake Table Structures.
