QUERY_HISTORY view¶
This Account Usage view can be used to query Snowflake query history by various dimensions (time range, session, user, warehouse, etc.) within the last 365 days (1 year).
The view is available in both the ACCOUNT_USAGE and READER_ACCOUNT_USAGE schemas with the following differences:
The following columns are available only in the reader account view:
READER_ACCOUNT_NAME
READER_ACCOUNT_DELETED_ON
Alternatively, you can call the Information Schema table function, also named QUERY_HISTORY; however, note that the table function restricts the results to activity over the past 7 days, versus 365 days for the Account Usage view. See the description of the QUERY_HISTORY function.
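For example, the following sketch contrasts the two interfaces; the 30-day and 1-day windows are arbitrary choices for illustration:

```sql
-- Account Usage view: up to 365 days of history, but with ingestion latency.
SELECT query_id, query_text, user_name, total_elapsed_time
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
ORDER BY start_time DESC;

-- Information Schema table function: only the past 7 days, but near real time.
SELECT query_id, query_text, user_name, total_elapsed_time
FROM TABLE(information_schema.query_history(
    END_TIME_RANGE_START => DATEADD('day', -1, CURRENT_TIMESTAMP()),
    RESULT_LIMIT => 100))
ORDER BY start_time DESC;
```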
See also:
Monitor query activity with Query History (Snowsight dashboard)
Columns¶
The Available in READER_ACCOUNT_USAGE column in the following table indicates whether the QUERY_HISTORY column is available in the READER_ACCOUNT_USAGE schema.
| Column Name | Data Type | Description | Available in READER_ACCOUNT_USAGE |
|---|---|---|---|
| READER_ACCOUNT_NAME | VARCHAR | Name of the reader account in which the SQL statement was executed. | ✔ |
| QUERY_ID | VARCHAR | Internal/system-generated identifier for the SQL statement. | ✔ |
| QUERY_TEXT | VARCHAR | Text of the SQL statement. Limit is 100K characters. SQL statements containing more than 100K characters will be truncated. | |
| DATABASE_ID | NUMBER | Internal/system-generated identifier for the database that was in use. | ✔ |
| DATABASE_NAME | VARCHAR | Database that was specified in the context of the query at compilation. | ✔ |
| SCHEMA_ID | NUMBER | Internal/system-generated identifier for the schema that was in use. | ✔ |
| SCHEMA_NAME | VARCHAR | Schema that was specified in the context of the query at compilation. | ✔ |
| QUERY_TYPE | VARCHAR | DML, query, etc. If the query failed, the query type may be UNKNOWN. | |
| SESSION_ID | NUMBER | Session that executed the statement. | ✔ |
| USER_NAME | VARCHAR | User who issued the query. | |
| ROLE_NAME | VARCHAR | Role that was active in the session at the time of the query. | ✔ |
| WAREHOUSE_ID | NUMBER | Internal/system-generated identifier for the warehouse that was used. | ✔ |
| WAREHOUSE_NAME | VARCHAR | Warehouse that the query executed on, if any. | ✔ |
| WAREHOUSE_SIZE | VARCHAR | Size of the warehouse when this statement executed. | ✔ |
| WAREHOUSE_TYPE | VARCHAR | Type of the warehouse when this statement executed. | ✔ |
| CLUSTER_NUMBER | NUMBER | The cluster (in a multi-cluster warehouse) that this statement executed on. | ✔ |
| QUERY_TAG | VARCHAR | Query tag set for this statement through the QUERY_TAG session parameter. | ✔ |
| EXECUTION_STATUS | VARCHAR | Execution status for the query. Valid values: success, fail, incident. | ✔ |
| ERROR_CODE | NUMBER | Error code, if the query returned an error. | ✔ |
| ERROR_MESSAGE | VARCHAR | Error message, if the query returned an error. | ✔ |
| START_TIME | TIMESTAMP_LTZ | Statement start time (in the local time zone). | ✔ |
| END_TIME | TIMESTAMP_LTZ | Statement end time (in the local time zone). | ✔ |
| TOTAL_ELAPSED_TIME | NUMBER | Elapsed time (in milliseconds). | ✔ |
| BYTES_SCANNED | NUMBER | Number of bytes scanned by this statement. | ✔ |
| PERCENTAGE_SCANNED_FROM_CACHE | FLOAT | Percentage of data scanned from the local disk cache. The value ranges from 0.0 to 1.0. Multiply by 100 to get a true percentage. | |
| BYTES_WRITTEN | NUMBER | Number of bytes written (e.g. when loading into a table). | |
| BYTES_WRITTEN_TO_RESULT | NUMBER | Number of bytes written to a result object. For example, the result set returned by a SELECT statement is written to a result object. | |
| BYTES_READ_FROM_RESULT | NUMBER | Number of bytes read from a result object. | |
| ROWS_PRODUCED | NUMBER | The number of rows produced by this statement. The ROWS_PRODUCED column will be deprecated in a future release. The value in the ROWS_PRODUCED column doesn't always reflect the logical number of rows affected by a query. Snowflake recommends using the ROWS_INSERTED, ROWS_UPDATED, ROWS_WRITTEN_TO_RESULT, or ROWS_DELETED columns instead. | ✔ |
| ROWS_INSERTED | NUMBER | Number of rows inserted by the query. | |
| ROWS_UPDATED | NUMBER | Number of rows updated by the query. | |
| ROWS_DELETED | NUMBER | Number of rows deleted by the query. | |
| ROWS_UNLOADED | NUMBER | Number of rows unloaded during data export. | |
| BYTES_DELETED | NUMBER | Number of bytes deleted by the query. | |
| PARTITIONS_SCANNED | NUMBER | Number of micro-partitions scanned. | |
| PARTITIONS_TOTAL | NUMBER | Total micro-partitions of all tables included in this query. | |
| BYTES_SPILLED_TO_LOCAL_STORAGE | NUMBER | Volume of data spilled to local disk. | |
| BYTES_SPILLED_TO_REMOTE_STORAGE | NUMBER | Volume of data spilled to remote disk. | |
| BYTES_SENT_OVER_THE_NETWORK | NUMBER | Volume of data sent over the network. | |
| COMPILATION_TIME | NUMBER | Compilation time (in milliseconds). | ✔ |
| EXECUTION_TIME | NUMBER | Execution time (in milliseconds). | ✔ |
| QUEUED_PROVISIONING_TIME | NUMBER | Time (in milliseconds) spent in the warehouse queue, waiting for the warehouse compute resources to provision, due to warehouse creation, resume, or resize. | ✔ |
| QUEUED_REPAIR_TIME | NUMBER | Time (in milliseconds) spent in the warehouse queue, waiting for compute resources in the warehouse to be repaired. | ✔ |
| QUEUED_OVERLOAD_TIME | NUMBER | Time (in milliseconds) spent in the warehouse queue, due to the warehouse being overloaded by the current query workload. | ✔ |
| TRANSACTION_BLOCKED_TIME | NUMBER | Time (in milliseconds) spent blocked by a concurrent DML. | ✔ |
| OUTBOUND_DATA_TRANSFER_CLOUD | VARCHAR | Target cloud provider for statements that unload data to another region and/or cloud. | ✔ |
| OUTBOUND_DATA_TRANSFER_REGION | VARCHAR | Target region for statements that unload data to another region and/or cloud. | ✔ |
| OUTBOUND_DATA_TRANSFER_BYTES | NUMBER | Number of bytes transferred in statements that unload data from Snowflake tables. | ✔ |
| INBOUND_DATA_TRANSFER_CLOUD | VARCHAR | Source cloud provider for statements that load data from another region and/or cloud. | ✔ |
| INBOUND_DATA_TRANSFER_REGION | VARCHAR | Source region for statements that load data from another region and/or cloud. | ✔ |
| INBOUND_DATA_TRANSFER_BYTES | NUMBER | Number of bytes transferred in a replication operation from another account. The source account could be in the same region or a different region than the current account. | ✔ |
| LIST_EXTERNAL_FILES_TIME | NUMBER | Time (in milliseconds) spent listing external files. | |
| CREDITS_USED_CLOUD_SERVICES | NUMBER | Number of credits used for cloud services. This value does not take into account the adjustment for cloud services, and may therefore be greater than the credits that are billed. To determine how many credits were actually billed, run queries against the METERING_DAILY_HISTORY view. | ✔ |
| READER_ACCOUNT_DELETED_ON | TIMESTAMP_LTZ | Time and date (in the UTC time zone) when the reader account was deleted. | ✔ |
| RELEASE_VERSION | VARCHAR | Release version in the format of major_release.minor_release.patch_release. | |
| EXTERNAL_FUNCTION_TOTAL_INVOCATIONS | NUMBER | The aggregate number of times that this query called remote services. For important details, see the Usage Notes. | |
| EXTERNAL_FUNCTION_TOTAL_SENT_ROWS | NUMBER | The total number of rows that this query sent in all calls to all remote services. | |
| EXTERNAL_FUNCTION_TOTAL_RECEIVED_ROWS | NUMBER | The total number of rows that this query received from all calls to all remote services. | |
| EXTERNAL_FUNCTION_TOTAL_SENT_BYTES | NUMBER | The total number of bytes that this query sent in all calls to all remote services. | |
| EXTERNAL_FUNCTION_TOTAL_RECEIVED_BYTES | NUMBER | The total number of bytes that this query received from all calls to all remote services. | |
| QUERY_LOAD_PERCENT | NUMBER | The approximate percentage of active compute resources in the warehouse for this query execution. | |
| IS_CLIENT_GENERATED_STATEMENT | BOOLEAN | Indicates whether the query was client-generated. | |
| QUERY_ACCELERATION_BYTES_SCANNED | NUMBER | Number of bytes scanned by the query acceleration service. | |
| QUERY_ACCELERATION_PARTITIONS_SCANNED | NUMBER | Number of partitions scanned by the query acceleration service. | |
| QUERY_ACCELERATION_UPPER_LIMIT_SCALE_FACTOR | NUMBER | Upper limit scale factor that a query would have benefited from. | |
| TRANSACTION_ID | NUMBER | ID of the transaction that contains the statement, or 0 if the statement is not executed within a transaction. | |
| CHILD_QUERIES_WAIT_TIME | NUMBER | Time (in milliseconds) to complete the cached lookup when calling a memoizable function. | |
| ROLE_TYPE | VARCHAR | Specifies whether it was an APPLICATION, DATABASE_ROLE, or ROLE that executed the query. | |
| QUERY_HASH | VARCHAR | The hash value computed based on the canonicalized SQL text. | |
| QUERY_HASH_VERSION | NUMBER | The version of the logic used to compute QUERY_HASH. | |
| QUERY_PARAMETERIZED_HASH | VARCHAR | The hash value computed based on the parameterized query. | |
| QUERY_PARAMETERIZED_HASH_VERSION | NUMBER | The version of the logic used to compute QUERY_PARAMETERIZED_HASH. | |
| SECONDARY_ROLE_STATS | VARCHAR | A JSON-formatted string that contains three fields regarding secondary roles that were evaluated in the query, including a list of the secondary roles. | |
| ROWS_WRITTEN_TO_RESULT | NUMBER | Number of rows written to a result object. For CREATE TABLE AS SELECT (CTAS) and all DML operations, this result is 1. | |
| QUERY_RETRY_TIME | NUMBER | Total execution time (in milliseconds) for query retries caused by actionable errors. For more information, see Query retry columns. | |
| QUERY_RETRY_CAUSE | VARIANT | Array of error messages for actionable errors. The array contains one error message for each query retry. If there is no query retry, the array is empty. For more information, see Query retry columns. | |
| FAULT_HANDLING_TIME | NUMBER | Total execution time (in milliseconds) for query retries caused by errors that are not actionable. For more information, see Query retry columns. | |
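Taken together, the timing columns (COMPILATION_TIME, EXECUTION_TIME, the QUEUED_* columns, and TRANSACTION_BLOCKED_TIME) roughly account for TOTAL_ELAPSED_TIME. A minimal sketch (the 7-day window and the 20-row limit are arbitrary) that surfaces queries dominated by queueing rather than execution:

```sql
-- Where did each query spend its time? Queries from the last 7 days
-- whose combined queue time exceeded their execution time.
SELECT query_id,
       warehouse_name,
       total_elapsed_time,
       compilation_time,
       execution_time,
       queued_provisioning_time + queued_repair_time + queued_overload_time
           AS total_queued_time,
       transaction_blocked_time
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
  AND queued_provisioning_time + queued_repair_time + queued_overload_time
      > execution_time
ORDER BY total_queued_time DESC
LIMIT 20;
```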
Usage notes¶
General¶
Latency for the view may be up to 45 minutes.
The values for the columns EXTERNAL_FUNCTION_TOTAL_INVOCATIONS, EXTERNAL_FUNCTION_TOTAL_SENT_ROWS, EXTERNAL_FUNCTION_TOTAL_RECEIVED_ROWS, EXTERNAL_FUNCTION_TOTAL_SENT_BYTES, and EXTERNAL_FUNCTION_TOTAL_RECEIVED_BYTES are affected by many factors (a query for inspecting these columns follows this list), including:
The number of external functions in the SQL statement.
The number of rows per batch sent to each remote service.
The number of retries due to transient errors (e.g. because a response was not received within the expected time).
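A simple sketch for inspecting these counters per query (the 30-day window is an arbitrary choice):

```sql
-- Queries that called remote services in the last 30 days, with per-query totals.
SELECT query_id,
       external_function_total_invocations    AS invocations,
       external_function_total_sent_rows      AS sent_rows,
       external_function_total_received_rows  AS received_rows,
       external_function_total_sent_bytes     AS sent_bytes,
       external_function_total_received_bytes AS received_bytes
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
  AND external_function_total_invocations > 0
ORDER BY invocations DESC;
```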
Query retry columns¶
A query might need to be retried one or more times in order to complete successfully. A query retry can have multiple causes. Some of these causes are actionable; that is, a user can make changes to reduce or eliminate query retries for a specific query. For example, if a query is retried due to an out-of-memory error, modifying warehouse settings might resolve the issue.
Some query retries are caused by a fault that is not actionable. That is, there is no change a user can make to prevent the query retry. For example, a network outage might result in a query retry. In this case, there is no change to the query or to the warehouse that executes it that can prevent the query retry.
The QUERY_RETRY_TIME, QUERY_RETRY_CAUSE, and FAULT_HANDLING_TIME columns can help you optimize queries that are retried and better understand fluctuations in query performance.
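For example, the following sketch (the window and ordering are arbitrary choices) lists recently retried queries, separating actionable retry time from fault-handling time:

```sql
-- Recently retried queries: actionable retries vs. non-actionable faults.
SELECT query_id,
       query_retry_time,      -- time spent on retries a user can act on
       query_retry_cause,     -- one error message per retry
       fault_handling_time    -- time spent on retries caused by faults
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
  AND (query_retry_time > 0 OR fault_handling_time > 0)
ORDER BY query_retry_time DESC;
```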
Hybrid tables¶
The following guidance pertains to running queries against hybrid tables while using QUERY_HISTORY views.
Short-running queries that operate exclusively against hybrid tables will no longer generate a record in this view or QUERY_HISTORY (Information Schema table function). To monitor such queries, use the AGGREGATE_QUERY_HISTORY view, which allows you to more easily monitor high-throughput operational workloads for trends and issues.
Short-running queries that operate exclusively against hybrid tables will also no longer provide a query profile that you can inspect in Snowsight.
Queries against hybrid tables generate a record in the QUERY_HISTORY view and a query profile if any of the following conditions are met:
A query is executed against any table type other than the hybrid table type. This condition ensures that there is no behavior change for any existing non-Unistore workloads.
A query fails with an EXECUTION_STATUS of failed_with_incident (see QUERY_HISTORY). This condition ensures that you can investigate and report the specific failed query to receive assistance.
A query is running longer than approximately 500 milliseconds. This condition ensures that you can investigate performance issues for slow queries.
Query result size is too large.
A query is associated with a Snowflake transaction.
A query contains a system function with side effects.
A query is not one of the following statement types: SELECT, INSERT, DELETE, UPDATE, MERGE.
A query is executed from SnowSQL, Snowsight, or Classic Console. This condition ensures that you can manually generate a full query profile to investigate performance issues for any specific query even if it is not categorized as long-running.
Even if a query does not meet any of the criteria above, it can still be periodically sampled to generate a record in the QUERY_HISTORY view and a query profile to help your investigation.
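As a sketch of the failure condition above (the 7-day window is arbitrary, and the status literal is taken from the QUERY_HISTORY function's EXECUTION_STATUS values; the Account Usage view may report the equivalent status as incident):

```sql
-- Queries recorded because they failed with an incident.
SELECT query_id, query_text, error_code, error_message
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
  AND execution_status IN ('failed_with_incident', 'incident');
```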
PUT and GET commands¶
For the PUT and GET commands, an EXECUTION_STATUS of success in the QUERY_HISTORY view does not mean that data files were successfully uploaded or downloaded. Instead, the status indicates that Snowflake received authorization to proceed with the file transfer.
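To find recent PUT and GET statements so you can verify the transfers separately (for example, with LIST on the target stage), a sketch along these lines can help; the text filter is a heuristic, not an exact match on command type:

```sql
-- Recent PUT/GET statements; a 'success' status here only means the
-- transfer was authorized, not that the files actually arrived.
SELECT query_id, query_text, execution_status, start_time
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('day', -1, CURRENT_TIMESTAMP())
  AND (query_text ILIKE 'put %' OR query_text ILIKE 'get %')
ORDER BY start_time DESC;
```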