Schemas:

ACCOUNT_USAGE, READER_ACCOUNT_USAGE

QUERY_HISTORY view¶

This Account Usage view can be used to query Snowflake query history by various dimensions (time range, session, user, warehouse, etc.) within the last 365 days (1 year).

The view is available in both the ACCOUNT_USAGE and READER_ACCOUNT_USAGE schemas with the following differences:

  • The following columns are available only in the reader account view:

    • READER_ACCOUNT_NAME

    • READER_ACCOUNT_DELETED_ON

Alternatively, you can call the Information Schema table function, also named QUERY_HISTORY; however, note that the table function restricts the results to activity over the past 7 days, versus 365 days for the Account Usage view. See the description of the QUERY_HISTORY function.
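For example, a minimal sketch of querying this view (the one-day window, ordering, and limit are illustrative choices, not requirements; Account Usage views are queried through the shared SNOWFLAKE database):

```sql
-- Illustrative sketch: the 20 longest-running queries over the last day.
-- The 1-day window and LIMIT are arbitrary; adjust to your needs.
SELECT query_id,
       user_name,
       warehouse_name,
       execution_status,
       total_elapsed_time
  FROM snowflake.account_usage.query_history
 WHERE start_time >= DATEADD('day', -1, CURRENT_TIMESTAMP())
 ORDER BY total_elapsed_time DESC
 LIMIT 20;
```

Because view latency can be up to 45 minutes, the most recent activity may not appear yet; the Information Schema QUERY_HISTORY table function gives more current results over its shorter 7-day window.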

Columns¶

The Available in reader account usage views column in the following table indicates whether the QUERY_HISTORY column is available in the READER_ACCOUNT_USAGE schema.

Column Name

Data Type

Description

Available in reader account usage views

reader_account_name

VARCHAR

Name of the reader account in which the SQL statement was executed.

✔

query_id

VARCHAR

Internal/system-generated identifier for the SQL statement.

✔

query_text

VARCHAR

Text of the SQL statement. Limit is 100K characters. SQL statements containing more than 100K characters will be truncated.

database_id

NUMBER

Internal/system-generated identifier for the database that was in use.

✔

database_name

VARCHAR

Database that was specified in the context of the query at compilation.

✔

schema_id

NUMBER

Internal/system-generated identifier for the schema that was in use.

✔

schema_name

VARCHAR

Schema that was specified in the context of the query at compilation.

✔

query_type

VARCHAR

DML, query, etc. If the query failed, then the query type may be UNKNOWN.

session_id

NUMBER

Session that executed the statement.

✔

user_name

VARCHAR

User who issued the query.

role_name

VARCHAR

Role that was active in the session at the time of the query.

✔

warehouse_id

NUMBER

Internal/system-generated identifier for the warehouse that was used.

✔

warehouse_name

VARCHAR

Warehouse that the query executed on, if any.

✔

warehouse_size

VARCHAR

Size of the warehouse when this statement executed.

✔

warehouse_type

VARCHAR

Type of the warehouse when this statement executed.

✔

cluster_number

NUMBER

The cluster (in a multi-cluster warehouse) that this statement executed on.

✔

query_tag

VARCHAR

Query tag set for this statement through the QUERY_TAG session parameter.

✔

execution_status

VARCHAR

Execution status for the query. Valid values: success, fail, incident.

✔

error_code

NUMBER

Error code, if the query returned an error.

✔

error_message

VARCHAR

Error message, if the query returned an error.

✔

start_time

TIMESTAMP_LTZ

Statement start time (in the local time zone).

✔

end_time

TIMESTAMP_LTZ

Statement end time (in the local time zone).

✔

total_elapsed_time

NUMBER

Elapsed time (in milliseconds).

✔

bytes_scanned

NUMBER

Number of bytes scanned by this statement.

✔

percentage_scanned_from_cache

FLOAT

Percentage of data scanned from the local disk cache. The value ranges from 0.0 to 1.0. Multiply by 100 to get a true percentage.

bytes_written

NUMBER

Number of bytes written (e.g. when loading into a table).

bytes_written_to_result

NUMBER

Number of bytes written to a result object. For example, SELECT * FROM ... produces a set of results in tabular format representing each field in the selection. In general, the result object represents whatever is produced as a result of the query, and BYTES_WRITTEN_TO_RESULT represents the size of the returned result.

bytes_read_from_result

NUMBER

Number of bytes read from a result object.

rows_produced

NUMBER

The number of rows produced by this statement. The ROWS_PRODUCED column will be deprecated in a future release. The value in the ROWS_PRODUCED column doesn’t always reflect the logical number of rows affected by a query. Snowflake recommends using the ROWS_INSERTED, ROWS_UPDATED, ROWS_DELETED, or ROWS_WRITTEN_TO_RESULT columns instead.

✔

rows_inserted

NUMBER

Number of rows inserted by the query.

rows_updated

NUMBER

Number of rows updated by the query.

rows_deleted

NUMBER

Number of rows deleted by the query.

rows_unloaded

NUMBER

Number of rows unloaded during data export.

bytes_deleted

NUMBER

Number of bytes deleted by the query.

partitions_scanned

NUMBER

Number of micro-partitions scanned.

partitions_total

NUMBER

Total micro-partitions of all tables included in this query.

bytes_spilled_to_local_storage

NUMBER

Volume of data spilled to local disk.

bytes_spilled_to_remote_storage

NUMBER

Volume of data spilled to remote disk.

bytes_sent_over_the_network

NUMBER

Volume of data sent over the network.

compilation_time

NUMBER

Compilation time (in milliseconds).

✔

execution_time

NUMBER

Execution time (in milliseconds).

✔

queued_provisioning_time

NUMBER

Time (in milliseconds) spent in the warehouse queue, waiting for the warehouse compute resources to provision, due to warehouse creation, resume, or resize.

✔

queued_repair_time

NUMBER

Time (in milliseconds) spent in the warehouse queue, waiting for compute resources in the warehouse to be repaired.

✔

queued_overload_time

NUMBER

Time (in milliseconds) spent in the warehouse queue, due to the warehouse being overloaded by the current query workload.

✔

transaction_blocked_time

NUMBER

Time (in milliseconds) spent blocked by a concurrent DML.

✔

outbound_data_transfer_cloud

VARCHAR

Target cloud provider for statements that unload data to another region and/or cloud.

✔

outbound_data_transfer_region

VARCHAR

Target region for statements that unload data to another region and/or cloud.

✔

outbound_data_transfer_bytes

NUMBER

Number of bytes transferred in statements that unload data from Snowflake tables.

✔

inbound_data_transfer_cloud

VARCHAR

Source cloud provider for statements that load data from another region and/or cloud.

✔

inbound_data_transfer_region

VARCHAR

Source region for statements that load data from another region and/or cloud.

✔

inbound_data_transfer_bytes

NUMBER

Number of bytes transferred in a replication operation from another account. The source account could be in the same region or a different region than the current account.

✔

list_external_files_time

NUMBER

Time (in milliseconds) spent listing external files.

credits_used_cloud_services

NUMBER

Number of credits used for cloud services. This value does not take into account the adjustment for cloud services, and may therefore be greater than the credits that are billed. To determine how many credits were actually billed, run queries against the METERING_DAILY_HISTORY view.

✔

reader_account_deleted_on

TIMESTAMP_LTZ

Date and time (in the UTC time zone) when the reader account was deleted.

✔

release_version

VARCHAR

Release version in the format of major_release.minor_release.patch_release.

external_function_total_invocations

NUMBER

The aggregate number of times that this query called remote services. For important details, see the Usage Notes.

external_function_total_sent_rows

NUMBER

The total number of rows that this query sent in all calls to all remote services.

external_function_total_received_rows

NUMBER

The total number of rows that this query received from all calls to all remote services.

external_function_total_sent_bytes

NUMBER

The total number of bytes that this query sent in all calls to all remote services.

external_function_total_received_bytes

NUMBER

The total number of bytes that this query received from all calls to all remote services.

query_load_percent

NUMBER

The approximate percentage of active compute resources in the warehouse for this query execution.

is_client_generated_statement

BOOLEAN

Indicates whether the query was client-generated.

query_acceleration_bytes_scanned

NUMBER

Number of bytes scanned by the query acceleration service.

query_acceleration_partitions_scanned

NUMBER

Number of partitions scanned by the query acceleration service.

query_acceleration_upper_limit_scale_factor

NUMBER

Upper limit scale factor that a query would have benefited from.

transaction_id

NUMBER

ID of the transaction that contains the statement or 0 if the statement is not executed within a transaction.

child_queries_wait_time

NUMBER

Time (in milliseconds) to complete the cached lookup when calling a memoizable function.

role_type

VARCHAR

Specifies whether the role that executed the query was an APPLICATION, DATABASE_ROLE, or ROLE.

query_hash

VARCHAR

The hash value computed based on the canonicalized SQL text.

query_hash_version

NUMBER

The version of the logic used to compute QUERY_HASH.

query_parameterized_hash

VARCHAR

The hash value computed based on the parameterized query.

query_parameterized_hash_version

NUMBER

The version of the logic used to compute QUERY_PARAMETERIZED_HASH.

secondary_role_stats

VARCHAR

A JSON-formatted string that contains three fields regarding secondary roles that were evaluated in the query: a list of secondary roles or ALL depending on the session, a count of the number of secondary roles, and the internal/system-generated ID for each secondary role. The count and number of IDs have a maximum of 50.

rows_written_to_result

NUMBER

Number of rows written to a result object. For CREATE TABLE AS SELECT (CTAS) and all DML operations, this result is 1.

query_retry_time

NUMBER

Total execution time (in milliseconds) for query retries caused by actionable errors. For more information, see Query retry columns.

query_retry_cause

VARIANT

Array of error messages for actionable errors. The array contains one error message for each query retry. If there is no query retry, the array is empty. For more information, see Query retry columns.

fault_handling_time

NUMBER

Total execution time (in milliseconds) for query retries caused by errors that are not actionable. For more information, see Query retry columns.
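Several of the columns above are most useful in combination. As a sketch (the seven-day window and aggregation are illustrative; note that PERCENTAGE_SCANNED_FROM_CACHE is a 0.0–1.0 fraction, so multiply by 100 for a percentage):

```sql
-- Illustrative sketch: per-warehouse cache effectiveness and spilling
-- over the last 7 days.
SELECT warehouse_name,
       AVG(percentage_scanned_from_cache) * 100 AS avg_pct_scanned_from_cache,
       SUM(bytes_spilled_to_local_storage)      AS bytes_spilled_local,
       SUM(bytes_spilled_to_remote_storage)     AS bytes_spilled_remote
  FROM snowflake.account_usage.query_history
 WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
   AND warehouse_name IS NOT NULL
 GROUP BY warehouse_name
 ORDER BY bytes_spilled_remote DESC;
```

Heavy spilling to remote storage is generally a sign that queries may benefit from a larger warehouse.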

Usage notes¶

General¶

  • Latency for the view may be up to 45 minutes.

  • The values for the columns EXTERNAL_FUNCTION_TOTAL_INVOCATIONS, EXTERNAL_FUNCTION_TOTAL_SENT_ROWS, EXTERNAL_FUNCTION_TOTAL_RECEIVED_ROWS, EXTERNAL_FUNCTION_TOTAL_SENT_BYTES, and EXTERNAL_FUNCTION_TOTAL_RECEIVED_BYTES are affected by many factors, including:

    • The number of external functions in the SQL statement.

    • The number of rows per batch sent to each remote service.

    • The number of retries due to transient errors (e.g. because a response was not received within the expected time).
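To see how these factors play out in a particular workload, the external function columns can be inspected directly. A sketch (the filter simply selects queries that called remote services; the ordering is illustrative):

```sql
-- Illustrative sketch: per-query external function traffic.
SELECT query_id,
       external_function_total_invocations,
       external_function_total_sent_rows,
       external_function_total_received_rows,
       external_function_total_sent_bytes,
       external_function_total_received_bytes
  FROM snowflake.account_usage.query_history
 WHERE external_function_total_invocations > 0
 ORDER BY external_function_total_invocations DESC;
```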

Query retry columns¶

A query might need to be retried one or more times in order to successfully complete. There can be multiple causes that result in a query retry. Some of these causes are actionable, that is, a user can make changes to reduce or eliminate query retries for a specific query. For example, if a query is retried due to an out of memory error, modifying warehouse settings might resolve the issue.

Some query retries are caused by a fault that is not actionable. That is, there is no change a user can make to prevent the query retry. For example, a network outage might result in a query retry. In this case, there is no change to the query or to the warehouse that executes it that can prevent the query retry.

The QUERY_RETRY_TIME, QUERY_RETRY_CAUSE, and FAULT_HANDLING_TIME columns can help you optimize queries that are retried and better understand fluctuations in query performance.
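For instance, a sketch of isolating retried queries and separating actionable from non-actionable retry time (the filter and ordering are illustrative):

```sql
-- Illustrative sketch: queries that were retried, with their causes.
SELECT query_id,
       query_retry_time,     -- retry time caused by actionable errors
       fault_handling_time,  -- retry time caused by non-actionable faults
       query_retry_cause     -- one error message per retry (empty if none)
  FROM snowflake.account_usage.query_history
 WHERE query_retry_time > 0
    OR fault_handling_time > 0
 ORDER BY query_retry_time DESC;
```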

Hybrid tables¶

The following guidance pertains to running queries against hybrid tables while using QUERY_HISTORY views.

  • Short-running queries that operate exclusively against hybrid tables no longer generate a record in this view or in QUERY_HISTORY (the Information Schema table function). To monitor such queries, use the AGGREGATE_QUERY_HISTORY view, which allows you to more easily monitor high-throughput operational workloads for trends and issues.

  • Short-running queries that operate exclusively against hybrid tables also will no longer provide a query profile that you can inspect in Snowsight.

  • Queries against hybrid tables generate a record in the QUERY_HISTORY view and a query profile if any of the following conditions are met:

    • A query is executed against any table type other than the hybrid table type. This condition ensures that there is no behavior change for any existing non-Unistore workloads.

    • A query fails with an EXECUTION_STATUS of failed_with_incident (see QUERY_HISTORY). This condition ensures that you can investigate and report the specific failed query to receive assistance.

    • A query is running longer than approximately 500 milliseconds. This condition ensures that you can investigate performance issues for slow queries.

    • The query result size is too large.

    • A query is associated with a Snowflake transaction.

    • A query contains a system function with side effects.

    • A query is not one of the following statement types: SELECT, INSERT, DELETE, UPDATE, MERGE.

    • A query is executed from SnowSQL, Snowsight, or Classic Console. This condition ensures that you can manually generate a full query profile to investigate performance issues for any specific query even if it is not categorized as long-running.

    • Even if a query does not meet any criterion above, queries can also be periodically sampled to generate a record in the QUERY_HISTORY view and a query profile to help your investigation.

PUT and GET commands¶

For the PUT and GET commands, an EXECUTION_STATUS of success in QUERY_HISTORY does not mean that data files were successfully uploaded or downloaded; rather, it indicates that Snowflake received authorization to proceed with the file transfer.