Account Usage
In the SNOWFLAKE database, the ACCOUNT_USAGE and READER_ACCOUNT_USAGE schemas enable querying object metadata, as well as historical usage data, for your account and all reader accounts (if any) associated with the account.
Overview of Account Usage schemas
- ACCOUNT_USAGE:
Views that display object metadata and usage metrics for your account.
In general, these views mirror the corresponding views and table functions in the Snowflake Information Schema, but with the following differences:
  - Records for dropped objects are included in each view.
  - Longer retention time for historical usage data.
  - Data latency.
For more details, see Differences between Account Usage and Information Schema (in this topic). For more details about each view, see ACCOUNT_USAGE views (in this topic).
- READER_ACCOUNT_USAGE:
Views that display object metadata and usage metrics for all the reader accounts that have been created for your account (as a Secure Data Sharing provider).
These views are a small subset of the ACCOUNT_USAGE views that apply to reader accounts, with the exception of the RESOURCE_MONITORS view, which is available only in READER_ACCOUNT_USAGE. Also, each view in this schema contains an additional READER_ACCOUNT_NAME column for filtering results by reader account. For more details about each view, see READER_ACCOUNT_USAGE views (in this topic).
Note that these views are empty if no reader accounts have been created for your account.
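For example, the READER_ACCOUNT_NAME column can be used to break down credit consumption by reader account. The following is a minimal sketch, assuming at least one reader account exists and using the WAREHOUSE_METERING_HISTORY view described later in this topic:

-- month-to-date credits consumed by each reader account
select reader_account_name,
       sum(credits_used) as total_credits_used
from snowflake.reader_account_usage.warehouse_metering_history
where start_time >= date_trunc(month, current_date)
group by 1
order by 2 desc;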
Differences between Account Usage and Information Schema
The Account Usage views and the corresponding views (or table functions) in the Snowflake Information Schema use identical structures and naming conventions, but with some key differences, as described in this section:
| Difference | Account Usage | Information Schema |
|---|---|---|
| Includes dropped objects | Yes | No |
| Latency of data | From 45 minutes to 3 hours (varies by view) | None |
| Retention of historical data | 1 year | From 7 days to 6 months (varies by view/table function) |
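For example, the DATABASES view exists in both schemas. The following is a sketch comparing the two; the row counts can differ because the Account Usage version includes dropped databases and is subject to latency:

-- Account Usage: includes dropped databases, subject to latency
select count(*) from snowflake.account_usage.databases;

-- Information Schema: current databases only, no latency
select count(*) from snowflake.information_schema.databases;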
For more details, see the following sections.
Dropped object records
Account usage views include records for all objects that have been dropped. Many of the views for object types contain an additional DELETED column that displays the timestamp when the object was dropped.
In addition, because objects can be dropped and recreated with the same name, the account usage views include ID columns, where appropriate, that display the internal IDs generated and assigned to each record by the system; these IDs differentiate records that share the same name.
If a column for an object name (e.g. the TABLE_NAME column) is NULL, that object has been dropped. In this case, the columns for the names and IDs of the parent objects (e.g. the DATABASE_NAME and SCHEMA_NAME columns) are also NULL.
Note that in some views, the column for the object name might still contain the name of the object, even if the object has been dropped.
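For example, the following sketch uses the DELETED column of the TABLES view to list dropped tables, most recently dropped first:

-- tables that have been dropped (DELETED is the drop timestamp)
select table_id, table_name, table_schema, table_catalog, deleted
from snowflake.account_usage.tables
where deleted is not null
order by deleted desc;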
Data latency
Due to the process of extracting the data from Snowflake's internal metadata store, the account usage views have some natural latency:
- For most of the views, the latency is 2 hours (120 minutes).
- For the remaining views, the latency varies between 45 minutes and 3 hours.
For details, see the list of views for each schema (in this topic). Also, note that these are all maximum latencies; the actual latency for a given view at the time it is queried may be lower.
In contrast, views/table functions in the Snowflake Information Schema do not have any latency.
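One way to gauge the current latency of a particular view is to compare its most recent timestamp to the current time. A minimal sketch using the LOGIN_HISTORY view, assuming recent login activity in the account:

-- approximate current latency of LOGIN_HISTORY, in minutes
select timediff(minute, max(event_timestamp), current_timestamp()) as approx_latency_minutes
from snowflake.account_usage.login_history;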
Historical data retention
Certain account usage views provide historical usage metrics. The retention period for these views is 1 year (365 days).
In contrast, the corresponding views and table functions in the Snowflake Information Schema have much shorter retention periods, ranging from 7 days to 6 months, depending on the view.
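The longer retention period supports trend analysis over a full year. For example, the following sketch counts jobs per month for the trailing 12 months using the QUERY_HISTORY view, a window well beyond the retention of the corresponding Information Schema table function:

-- monthly job counts for the trailing 12 months
select date_trunc(month, start_time) as month,
       count(*) as number_of_jobs
from snowflake.account_usage.query_history
where start_time >= dateadd(month, -11, date_trunc(month, current_date))
group by 1
order by 1;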
ACCOUNT_USAGE views
The ACCOUNT_USAGE schema contains the following views:
| View | Type | Latency [1] | Edition [3] | Notes |
|---|---|---|---|---|
| | Historical | 3 hours | Enterprise Edition (or higher) | Data retained for 1 year. |
| | Historical | 3 hours | Enterprise Edition (or higher) | Data retained for 1 year. |
| | Historical | 3 hours | | |
| | Object | 2 hours | | |
| | Historical | 3 hours | | Data retained for 1 year. |
| | Historical | 3 hours | | Data retained for 1 year. |
| | Historical | 3 hours | | Data retained for 1 year. |
| | Object | 3 hours | | Data retained for 1 year. |
| | Object | 3 hours | | Data retained for 1 year. |
| | Object | 90 minutes | | |
| | Historical | 45 minutes | | Data retained for 1 year. |
| | Historical | 3 hours | | Data retained for 1 year. |
| | Historical | 2 hours [2] | | Data retained for 1 year. |
| | Historical | 1 hour | | Data retained for 1 year. |
| | Historical | | | Data retained for 1 year. |
| | Historical | 1 hour | | Data retained for 1 year. |
| | Object | 3 hours | Enterprise Edition (or higher) | Data retained for as long as the table exists. |
| | Object | 3 hours | Enterprise Edition (or higher) | |
| | Historical | 3 hours | Enterprise Edition (or higher) | Data retained for 1 year. |
| | Object | 3 hours | | |
| | Historical | 3 hours | | Data retained for 1 year. |
| | Historical | 3 hours | | Data retained for 1 year. |
| | Historical | 2 hours | | Data retained for 1 year. |
| | Historical | | | Data retained for 1 year. |
| | Historical | 3 hours | | Data retained for 1 year. |
| | Object | 90 minutes | | |
| | Historical | 3 hours | | Data retained for 1 year. |
| | Historical | 2 hours | | Data retained for 1 year. |
| | Object | 90 minutes | | |
| | Object | 2 hours | | |
| | Object | 2 hours | | |
| | Object | 2 hours | | |
| | Object | 2 hours | | |
| | Object | 3 hours | | |
| | Historical | 3 hours | | Data retained for 1 year. |
| | Object | 3 hours | | |
| | Object | 3 hours | | |
| | Historical | 3 hours | | |
| | Historical | 90 minutes [2] | | Data retained for 1 year. |
| | Historical | 3 hours | | Data retained for 1 year. |
| | Historical | 2 hours | | Data retained for 1 year. |
| | Object | 2 hours | | |
| | Historical | 3 hours | Enterprise Edition (or higher) | Data retained for 1 year. |
| | Historical | 3 hours | | Data retained for 1 year. |
| | Historical | 3 hours | | Data retained for 1 year. |
| | Object | 2 hours | | |
| | Object | 2 hours | | |
| | Object | 2 hours | | |
| | Historical | 3 hours | | |
| | Object | 2 hours | | |
| | Object | 2 hours | | |
| | Historical | 3 hours | | Data retained for 1 year. |
| | Object | 2 hours | | |
| | Object | 24 hours | Enterprise Edition (or higher) | |
| | Object | 2 hours | Enterprise Edition (or higher) | |
| | Object | 2 hours | | |
| | Object | 2 hours | | |
| | Historical | 3 hours | | Data retained for 1 year. |
| | Historical | 3 hours | Enterprise Edition (or higher) | Data retained for 1 year. |
| | Historical | 6 hours | | Data retained for 1 year. |
| | Historical | 45 minutes | | Data retained for 1 year. |
| | Object | 2 hours | | |
| | Historical | 3 hours | | Data retained for 1 year. |
| | Historical | 3 hours | | Data retained for 1 year. |
| | Historical | 3 hours | | Data retained for 1 year. |
| | Object | 2 hours | | |
| | Object | 2 hours | | |
| | Object | 2 hours | | |
| | Historical | 6 hours | Enterprise Edition (or higher) | Data retained for 1 year. |
| | Historical | 3 hours | Enterprise Edition (or higher) | Data retained for 1 year. |
| | Object | 2 hours | | |
| | Object | 2 hours | | |
| | Historical | 3 hours | | Data retained for 1 year. |
| | Object | 3 hours | | Data retained for 1 year. |
| | Object | 2 hours | | |
| | Historical | 3 hours | | Data retained for 1 year. |
| | Historical | 3 hours | | Data retained for 1 year. |
| | Historical | 2 hours | | Data retained for 1 year. |
| | Historical | 12 hours | | Data retained for 1 year. |
| | Object | 2 hours | | |
| | Historical | 2 hours | | Data retained for 1 year. |
| | Historical | 2 hours | | Combined usage across all database tables and internal stages. Data retained for 1 year. |
| | Object | 90 minutes | | |
| | Object | 2 hours | | |
| | Historical | 6 hours | | Data retained for 1 year. |
| | Historical | 6 hours | | Data retained for 1 year. |
| | Object | 90 minutes | | |
| | Object | 2 hours | | |
| | Object | 2 hours | | |
| | Historical | 45 minutes | | |
| | Object | 3 hours | | |
| | Object | 2 hours | | |
| | Object | 90 minutes | | |
| | Historical | 3 hours | | Data retained for 1 year. |
| | Historical | 3 hours | | Data retained for 1 year. |
| | Historical | 3 hours | | Data retained for 1 year. |
[1] All latency times are approximate; in some instances, the actual latency may be lower.
[2] The latency of the views for a given table may be up to 2 days if both of the following conditions are true: 1. Fewer than 32 DML statements have been added to the given table since it was last updated in LOAD_HISTORY or COPY_HISTORY. 2. Fewer than 100 rows have been added to the given table since it was last updated in LOAD_HISTORY or COPY_HISTORY.
[3] Unless otherwise noted, the Account Usage view is available to all accounts.
Account Usage table functions
Currently, Snowflake supports one ACCOUNT_USAGE table function:

| Table Function | Data Retention | Notes |
|---|---|---|
| TAG_REFERENCES_WITH_LINEAGE | N/A | Results are only returned for the role that has access to the specified object. |
Note
As with the Account Usage views, account for latency when calling this table function. The expected latency for this table function is similar to the latency for the TAG_REFERENCES view.
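For example, a call along the following lines returns the tags associated with an object, including tags inherited through lineage. This is a sketch; my_db.my_schema.my_table is a placeholder for a fully qualified object name in your account:

-- tags applied to a table, directly or through lineage
select *
from table(snowflake.account_usage.tag_references_with_lineage('my_db.my_schema.my_table', 'table'));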
READER_ACCOUNT_USAGE views
The READER_ACCOUNT_USAGE schema contains the following views:
| View | Type | Latency [1] | Notes |
|---|---|---|---|
| LOGIN_HISTORY | Historical | 2 hours | Data retained for 1 year. |
| QUERY_HISTORY | Historical | 45 minutes | Data retained for 1 year. |
| RESOURCE_MONITORS | Object | 2 hours | |
| STORAGE_USAGE | Historical | 2 hours | Combined usage across all database tables and internal stages. Data retained for 1 year. |
| WAREHOUSE_METERING_HISTORY | Historical | 3 hours | Data retained for 1 year. |
[1] All latency times are approximate; in some instances, the actual latency may be lower.
Enabling other roles to use schemas in the SNOWFLAKE database
By default, the SNOWFLAKE database is visible to all users; however, access to schemas in this database can be granted by a user with the ACCOUNTADMIN role using either of the following approaches:
- Grant IMPORTED PRIVILEGES on the SNOWFLAKE database.
- Grant a SNOWFLAKE database role to an account role.
Important
Be careful when granting privileges to the SNOWFLAKE database within an account that has the ORGADMIN role enabled. Within that account, anyone with privileges to the SNOWFLAKE database can access the ORGANIZATION_USAGE schema.
To avoid unintentionally granting access to organization-level data, consider using SNOWFLAKE database roles to grant access to views in the ACCOUNT_USAGE schema.
For more information, refer to GRANT DATABASE ROLE.
For example, to grant IMPORTED PRIVILEGES on the SNOWFLAKE database to two additional roles:
USE ROLE ACCOUNTADMIN;

GRANT IMPORTED PRIVILEGES ON DATABASE SNOWFLAKE TO ROLE SYSADMIN;
GRANT IMPORTED PRIVILEGES ON DATABASE SNOWFLAKE TO ROLE customrole1;
A user who is granted the customrole1 role can query a view as follows:

USE ROLE customrole1;

SELECT database_name, database_owner FROM SNOWFLAKE.ACCOUNT_USAGE.DATABASES;
For additional examples, see Querying the Account Usage views.
ACCOUNT_USAGE schema SNOWFLAKE database roles
In addition, you can grant finer-grained access using SNOWFLAKE database roles. For more information on database roles, see database roles.
The ACCOUNT_USAGE schema has four defined SNOWFLAKE database roles, each granted the SELECT privilege on specific views.
| Role | Purpose and Description |
|---|---|
| OBJECT_VIEWER | The OBJECT_VIEWER role provides visibility into object metadata. |
| USAGE_VIEWER | The USAGE_VIEWER role provides visibility into historical usage information. |
| GOVERNANCE_VIEWER | The GOVERNANCE_VIEWER role provides visibility into policy-related information. |
| SECURITY_VIEWER | The SECURITY_VIEWER role provides visibility into security-related information. |
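For example, to let a custom role query the historical usage views without granting IMPORTED PRIVILEGES on the entire SNOWFLAKE database, grant it the USAGE_VIEWER database role. A minimal sketch, assuming a hypothetical custom_monitoring_role already exists:

USE ROLE ACCOUNTADMIN;

-- grant access to the historical usage views only
GRANT DATABASE ROLE SNOWFLAKE.USAGE_VIEWER TO ROLE custom_monitoring_role;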
ACCOUNT_USAGE Views by Database Role
The OBJECT_VIEWER, USAGE_VIEWER, GOVERNANCE_VIEWER, and SECURITY_VIEWER roles have the SELECT privilege to query Account Usage views in the shared SNOWFLAKE database.
A checkmark (i.e. ✔) indicates the role is granted the SELECT privilege on the view.
[Table: each Account Usage view, with a checkmark (✔) in the column of each database role (OBJECT_VIEWER, USAGE_VIEWER, GOVERNANCE_VIEWER, SECURITY_VIEWER) that is granted the SELECT privilege on that view.]
READER_ACCOUNT_USAGE schema SNOWFLAKE database roles
The READER_USAGE_VIEWER SNOWFLAKE database role is granted the SELECT privilege on all READER_ACCOUNT_USAGE views. Because reader accounts are created for clients, the READER_USAGE_VIEWER role is intended to be granted to the roles used to monitor reader account usage.
Querying the Account Usage views
This section describes considerations for querying the Account Usage views, along with query examples.
Selecting columns
The Snowflake-specific views are subject to change. Avoid selecting all columns from these views. Instead, select the columns that you want.
For example, if you want the name column, use SELECT name rather than SELECT *.
Reconciling cost views
There are several Account Usage views that contain data related to the cost of compute resources, storage, and data transfers. If you are trying to reconcile these views against a corresponding view in the ORGANIZATION_USAGE schema, you must first set the timezone of the session to UTC.
For example, if you are trying to reconcile ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY to the account’s data in ORGANIZATION_USAGE.WAREHOUSE_METERING_HISTORY, you must run the following command before querying the Account Usage view:
ALTER SESSION SET TIMEZONE = 'UTC';
Examples
The following examples show some typical/useful queries using the views in the ACCOUNT_USAGE schema.
Note
These examples assume the SNOWFLAKE database and the ACCOUNT_USAGE schema are in use for the current session. The examples also assume the ACCOUNTADMIN role (or a role granted IMPORTED PRIVILEGES on the database) is in use. If they are not in use, execute the following commands before running the queries in the examples:
USE ROLE ACCOUNTADMIN;

USE SCHEMA snowflake.account_usage;
Examples: User login metrics
Average number of seconds between failed login attempts by user (month-to-date):
select user_name,
       count(*) as failed_logins,
       avg(seconds_between_login_attempts) as average_seconds_between_login_attempts
from (
  select user_name,
         timediff(seconds, event_timestamp,
                  lead(event_timestamp) over (partition by user_name order by event_timestamp)) as seconds_between_login_attempts
  from login_history
  where event_timestamp > date_trunc(month, current_date)
    and is_success = 'NO'
  )
group by 1
order by 3;
Failed logins by user (month-to-date):
select user_name,
       sum(iff(is_success = 'NO', 1, 0)) as failed_logins,
       count(*) as logins,
       sum(iff(is_success = 'NO', 1, 0)) / nullif(count(*), 0) as login_failure_rate
from login_history
where event_timestamp > date_trunc(month, current_date)
group by 1
order by 4 desc;
Failed logins by user and connecting client (month-to-date):
select reported_client_type,
       user_name,
       sum(iff(is_success = 'NO', 1, 0)) as failed_logins,
       count(*) as logins,
       sum(iff(is_success = 'NO', 1, 0)) / nullif(count(*), 0) as login_failure_rate
from login_history
where event_timestamp > date_trunc(month, current_date)
group by 1,2
order by 5 desc;
Examples: Warehouse performance
This query calculates virtual warehouse performance metrics such as throughput and latency for 15-minute time intervals over the course of one day.
In the code sample below, you can replace CURRENT_WAREHOUSE() with the name of a warehouse to calculate metrics for that warehouse. In addition, change the time_from and time_to dates in the WITH clause to specify the time period.
WITH params AS (
  SELECT
    CURRENT_WAREHOUSE() AS warehouse_name,
    '2021-11-01' AS time_from,
    '2021-11-02' AS time_to
),
jobs AS (
  SELECT
    query_id,
    time_slice(start_time::timestamp_ntz, 15, 'minute', 'start') AS interval_start,
    qh.warehouse_name,
    database_name,
    query_type,
    total_elapsed_time,
    compilation_time AS compilation_and_scheduling_time,
    (queued_provisioning_time + queued_repair_time + queued_overload_time) AS queued_time,
    transaction_blocked_time,
    execution_time
  FROM snowflake.account_usage.query_history qh, params
  WHERE qh.warehouse_name = params.warehouse_name
    AND start_time >= params.time_from
    AND start_time <= params.time_to
    AND execution_status = 'SUCCESS'
    AND query_type IN ('SELECT','UPDATE','INSERT','MERGE','DELETE')
),
interval_stats AS (
  SELECT
    query_type,
    interval_start,
    COUNT(DISTINCT query_id) AS numjobs,
    MEDIAN(total_elapsed_time)/1000 AS p50_total_duration,
    (percentile_cont(0.95) WITHIN GROUP (ORDER BY total_elapsed_time))/1000 AS p95_total_duration,
    SUM(total_elapsed_time)/1000 AS sum_total_duration,
    SUM(compilation_and_scheduling_time)/1000 AS sum_compilation_and_scheduling_time,
    SUM(queued_time)/1000 AS sum_queued_time,
    SUM(transaction_blocked_time)/1000 AS sum_transaction_blocked_time,
    SUM(execution_time)/1000 AS sum_execution_time,
    ROUND(sum_compilation_and_scheduling_time/sum_total_duration,2) AS compilation_and_scheduling_ratio,
    ROUND(sum_queued_time/sum_total_duration,2) AS queued_ratio,
    ROUND(sum_transaction_blocked_time/sum_total_duration,2) AS blocked_ratio,
    ROUND(sum_execution_time/sum_total_duration,2) AS execution_ratio,
    ROUND(sum_total_duration/numjobs,2) AS total_duration_perjob,
    ROUND(sum_compilation_and_scheduling_time/numjobs,2) AS compilation_and_scheduling_perjob,
    ROUND(sum_queued_time/numjobs,2) AS queued_perjob,
    ROUND(sum_transaction_blocked_time/numjobs,2) AS blocked_perjob,
    ROUND(sum_execution_time/numjobs,2) AS execution_perjob
  FROM jobs
  GROUP BY 1,2
  ORDER BY 1,2
)
SELECT * FROM interval_stats;

Note
- Analyze different statement types separately (e.g., SELECT statements independent of INSERT, DELETE, or other statements).
- The NUMJOBS value represents the throughput for that time interval.
- The P50_TOTAL_DURATION (median) and P95_TOTAL_DURATION (peak) values represent latency.
- The SUM_TOTAL_DURATION is the sum of the SUM_<job_stage>_TIME values for the different job stages (COMPILATION_AND_SCHEDULING, QUEUED, BLOCKED, EXECUTION).
- Analyze the <job_stage>_RATIO values when the load (NUMJOBS) increases. Look for ratio changes or deviations from the average.
- If the QUEUED_RATIO is high, there might not be sufficient capacity in the warehouse. Add more clusters or increase the warehouse size.
Examples: Warehouse credit usage
Credits used by each warehouse in your account (month-to-date):
select warehouse_name,
       sum(credits_used) as total_credits_used
from warehouse_metering_history
where start_time >= date_trunc(month, current_date)
group by 1
order by 2 desc;
Credits used over time by each warehouse in your account (month-to-date):
select start_time::date as usage_date,
       warehouse_name,
       sum(credits_used) as total_credits_used
from warehouse_metering_history
where start_time >= date_trunc(month, current_date)
group by 1,2
order by 2,1;
Examples: Data storage usage
Billable terabytes stored in your account over time:
select date_trunc(month, usage_date) as usage_month,
       avg(storage_bytes + stage_bytes + failsafe_bytes) / power(1024, 4) as billable_tb
from storage_usage
group by 1
order by 1;
Examples: User query totals and execution times
Total jobs executed in your account (month-to-date):
select count(*) as number_of_jobs
from query_history
where start_time >= date_trunc(month, current_date);
Total jobs executed by each warehouse in your account (month-to-date):
select warehouse_name,
       count(*) as number_of_jobs
from query_history
where start_time >= date_trunc(month, current_date)
group by 1
order by 2 desc;
Average query execution time by user (month-to-date):
select user_name,
       avg(execution_time) as average_execution_time
from query_history
where start_time >= date_trunc(month, current_date)
group by 1
order by 2 desc;
Average query execution time by query type and warehouse size (month-to-date):
select query_type,
       warehouse_size,
       avg(execution_time) as average_execution_time
from query_history
where start_time >= date_trunc(month, current_date)
group by 1,2
order by 3 desc;
Examples: Obtain a query count for every login event
Join columns from LOGIN_HISTORY, QUERY_HISTORY, and SESSIONS to obtain a query count for each user login event.
Note
The SESSIONS view records information starting on July 20-21, 2020; therefore, the query result only contains overlapping information for the three views starting from that date.
select l.user_name,
       l.event_timestamp as login_time,
       l.client_ip,
       l.reported_client_type,
       l.first_authentication_factor,
       l.second_authentication_factor,
       count(q.query_id)
from snowflake.account_usage.login_history l
join snowflake.account_usage.sessions s on l.event_id = s.login_event_id
join snowflake.account_usage.query_history q on q.session_id = s.session_id
group by 1,2,3,4,5,6
order by l.user_name;