Account Usage¶
Snowflake provides data dictionary object metadata, as well as historical usage data, for your account via a shared database named SNOWFLAKE.
What is the SNOWFLAKE Database?¶
SNOWFLAKE is a system-defined, read-only shared database, provided by Snowflake. The database is automatically imported into each account from a share named ACCOUNT_USAGE. The SNOWFLAKE database is an example of Snowflake utilizing Secure Data Sharing to provide object metadata and other usage metrics for your account.
The SNOWFLAKE database contains two schemas (also read-only). Each schema contains a set of views:
- ACCOUNT_USAGE
Views that display object metadata and usage metrics for your account.
In general, these views mirror the corresponding views and table functions in the Snowflake Information Schema, but with the following differences:
- Records for dropped objects are included in each view.
- Longer retention time for historical usage data.
- Data latency.
For more details, see Differences Between Account Usage and Information Schema (in this topic). For more details about each view, see ACCOUNT_USAGE Views (in this topic).
- READER_ACCOUNT_USAGE
Views that display object metadata and usage metrics for all the reader accounts that have been created for your account (as a Secure Data Sharing provider).
These views are a small subset of the ACCOUNT_USAGE views that apply to reader accounts, with the exception of the RESOURCE_MONITORS view, which is available only in READER_ACCOUNT_USAGE. Also, each view in this schema contains an additional READER_ACCOUNT_NAME column for filtering results by reader account. For more details about each view, see READER_ACCOUNT_USAGE Views (in this topic).
Note that these views are empty if no reader accounts have been created for your account.
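For example, results can be restricted to a single reader account by filtering on the READER_ACCOUNT_NAME column. This sketch assumes a LOGIN_HISTORY view in the schema and a hypothetical reader account named READER_ACCT_1:

```sql
-- Login events for one specific reader account
-- (READER_ACCT_1 is a placeholder name)
select user_name, event_timestamp
from snowflake.reader_account_usage.login_history
where reader_account_name = 'READER_ACCT_1'
order by event_timestamp desc;
```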
The views in the schemas can be queried similarly to any other views in Snowflake:
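For instance, using the DATABASES view (also queried later in this topic) with a fully qualified name:

```sql
-- Query an account usage view like any other view
select *
from snowflake.account_usage.databases;
```

This assumes a role with access to the SNOWFLAKE database (by default, only ACCOUNTADMIN; see Enabling Account Usage for Other Roles in this topic).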
Note that the SNOWFLAKE database contains a third schema, INFORMATION_SCHEMA, which is automatically created in all databases. In shared databases, this schema doesn’t serve a purpose and can be disregarded.
Note
By default, only account administrators (users with the ACCOUNTADMIN role) can access the SNOWFLAKE database and schemas within the database, or perform queries on the views; however, privileges on the database can be granted to other roles in your account to allow other users to access the objects. For more details, see Enabling Account Usage for Other Roles (in this topic).
Differences Between Account Usage and Information Schema¶
The account usage views and the corresponding views (or table functions) in the Information Schema utilize identical structures and naming conventions, but with some key differences, as described in this section:
| Difference | Account Usage | Information Schema |
|---|---|---|
| Includes dropped objects | Yes | No |
| Latency of data | From 45 minutes to 3 hours (varies by view) | None |
| Retention of historical data | 1 year | From 7 days to 6 months (varies by view/table function) |
For more details, see the following sections.
Dropped Object Records¶
Account usage views include records for all objects that have been dropped. An additional DELETED column displays the timestamp when the object was dropped.
In addition, because objects can be dropped and recreated with the same name, the account usage views include ID columns, where appropriate, to differentiate between records that share the same object name. These columns display the internal IDs generated and assigned to each record by the system.
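As a sketch, the DATABASES view can be used to list dropped databases; the column names here assume the view exposes DATABASE_ID, DATABASE_NAME, and DELETED:

```sql
-- Dropped databases, most recently dropped first;
-- DATABASE_ID distinguishes recreated databases with the same name
select database_id, database_name, deleted
from snowflake.account_usage.databases
where deleted is not null
order by deleted desc;
```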
Data Latency¶
Due to the process of extracting the data from Snowflake’s internal metadata store, the account usage views have some natural latency:
For most of the views, the latency is 2 hours (120 minutes).
For the remaining views, the latency varies between 45 minutes and 3 hours.
For details, see the list of views for each schema (in this topic). Also, note that these are all maximum latencies; the actual latency for a given view at the time it is queried may be lower.
In contrast, views/table functions in the Information Schema do not have any latency.
Historical Data Retention¶
Certain account usage views provide historical usage metrics. The retention period for these views is 1 year (365 days).
In contrast, the corresponding views and table functions in the Information Schema have much shorter retention periods, ranging from 7 days to 6 months, depending on the view.
ACCOUNT_USAGE Views¶
The ACCOUNT_USAGE schema contains the following views:
| View | Type | Latency [1] | Notes |
|---|---|---|---|
|  | Historical | 3 hours | Data retained for 1 year. |
|  | Object | 90 minutes |  |
|  | Historical | 2 hours | Data retained for 1 year. |
|  | Object | 3 hours |  |
|  | Historical | 3 hours | Data retained for 1 year. |
|  | Historical | 2 hours | Data retained for 1 year. |
|  | Object | 2 hours |  |
|  | Object | 2 hours |  |
|  | Object | 2 hours |  |
|  | Object | 2 hours |  |
|  | Historical | 90 minutes | Data retained for 1 year. |
|  | Historical | 2 hours | Data retained for 1 year. |
|  | Historical | 3 hours | Data retained for 1 year. |
|  | Historical | 3 hours | Data retained for 1 year. |
|  | Historical | 3 hours | Data retained for 1 year. |
|  | Object | 2 hours |  |
|  | Historical | 3 hours | Data retained for 1 year. |
|  | Historical | 45 minutes | Data retained for 1 year. |
|  | Object | 2 hours |  |
|  | Historical | 3 hours | Data retained for 1 year. |
|  | Object | 2 hours |  |
|  | Object | 2 hours |  |
|  | Historical | 3 hours | Data retained for 1 year. |
|  | Object | 2 hours |  |
|  | Historical | 3 hours | Data retained for 1 year. |
|  | Object | 2 hours |  |
|  | Historical | 2 hours | Data retained for 1 year. |
|  | Historical | 2 hours | Combined usage across all database tables and internal stages. Data retained for 1 year. |
|  | Object | 90 minutes |  |
|  | Object | 2 hours |  |
|  | Object | 90 minutes |  |
|  | Historical | 45 minutes |  |
|  | Object | 2 hours |  |
|  | Object | 90 minutes |  |
|  | Historical | 3 hours | Data retained for 1 year. |
|  | Historical | 3 hours | Data retained for 1 year. |
[1] All latency times are approximate; in some instances, the actual latency may be lower.
READER_ACCOUNT_USAGE Views¶
The READER_ACCOUNT_USAGE schema contains the following views:
| View | Type | Latency [1] | Notes |
|---|---|---|---|
|  | Historical | 2 hours | Data retained for 1 year. |
|  | Historical | 45 minutes | Data retained for 1 year. |
|  | Object | 2 hours |  |
|  | Historical | 2 hours | Combined usage across all database tables and internal stages. Data retained for 1 year. |
|  | Historical | 3 hours | Data retained for 1 year. |
[1] All latency times are approximate; in some instances, the actual latency may be lower.
Enabling Account Usage for Other Roles¶
By default, the SNOWFLAKE database is available only to the ACCOUNTADMIN role.
To enable other roles to access the database and schemas, and to query the views, a user with the ACCOUNTADMIN role must grant the IMPORTED PRIVILEGES data sharing privilege on the database to the desired roles.
For example:
USE ROLE ACCOUNTADMIN;

GRANT IMPORTED PRIVILEGES ON DATABASE snowflake TO ROLE SYSADMIN;
GRANT IMPORTED PRIVILEGES ON DATABASE snowflake TO ROLE customrole1;

USE ROLE customrole1;

SELECT * FROM snowflake.account_usage.databases;
Querying the Account Usage Views¶
This section provides examples of some typical/useful queries using the views in the ACCOUNT_USAGE schema.
Note
These examples assume the SNOWFLAKE database and the ACCOUNT_USAGE schema are in use for the current session. The examples also assume the ACCOUNTADMIN role (or a role granted IMPORTED PRIVILEGES on the database) is in use. If they are not in use, execute the following commands before running the queries in the examples:
USE ROLE ACCOUNTADMIN;
USE SCHEMA snowflake.account_usage;
Examples: User Login Metrics¶
Average number of seconds between failed login attempts by user (month-to-date):
select user_name,
       count(*) as failed_logins,
       avg(seconds_between_login_attempts) as average_seconds_between_login_attempts
from (
  select user_name,
         timediff(seconds,
                  event_timestamp,
                  lead(event_timestamp) over (partition by user_name order by event_timestamp)
         ) as seconds_between_login_attempts
  from login_history
  where event_timestamp > date_trunc(month, current_date)
    and is_success = 'NO'
)
group by 1
order by 3;
Failed logins by user (month-to-date):
select user_name,
       sum(iff(is_success = 'NO', 1, 0)) as failed_logins,
       count(*) as logins,
       sum(iff(is_success = 'NO', 1, 0)) / nullif(count(*), 0) as login_failure_rate
from login_history
where event_timestamp > date_trunc(month, current_date)
group by 1
order by 4 desc;
Failed logins by user and connecting client (month-to-date):
select reported_client_type,
       user_name,
       sum(iff(is_success = 'NO', 1, 0)) as failed_logins,
       count(*) as logins,
       sum(iff(is_success = 'NO', 1, 0)) / nullif(count(*), 0) as login_failure_rate
from login_history
where event_timestamp > date_trunc(month, current_date)
group by 1, 2
order by 5 desc;
Examples: Warehouse Credit Usage¶
Credits used by each warehouse in your account (month-to-date):
select warehouse_name,
       sum(credits_used) as total_credits_used
from warehouse_metering_history
where start_time >= date_trunc(month, current_date)
group by 1
order by 2 desc;
Credits used over time by each warehouse in your account (month-to-date):
select start_time::date as usage_date,
       warehouse_name,
       sum(credits_used) as total_credits_used
from warehouse_metering_history
where start_time >= date_trunc(month, current_date)
group by 1, 2
order by 2, 1;
Examples: Data Storage Usage¶
Billable terabytes stored in your account over time:
select date_trunc(month, usage_date) as usage_month,
       avg(storage_bytes + stage_bytes + failsafe_bytes) / power(1024, 4) as billable_tb
from storage_usage
group by 1
order by 1;
Examples: User Query Totals and Execution Times¶
Total jobs executed in your account (month-to-date):
select count(*) as number_of_jobs
from query_history
where start_time >= date_trunc(month, current_date);
Total jobs executed by each warehouse in your account (month-to-date):
select warehouse_name,
       count(*) as number_of_jobs
from query_history
where start_time >= date_trunc(month, current_date)
group by 1
order by 2 desc;
Average query execution time by user (month-to-date):
select user_name,
       avg(execution_time) as average_execution_time
from query_history
where start_time >= date_trunc(month, current_date)
group by 1
order by 2 desc;
Average query execution time by query type and warehouse size (month-to-date):
select query_type,
       warehouse_size,
       avg(execution_time) as average_execution_time
from query_history
where start_time >= date_trunc(month, current_date)
group by 1, 2
order by 3 desc;
Examples: Obtain a Query Count for Every Login Event¶
Join columns from LOGIN_HISTORY, QUERY_HISTORY, and SESSIONS to obtain a query count for each user login event.
select l.user_name,
       l.event_timestamp as login_time,
       l.client_ip,
       l.reported_client_type,
       l.first_authentication_factor,
       l.second_authentication_factor,
       count(q.query_id)
from snowflake.account_usage.login_history l
join snowflake.account_usage.sessions s on l.event_id = s.login_event_id
join snowflake.account_usage.query_history q on q.session_id = s.session_id
group by 1, 2, 3, 4, 5, 6
order by l.user_name;