Parameters¶
Snowflake provides parameters that let you control the behavior of your account, individual user sessions, and objects. All parameters have default values, which can be overridden at different levels depending on the parameter type (Account, Session, or Object).
Parameter Hierarchy and Types¶
This section describes the different types of parameters (Account, Session, and Object) and the levels at which each type can be set.
The following diagram illustrates the hierarchical relationship between the different parameter types and how individual parameters can be overridden at each level:
Account Parameters¶
Account parameters can be set only at the account level by users with the appropriate administrator role. Account parameters are set using the ALTER ACCOUNT command.
Snowflake provides the following account parameters:
| Parameter | Notes |
| --- | --- |
| ALLOW_ID_TOKEN | Used to enable connection caching in browser-based single sign-on (SSO) for Snowflake-provided clients. |
| CLIENT_ENCRYPTION_KEY_SIZE | Used for encryption of files staged for data loading or unloading; might require additional installation and configuration (see the parameter description for details). |
| NETWORK_POLICY | The only account parameter that can be set by either account administrators (i.e. users with the ACCOUNTADMIN system role) or security administrators (i.e. users with the SECURITYADMIN system role). |
Note
By default, account parameters are not displayed in the output of SHOW PARAMETERS. For more information about viewing account parameters, see Viewing the Parameters and Their Values (in this topic).
Session Parameters¶
Most parameters are session parameters, which can be set at the following levels:
- Account
Account administrators can use the ALTER ACCOUNT command to set session parameters for the account. The values set for the account default to individual users and their sessions.
- User
Administrators with the appropriate privileges (typically the SECURITYADMIN role) can use the ALTER USER command to override session parameters for individual users. The values set for a user default to any sessions started by the user. In addition, users can override default session parameters for themselves using ALTER USER.
- Session
Users can use the ALTER SESSION command to explicitly set session parameters within their sessions.
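The three levels above can be sketched with AUTOCOMMIT, a session parameter described later in this topic (my_user is an illustrative name):

```sql
-- Account level: default for all users and sessions (requires ACCOUNTADMIN)
ALTER ACCOUNT SET AUTOCOMMIT = FALSE;

-- User level: overrides the account default for new sessions started by this user
ALTER USER my_user SET AUTOCOMMIT = TRUE;

-- Session level: overrides the account and user settings for the current session only
ALTER SESSION SET AUTOCOMMIT = FALSE;
```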
Note
By default, only session parameters are displayed in the output of SHOW PARAMETERS. For more information about viewing account and object parameters, see Viewing the Parameters and Their Values (in this topic).
Object Parameters¶
Object parameters can be set at the following levels:
- Account
Account administrators can use the ALTER ACCOUNT command to set object parameters for the account. The values set for the account default to the objects created in the account.
- Object
Users with the appropriate privileges can use the corresponding CREATE <object> or ALTER <object> commands to override object parameters for an individual object.
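As a sketch of the two levels, using DATA_RETENTION_TIME_IN_DAYS (described later in this topic; mytable is an illustrative name):

```sql
-- Account level: default for all objects created in the account
ALTER ACCOUNT SET DATA_RETENTION_TIME_IN_DAYS = 2;

-- Object level: override for an individual table, at creation or afterwards
CREATE TABLE mytable (c1 INT) DATA_RETENTION_TIME_IN_DAYS = 5;
ALTER TABLE mytable SET DATA_RETENTION_TIME_IN_DAYS = 5;
```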
Snowflake provides the following object parameters:
| Parameter | Object Type | Notes |
| --- | --- | --- |
| ALLOW_CLIENT_MFA_CACHING | Account | |
| DATA_RETENTION_TIME_IN_DAYS | Database, Schema, Table | |
| DEFAULT_DDL_COLLATION | Database, Schema, Table | |
| MAX_CONCURRENCY_LEVEL | Warehouse | |
| MAX_DATA_EXTENSION_TIME_IN_DAYS | Database, Schema, Table | |
| PIPE_EXECUTION_PAUSED | Schema, Pipe | |
| STATEMENT_QUEUED_TIMEOUT_IN_SECONDS | Warehouse | Also a session parameter (i.e. can be set at both the object and session levels). For inheritance and override details, see the parameter description. |
| STATEMENT_TIMEOUT_IN_SECONDS | Warehouse | Also a session parameter (i.e. can be set at both the object and session levels). For inheritance and override details, see the parameter description. |
| NETWORK_POLICY | User | The only account parameter that can be set by either account administrators (i.e. users with the ACCOUNTADMIN system role) or security administrators (i.e. users with the SECURITYADMIN system role). |
| SHARE_RESTRICTIONS | Share | Can be set by either account administrators (i.e. users with the ACCOUNTADMIN system role) or a role with the OVERRIDE SHARE RESTRICTIONS privilege. |
Note
By default, object parameters are not displayed in the output of SHOW PARAMETERS. For more information about viewing object parameters, see Viewing the Parameters and Their Values (in this topic).
Viewing the Parameters and Their Values¶
Snowflake provides the SHOW PARAMETERS command, which displays a list of the parameters, along with the current and default values for each parameter. The command can be called with different options to determine the type of parameter displayed.
Viewing Session and Object Parameters¶
By default, the command displays only session parameters:
SHOW PARAMETERS;
To display the object parameters for a specific object, include an IN clause with the object type and name. For example:
SHOW PARAMETERS IN DATABASE mydb;
SHOW PARAMETERS IN WAREHOUSE mywh;
Viewing All Parameters¶
To display all parameters, including account and object parameters, include an IN ACCOUNT clause:
SHOW PARAMETERS IN ACCOUNT;
Limiting the List of Parameters by Name¶
The command also supports using a LIKE clause to limit the list of parameters by name. For example:
To display the session parameters whose names contain “time”:
SHOW PARAMETERS LIKE '%time%';
To display all the parameters whose names start with “time”:
SHOW PARAMETERS LIKE 'time%' IN ACCOUNT;
Note
The LIKE clause must come before the IN clause.
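For example, combining the two clauses (mywh is an illustrative warehouse name):

```sql
-- LIKE filter comes first, then the IN clause
SHOW PARAMETERS LIKE '%timeout%' IN WAREHOUSE mywh;
```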
ABORT_DETACHED_QUERY¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Boolean
- Description
Specifies the action that Snowflake performs for in-progress queries if connectivity is lost due to abrupt termination of a session (e.g. network outage, browser termination, service interruption).
- Values
TRUE: In-progress queries are aborted 5 minutes after connectivity is lost.
FALSE: In-progress queries are completed.
- Default
FALSE
Note
If the user explicitly closes the connection, then all in-progress queries are aborted immediately regardless of the parameter value.
Most queries require compute resources to execute. These resources are provided by virtual warehouses, which consume credits while running. With a value of FALSE, if the session terminates, warehouses might continue running and consuming credits to complete any queries that were in progress at the time the session terminated.
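To avoid warehouses running orphaned queries to completion, the parameter can be enabled for the current session, for example:

```sql
-- Abort in-progress queries 5 minutes after connectivity is lost
ALTER SESSION SET ABORT_DETACHED_QUERY = TRUE;
```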
ALLOW_CLIENT_MFA_CACHING¶
- Type
Object — Can only be set for Account
- Data Type
Boolean
- Description
Specifies whether an MFA token can be saved in the client-side operating system keystore to promote continuous, secure connectivity without users needing to respond to an MFA prompt at the start of each connection attempt to Snowflake. For details and the list of supported Snowflake-provided clients, see Using MFA Token Caching to Minimize the Number of Prompts During Authentication — Optional.
- Values
TRUE: Stores an MFA token in the client-side operating system keystore to enable the client application to use the MFA token whenever a new connection is established. While true, users are not prompted to respond to additional MFA prompts.
FALSE: Does not store an MFA token. Users must respond to an MFA prompt whenever the client application establishes a new connection with Snowflake.
- Default
FALSE
ALLOW_ID_TOKEN¶
- Type
Account — Can be set only for Account
- Data Type
Boolean
- Description
Specifies whether a connection token can be saved in the client-side operating system keystore to promote continuous, secure connectivity without users needing to enter login credentials at the start of each connection attempt to Snowflake. For details and the list of supported Snowflake-provided clients, see Using Connection Caching to Minimize the Number of Prompts for Authentication — Optional.
- Values
TRUE: Stores a connection token in the client-side operating system keystore to enable the client application to perform browser-based SSO without prompting users to authenticate whenever a new connection is established.
FALSE: Does not store a connection token. Users are prompted to authenticate whenever the client application establishes a new connection with Snowflake. SSO to Snowflake is still possible if this parameter is set to false.
- Default
FALSE
AUTOCOMMIT¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Boolean
- Description
Specifies whether autocommit is enabled for the session. Autocommit determines whether a DML statement, when executed without an active transaction, is automatically committed after the statement successfully completes. For more information, see Transactions.
- Values
TRUE: Autocommit is enabled.
FALSE: Autocommit is disabled, meaning DML statements must be explicitly committed or rolled back.
- Default
TRUE
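With autocommit disabled, DML must be wrapped in an explicit transaction, as in this sketch (t, c1, and id are illustrative names):

```sql
ALTER SESSION SET AUTOCOMMIT = FALSE;

BEGIN;
UPDATE t SET c1 = 1 WHERE id = 42;  -- not visible to other sessions yet
COMMIT;                             -- or ROLLBACK to discard the change
```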
AUTOCOMMIT_API_SUPPORTED (View-only)¶
- Type
N/A
- Data Type
Boolean
- Description
For Snowflake internal use only. View-only parameter that indicates whether API support for autocommit is enabled for your account. If the value is TRUE, you can enable or disable autocommit through the APIs for the supported drivers/connectors.
BINARY_INPUT_FORMAT¶
- Type
Session — Can be set for Account » User » Session
- Data Type
String (Constant)
- Description
Input format for binary values.
- Values
HEX, BASE64, or UTF8/UTF-8
- Default
HEX
BINARY_OUTPUT_FORMAT¶
- Type
Session — Can be set for Account » User » Session
- Data Type
String (Constant)
- Description
Display format for binary values.
- Values
HEX or BASE64
- Default
HEX
CLIENT_ENABLE_LOG_INFO_STATEMENT_PARAMETERS¶
- Type
Session — Can be set only for Session
- Data Type
Boolean
- Clients
JDBC
- Description
Enables users to log the data values bound to PreparedStatements.
To see the values, you must not only set this session-level parameter to TRUE, but also set the connection parameter named TRACING to either INFO or ALL.
Set TRACING to ALL to see all debugging information and all binding information.
Set TRACING to INFO to see the binding parameter values with less of the other debug information.
Caution
If you bind confidential information, such as medical diagnoses or passwords, that information is logged. Snowflake recommends making sure that the log file is secure, or only using test data, when you set this parameter to TRUE.
- Values
TRUE or FALSE
- Default
FALSE
CLIENT_ENCRYPTION_KEY_SIZE¶
- Type
Account — Can be set only for Account
- Data Type
Integer
- Clients
Any
- Description
Specifies the AES encryption key size, in bits, used by Snowflake to encrypt/decrypt files stored in internal stages (for loading/unloading data).
- Values
128 or 256
- Default
128
Note
This parameter is not used for encrypting/decrypting files stored in external stages (i.e. S3 buckets or Azure containers). Encryption/decryption of these files is accomplished using an external encryption key explicitly specified in the COPY command or in the named external stage referenced in the command.
If you are using the JDBC driver and you wish to set this parameter to 256 (for strong encryption), additional JCE policy files must be installed on each client machine from which data is loaded/unloaded. For more information about installing the required files, see Java Requirements for the JDBC Driver.
If you are using the Python connector (or SnowSQL) and you wish to set this parameter to 256 (for strong encryption), no additional installation or configuration tasks are required.
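A sketch of enabling strong encryption at the account level (subject to the JDBC client-side requirements noted above):

```sql
-- Use 256-bit AES keys for files in internal stages (requires ACCOUNTADMIN)
ALTER ACCOUNT SET CLIENT_ENCRYPTION_KEY_SIZE = 256;
```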
CLIENT_MEMORY_LIMIT¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Integer
- Clients
JDBC, ODBC
- Description
Parameter that specifies the maximum amount of memory the JDBC driver or ODBC driver should use for the result set from queries (in MB).
For the JDBC driver:
To simplify JVM memory management, the parameter sets a global maximum memory usage limit for all queries.
CLIENT_RESULT_CHUNK_SIZE specifies the maximum size of each set (or chunk) of query results to download (in MB). The driver might require additional memory to process a chunk; if so, it will adjust memory usage during runtime to process at least one thread/query. Verify that CLIENT_MEMORY_LIMIT is set significantly higher than CLIENT_RESULT_CHUNK_SIZE to ensure sufficient memory is available.
For the ODBC driver:
This parameter is supported in version 2.22.0 and higher. CLIENT_RESULT_CHUNK_SIZE is not supported.
Note
The driver will attempt to honor the parameter value, but will cap usage at 80% of your system memory.
The memory usage limit set in this parameter does not apply to any other JDBC or ODBC driver operations (e.g. connecting to the database, preparing a query, or PUT and GET statements).
- Values
Any valid number of megabytes.
- Default
1536 (effectively 1.5 GB)

Most users should not need to set this parameter. If this parameter is not set by the user, the driver starts with the default specified above. In addition, the JDBC driver actively manages its memory conservatively to avoid using up all available memory.
CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX¶
- Type
Session — Can be set for User » Session
- Data Type
Boolean
- Clients
JDBC, ODBC
- Description
For specific ODBC functions and JDBC methods, this parameter can change the default search scope from all databases/schemas to the current database/schema. The narrower search typically returns fewer rows and executes more quickly.
For example, the getTables() JDBC method accepts a database name and schema name as arguments, and returns the names of the tables in the database and schema. If the database and schema arguments are null, then by default, the method searches all databases and all schemas in the account. Setting CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX to TRUE narrows the search to the current database and schema specified by the connection context.
In essence, setting this parameter to TRUE creates the following precedence for database and schema:
1. Values passed as arguments to the functions/methods.
2. Values specified in the connection context (if any).
3. Default (all databases and all schemas).
For more details, see below.
This parameter applies to the following:
JDBC driver methods (for the DatabaseMetaData class):
getColumns
getCrossReference
getExportedKeys
getForeignKeys
getFunctions
getImportedKeys
getPrimaryKeys
getSchemas
getTables
ODBC driver functions:
SQLTables
SQLColumns
SQLPrimaryKeys
SQLForeignKeys
SQLGetFunctions
SQLProcedures
- Values
TRUE: If the database and schema arguments are null, then the driver retrieves metadata for only the database and schema specified by the connection context. The interaction is described in more detail in the table below.
FALSE: If the database and schema arguments are null, then the driver retrieves metadata for all databases and schemas in the account.
- Default
FALSE
- Additional Notes
The connection context refers to the current database and schema for the session, which can be set using any of the following options:
- Specify the default namespace for the user who connects to Snowflake (and initiates the session). This can be set for the user through the CREATE USER or ALTER USER command, but must be set before the user connects.
- Specify the database and schema when connecting to Snowflake through the driver.
- Issue a USE DATABASE or USE SCHEMA command within the session.
If the database or schema was specified by more than one of these, then the most recent one applies.
When CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX is set to TRUE:

| database argument | schema argument | Database used | Schema used |
| --- | --- | --- | --- |
| Non-null | Non-null | Argument | Argument |
| Non-null | Null | Argument | All schemas |
| Null | Non-null | Connection context | Argument |
| Null | Null | Connection context | Session context |
Note
For the JDBC driver, this behavior applies to version 3.6.27 (and higher). For the ODBC driver, this behavior applies to version 2.12.96 (and higher).
If you want to search only the connection context database, but want to search all schemas within that database, see CLIENT_METADATA_USE_SESSION_DATABASE.
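A sketch of establishing a connection context in the session and then narrowing driver metadata calls to it (mydb and public are illustrative names):

```sql
-- Set the connection context for the session
USE DATABASE mydb;
USE SCHEMA public;

-- Driver metadata calls with null database/schema arguments
-- now search only mydb.public
ALTER SESSION SET CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX = TRUE;
```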
CLIENT_METADATA_USE_SESSION_DATABASE¶
- Type
Session — Can be set for Session
- Data Type
Boolean
- Clients
JDBC
- Description
This parameter applies to only the methods affected by CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX.
This parameter applies only when both of the following conditions are met:
- CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX is FALSE or unset.
- No database or schema is passed to the relevant ODBC function or JDBC method.
For specific ODBC functions and JDBC methods, this parameter can change the default search scope from all databases to the current database. The narrower search typically returns fewer rows and executes more quickly.
For more details, see below.
- Values
TRUE: The driver searches all schemas in the connection context’s database. (For more details about the connection context, see the documentation for CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX.)
FALSE: The driver searches all schemas in all databases.
- Default
FALSE
- Additional Notes
When the database is null and the schema is null and CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX is FALSE:

| CLIENT_METADATA_USE_SESSION_DATABASE | Behavior |
| --- | --- |
| FALSE | All schemas in all databases are searched. |
| TRUE | All schemas in the current database are searched. |
CLIENT_PREFETCH_THREADS¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Integer
- Clients
JDBC, Python
- Description
Parameter that specifies the number of threads used by the client to pre-fetch large result sets. The driver will attempt to honor the parameter value, but defines the minimum and maximum values (depending on your system’s resources) to improve performance.
- Values
1 to 10
- Default
4
Most users should not need to set this parameter. If this parameter is not set by the user, the driver starts with the default specified above, but also actively manages its thread count conservatively to avoid using up all available memory.
CLIENT_RESULT_CHUNK_SIZE¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Integer
- Clients
JDBC
- Description
Parameter that specifies the maximum size of each set (or chunk) of query results to download (in MB). The JDBC driver downloads query results in chunks.
Also see CLIENT_MEMORY_LIMIT.
- Values
48 to 160
- Default
160
Most users should not need to set this parameter. If this parameter is not set by the user, the driver starts with the default specified above, but also actively manages its memory conservatively to avoid using up all available memory.
CLIENT_RESULT_COLUMN_CASE_INSENSITIVE¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Boolean
- Clients
JDBC
- Description
Parameter that indicates whether to match column names case-insensitively in ResultSet.get* methods in JDBC.
- Values
TRUE: matches column names case-insensitively.
FALSE: matches column names case-sensitively.
- Default
FALSE
CLIENT_SESSION_KEEP_ALIVE¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Boolean
- Clients
JDBC, ODBC, Python, Node.js
- Description
Parameter that indicates whether to force a user to log in again after a period of inactivity in the session.
- Values
TRUE: Snowflake keeps the session active indefinitely as long as the connection is active, even if there is no activity from the user.
FALSE: The user must log in again after four hours of inactivity.
- Default
FALSE
CLIENT_SESSION_KEEP_ALIVE_HEARTBEAT_FREQUENCY¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Integer
- Clients
SnowSQL, JDBC, Python, Node.js
- Description
Number of seconds between client attempts to update the token for the session.
- Values
900 to 3600
- Default
3600
CLIENT_TIMESTAMP_TYPE_MAPPING¶
- Type
Session — Can be set for Account » User » Session
- Data Type
String (Constant)
- Clients
Any
- Description
Specifies the TIMESTAMP_* variation to use when binding timestamp variables for JDBC or ODBC applications that use the bind API to load data.
- Values
TIMESTAMP_LTZ or TIMESTAMP_NTZ
- Default
TIMESTAMP_LTZ
DATA_RETENTION_TIME_IN_DAYS¶
- Type
Object (for databases, schemas, and tables) — Can be set for Account » Database » Schema » Table
- Data Type
Integer
- Description
Number of days for which Snowflake retains historical data for performing Time Travel actions (SELECT, CLONE, UNDROP) on the object. A value of 0 effectively disables Time Travel for the specified database, schema, or table. For more information, see Understanding & Using Time Travel.
- Values
0 or 1 (for Standard Edition)
0 to 90 (for Enterprise Edition or higher)
- Default
1
DATE_INPUT_FORMAT¶
- Type
Session — Can be set for Account » User » Session
- Data Type
String
- Description
Specifies the input format for the DATE data type. For more information, see Date and Time Input / Output.
- Values
Any valid, supported date format or AUTO
(AUTO specifies that Snowflake attempts to automatically detect the format of dates stored in the system during the session)
- Default
AUTO
DATE_OUTPUT_FORMAT¶
- Type
Session — Can be set for Account » User » Session
- Data Type
String
- Description
Specifies the display format for the DATE data type. For more information, see Date and Time Input / Output.
- Values
Any valid, supported date format
- Default
YYYY-MM-DD
DEFAULT_DDL_COLLATION¶
- Type
Object (for databases, schemas, and tables) — Can be set for Account » Database » Schema » Table
- Data Type
String
- Description
Sets the default collation used for the following DDL operations:
ALTER TABLE … ADD COLUMN
Setting this parameter forces all subsequently-created columns in the affected objects (table, schema, database, or account) to have the specified collation as the default, unless the collation for the column is explicitly defined in the DDL.
For example, if DEFAULT_DDL_COLLATION = 'en-ci', then the following two statements are equivalent:
create table test(c1 integer, c2 string, c3 string collate 'en-cs');
create table test(c1 integer, c2 string collate 'en-ci', c3 string collate 'en-cs');
- Values
Any valid, supported collation specification.
- Default
Empty string
Note
To set the default collation for the account, set the parameter using the ALTER ACCOUNT command. The default collation for table columns can also be set at the table, schema, or database level, during creation or any time afterwards, using the corresponding CREATE or ALTER command.
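For example (a sketch; 'en-ci' and t1 are illustrative values):

```sql
-- Account level (requires ACCOUNTADMIN)
ALTER ACCOUNT SET DEFAULT_DDL_COLLATION = 'en-ci';

-- Table level, at creation or afterwards
CREATE TABLE t1 (c1 STRING) DEFAULT_DDL_COLLATION = 'en-ci';
ALTER TABLE t1 SET DEFAULT_DDL_COLLATION = 'en-ci';
```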
ENABLE_UNLOAD_PHYSICAL_TYPE_OPTIMIZATION¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Boolean
- Description
Specifies whether to set the schema for unloaded Parquet files based on the logical column data types (i.e. the types in the unload SQL query or source table) or on the unloaded column values (i.e. the smallest data types and precision that support the values in the output columns of the unload SQL statement or source table).
- Values
TRUE: The schema of unloaded Parquet data files is determined by the column values in the unload SQL query or source table. The data type and precision of each output column are set to the smallest data type and precision that support its values. Accept this setting for better performance and smaller data files.
FALSE: The schema is determined by the logical column data types. Set this value for a consistent output file schema.
- Default
TRUE
ERROR_ON_NONDETERMINISTIC_MERGE¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Boolean
- Description
Specifies whether to return an error when the MERGE command is used to update or delete a target row that joins multiple source rows and the system cannot determine the action to perform on the target row.
- Values
TRUE: An error is returned that includes values from one of the target rows that caused the error.
FALSE: No error is returned and the merge completes successfully, but the results of the merge are nondeterministic.
- Default
TRUE
ERROR_ON_NONDETERMINISTIC_UPDATE¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Boolean
- Description
Specifies whether to return an error when the UPDATE command is used to update a target row that joins multiple source rows and the system cannot determine the action to perform on the target row.
- Values
TRUE: An error is returned that includes values from one of the target rows that caused the error.
FALSE: No error is returned and the update completes, but the results of the update are nondeterministic.
- Default
FALSE
GEOGRAPHY_OUTPUT_FORMAT¶
- Type
Session — Can be set for Account » User » Session
- Data Type
String (Constant)
- Description
Display format for geospatial values.
For EWKT and EWKB, the SRID is always 4326 in the output. See the note on EWKT and EWKB handling.
- Values
GeoJSON, WKT, WKB, EWKT, or EWKB
- Default
GeoJSON
INITIAL_REPLICATION_SIZE_LIMIT_IN_TB¶
- Type
Account — Can be set only for Account
- Data Type
Number
- Description
Sets the maximum estimated size limit for the initial replication of a primary database to a secondary database (in TB). Set this parameter on any account that stores a secondary database. This size limit helps prevent accounts from accidentally incurring large database replication charges.
To remove the size limit, set the value to 0.0.
Note that there is currently no default size limit applied to subsequent refreshes of a secondary database.
- Values
0.0 and above, with a scale of at least 1 (e.g. 20.5, 32.25, 33.333, etc.)
- Default
10.0
JDBC_TREAT_DECIMAL_AS_INT¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Boolean
- Description
Specifies how JDBC processes columns that have a scale of zero (0).
- Values
TRUE: JDBC processes a column whose scale is zero as BIGINT.
FALSE: JDBC processes a column whose scale is zero as DECIMAL.
- Default
TRUE
JDBC_TREAT_TIMESTAMP_NTZ_AS_UTC¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Boolean
- Description
Specifies how JDBC processes TIMESTAMP_NTZ values.
By default, when the JDBC driver fetches a value of type TIMESTAMP_NTZ from Snowflake, the driver converts the value from UTC to the current timezone, based on the user’s session.
Users who wish to keep TIMESTAMP_NTZ values in UTC can set this parameter to TRUE.
This parameter applies only to the JDBC driver.
- Values
TRUE: The driver does not change the TIMESTAMP_NTZ value; the value stays in UTC.
FALSE: The driver converts the TIMESTAMP_NTZ value from UTC to the user’s current time zone.
- Default
FALSE
JDBC_USE_SESSION_TIMEZONE¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Boolean
- Description
Specifies whether the JDBC Driver uses the time zone of the JVM or the time zone of the session (specified by the TIMEZONE parameter) for the getDate(), getTime(), and getTimestamp() methods of the ResultSet class.
- Values
TRUE: The JDBC Driver uses the time zone of the session.
FALSE: The JDBC Driver uses the time zone of the JVM.
- Default
TRUE
JSON_INDENT¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Integer
- Description
Specifies the number of blank spaces to indent each new element in JSON output in the session. Also specifies whether to insert newline characters after each element.
- Values
0 to 16
(a value of 0 returns compact output by removing all blank spaces and newline characters from the output)
- Default
2
Note
This parameter does not affect JSON unloaded from a table into a file using the COPY INTO <location> command. The command always unloads JSON data in the ndjson format:
- Each record from the table separated by a newline character.
- Within each record, compact formatting (i.e. no spaces or newline characters).
JS_TREAT_INTEGER_AS_BIGINT¶
- Type
Session — Can be set for Session
- Data Type
Boolean
- Description
Specifies how the Snowflake Node.js Driver processes numeric columns that have a scale of zero (0), for example INTEGER or NUMBER(p, 0).
- Values
TRUE: JavaScript processes a column whose scale is zero as Bigint.
FALSE: JavaScript processes a column whose scale is zero as Number.
- Default
FALSE
Note
By default, Snowflake INTEGER columns (including BIGINT, NUMBER(p, 0), etc.) are converted to JavaScript’s Number data type. However, the largest legal Snowflake integer values are larger than the largest legal JavaScript Number values. To convert Snowflake INTEGER columns to JavaScript Bigint, which can store larger values than JavaScript Number, set the session parameter JS_TREAT_INTEGER_AS_BIGINT.
For examples of how to use this parameter, see Fetching Integer Data Types as Bigint.
LOCK_TIMEOUT¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Number
- Description
Number of seconds to wait while trying to lock a resource, before timing out and aborting the statement.
- Values
0 to any number (i.e. no limit)
(a value of 0 disables lock waiting, i.e. the statement must acquire the lock immediately or abort; if multiple resources need to be locked by the statement, the timeout applies separately to each lock attempt)
- Default
43200 (i.e. 12 hours)
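To fail fast on lock contention instead of waiting up to the 12-hour default, a shorter timeout can be set for the session, for example:

```sql
-- Wait at most 5 minutes for each lock before aborting the statement
ALTER SESSION SET LOCK_TIMEOUT = 300;
```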
MAX_CONCURRENCY_LEVEL¶
- Type
Object (for warehouses) — Can be set for Account » Warehouse
- Data Type
Number
- Description
Specifies the concurrency level for SQL statements (i.e. queries and DML) executed by a warehouse cluster. When the level is reached, the operation performed depends on whether the warehouse is single-cluster or multi-cluster:
Single-cluster or multi-cluster (in Maximized mode): Statements are queued until already-allocated resources are freed or additional resources are provisioned, which can be accomplished by increasing the size of the warehouse.
Multi-cluster (in Auto-scale mode): Additional clusters are started.
MAX_CONCURRENCY_LEVEL can be used in conjunction with the STATEMENT_QUEUED_TIMEOUT_IN_SECONDS parameter to ensure a warehouse is never backlogged.
Note that this parameter does not limit the number of statements that can be executed concurrently by a warehouse cluster. Instead, it serves as an upper-boundary to protect against over-allocation of resources. As each statement is submitted to a warehouse, Snowflake allocates resources for executing the statement; if there aren’t enough resources available, the statement is queued or additional clusters are started, depending on the warehouse.
The actual number of statements executed concurrently by a warehouse cluster might be more or less than the specified level:
Smaller, more basic statements: More statements might execute concurrently because small statements generally execute on a subset of the servers in a warehouse cluster. This means they only count as a fraction towards the concurrency level.
Larger, more complex statements: Fewer statements might execute concurrently.
- Default
8
Tip
This value is a default only and can be changed at any time:
Lowering the concurrency level for a warehouse increases the compute resource allocation per statement, which potentially results in faster query performance, particularly for large/complex and multi-statement queries.
Raising the concurrency level for a warehouse decreases the compute resource allocation per statement; however, it does not necessarily limit the total number of concurrent queries that can be executed by the warehouse, nor does it necessarily improve total warehouse performance, which depends on the nature of the queries being executed.
Note that, as described earlier, this parameter impacts multi-cluster warehouses (in Auto-scale mode) because Snowflake automatically starts a new cluster within the warehouse to avoid queuing. Thus, lowering the concurrency level for a multi-cluster warehouse (in Auto-scale mode) potentially increases the number of active clusters at any time.
Also, remember that Snowflake automatically allocates resources for each statement when it is submitted and the allocated amount is dictated by the individual requirements of the statement. Based on this, and through observations of user query patterns over time, we’ve selected a default that balances performance and resource usage.
As such, before changing the default, we recommend that you test the change by adjusting the parameter in small increments and observing the impact against a representative set of your queries.
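A sketch of such an incremental adjustment at the warehouse level (mywh is an illustrative name):

```sql
-- Lower the level to allocate more resources per statement
ALTER WAREHOUSE mywh SET MAX_CONCURRENCY_LEVEL = 4;

-- Verify the new value
SHOW PARAMETERS IN WAREHOUSE mywh;
```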
MAX_DATA_EXTENSION_TIME_IN_DAYS¶
- Type
Object (for databases, schemas, and tables) — Can be set for Account » Database » Schema » Table
- Data Type
Integer
- Description
Maximum number of days for which Snowflake can extend the data retention period for tables to prevent streams on the tables from becoming stale. By default, if the DATA_RETENTION_TIME_IN_DAYS setting for a source table is less than 14 days, and a stream has not been consumed, Snowflake temporarily extends this period to the stream’s offset, up to a maximum of 14 days, regardless of the Snowflake Edition for your account. The MAX_DATA_EXTENSION_TIME_IN_DAYS parameter enables you to limit this automatic extension period to control storage costs for data retention or for compliance reasons.
This parameter can be set at the account, database, schema, and table levels. Note that setting the parameter at the account or schema level only affects tables for which the parameter has not already been explicitly set at a lower level (e.g. at the table level by the table owner). A value of 0 effectively disables the automatic extension for the specified database, schema, or table. For more information about streams and staleness, see Change Tracking Using Table Streams.
- Values
0 to 90 (i.e. 90 days) — a value of 0 disables the automatic extension of the data retention period. To increase the maximum value for tables in your account, contact Snowflake Support.
- Default
14
Note
This parameter can cause data to be retained longer than the default data retention period. Before increasing the value, confirm that it meets your compliance requirements.
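For instance, to cap or disable the automatic extension (the database and table names are hypothetical):

```sql
-- Cap the automatic retention extension at 30 days for a single table.
ALTER TABLE my_db.my_schema.my_table SET MAX_DATA_EXTENSION_TIME_IN_DAYS = 30;

-- Disable automatic extension for all tables in a database that have not
-- set the parameter explicitly at a lower level.
ALTER DATABASE my_db SET MAX_DATA_EXTENSION_TIME_IN_DAYS = 0;
```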
MULTI_STATEMENT_COUNT¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Integer (Constant)
- Clients
JDBC, ODBC
- Description
Number of statements to execute when using the multi-statement capability.
- Values
0: Variable number of statements.
1: One statement.
Negative numbers are not permitted.
- Default
1
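A minimal sketch of changing the setting for a session (the multi-statement requests themselves are submitted through a supporting client, such as the JDBC or ODBC driver):

```sql
-- Allow a variable number of statements per multi-statement request in this session.
ALTER SESSION SET MULTI_STATEMENT_COUNT = 0;

-- Restore the default: exactly one statement per request.
ALTER SESSION SET MULTI_STATEMENT_COUNT = 1;
```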
NETWORK_POLICY¶
- Type
Account — Can be set only for Account (can be set by account administrators and security administrators)
- Type
Object (for users) — Can be set for Account » User
- Data Type
String
- Description
Specifies the network policy to enforce for your account. Network policies enable restricting access to your account based on users’ IP address. For more details, see Network Policies.
- Values
Any existing network policy (created using CREATE NETWORK POLICY)
- Default
None
Note
This is the only account parameter that can be set by security administrators (i.e. users with the SECURITYADMIN system role) or higher.
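A minimal sketch of creating and enforcing a policy (the policy name and IP range are hypothetical):

```sql
-- Create a policy that only allows connections from a corporate CIDR range.
CREATE NETWORK POLICY corp_policy ALLOWED_IP_LIST = ('192.168.1.0/24');

-- Enforce the policy for the entire account
-- (requires the SECURITYADMIN role or higher).
ALTER ACCOUNT SET NETWORK_POLICY = corp_policy;
```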
PERIODIC_DATA_REKEYING¶
- Type
Account — Can be set only for Account
- Data Type
Boolean
- Description
This parameter only applies to Enterprise Edition (or higher). It enables/disables re-encryption of table data with new keys on a yearly basis to provide additional levels of data protection.
You can enable and disable rekeying at any time. Enabling/disabling rekeying does not result in gaps in your encrypted data:
If rekeying is enabled for a period of time and then disabled, all data already tagged for rekeying is rekeyed, but no further data is rekeyed until you re-enable rekeying.
If rekeying is re-enabled, Snowflake automatically rekeys all data that has keys which meet the criteria (i.e. key is older than one year).
For more information about rekeying of encrypted data, see Data Encryption.
- Values
TRUE: Data is rekeyed after one year has passed since the data was last encrypted. Rekeying occurs in the background, so no downtime is experienced and the affected data/table is always available.
FALSE: Data is not rekeyed.
- Default
FALSE
Note
There are charges associated with data rekeying because, after data is rekeyed, the old data (with the previous key encryption) is maintained in Fail-safe for the standard time period (7 days). For this reason, periodic rekeying is disabled by default. To enable periodic rekeying, you must explicitly enable it.
Also, Fail-safe charges for rekeying are not listed individually in your monthly statement; they are included in the Fail-safe total for your account each month.
For more information about Fail-safe, see Understanding & Viewing Fail-safe.
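Enabling the parameter is a single account-level statement:

```sql
-- Enable annual rekeying (Enterprise Edition or higher).
-- Note: rekeyed data incurs Fail-safe storage charges for the old keys' data.
ALTER ACCOUNT SET PERIODIC_DATA_REKEYING = TRUE;
```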
PIPE_EXECUTION_PAUSED¶
- Type
Object — Can be set for Account » Schema » Pipe
- Data Type
Boolean
- Description
Specifies whether to pause a running pipe, primarily in preparation for transferring ownership of the pipe to a different role:
An account administrator (user with the ACCOUNTADMIN role) can set this parameter at the account level, effectively pausing or resuming all pipes in the account.
A user with the MODIFY privilege on a schema can pause or resume all pipes in the schema.
The pipe owner can set this parameter for a pipe.
Note that setting the parameter at the account or schema level only affects pipes for which the parameter has not already been explicitly set at a lower level (e.g. at the pipe level by the pipe owner).
This enables the practical use case in which an account administrator can pause all pipes at the account level, while a pipe owner can still have an individual pipe running.
- Values
TRUE: Pauses the pipe. When the parameter is set to this value, the SYSTEM$PIPE_STATUS function shows the executionState as PAUSED. Note that the pipe owner can continue to submit files to a paused pipe; however, the files are not processed until the pipe is resumed.
FALSE: Resumes the pipe, but only if ownership of the pipe has not been transferred while it was paused. When the parameter is set to this value, the SYSTEM$PIPE_STATUS function shows the executionState as RUNNING.
If ownership of the pipe was transferred to another role after the pipe was paused, this parameter cannot be used to resume the pipe. Instead, use the SYSTEM$PIPE_FORCE_RESUME function to explicitly force the pipe to resume. This enables the new owner to use SYSTEM$PIPE_STATUS to evaluate the pipe status (e.g. determine how many files are waiting to be loaded) before resuming the pipe.
- Default
FALSE (pipes are running by default)
Note
In general, pipes do not need to be paused, except when transferring ownership.
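The ownership-transfer workflow described above can be sketched as follows (the pipe name is hypothetical):

```sql
-- Pause the pipe before transferring ownership.
ALTER PIPE my_db.my_schema.my_pipe SET PIPE_EXECUTION_PAUSED = TRUE;

-- Check the pipe state; executionState should report PAUSED.
SELECT SYSTEM$PIPE_STATUS('my_db.my_schema.my_pipe');

-- After ownership is transferred and the new owner has reviewed the status,
-- force the pipe to resume.
SELECT SYSTEM$PIPE_FORCE_RESUME('my_db.my_schema.my_pipe');
```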
PREVENT_UNLOAD_TO_INLINE_URL¶
- Type
Account — Can be set only for Account
- Data Type
Boolean
- Description
Specifies whether to prevent ad hoc data unload operations to external cloud storage locations (i.e. COPY INTO <location> statements that specify the cloud storage URL and access settings directly in the statement). For an example, see Unloading Data from a Table Directly to Files in an External Location.
- Values
TRUE: COPY INTO <location> statements must reference either a named internal (Snowflake) or external stage, or an internal user or table stage. A named external stage must store the cloud storage URL and access settings in its definition.
FALSE: Ad hoc data unload operations to external cloud storage locations are permitted.
- Default
FALSE
QUERY_TAG¶
- Type
Session — Can be set for Account » User » Session
- Data Type
String (up to 2000 characters)
- Description
Optional string that can be used to tag queries and other SQL statements executed within a session. The tags are displayed in the output of the QUERY_HISTORY and QUERY_HISTORY_BY_* functions.
- Default
None
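A minimal sketch of tagging and then locating queries (the tag value is hypothetical):

```sql
-- Tag all subsequent statements in this session with a job identifier.
ALTER SESSION SET QUERY_TAG = 'nightly_etl_run';

-- Later, find the tagged queries in the session's query history.
SELECT query_id, query_text
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY())
WHERE query_tag = 'nightly_etl_run';
```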
QUOTED_IDENTIFIERS_IGNORE_CASE¶
- Type
Session — Can be set for Account » User » Session
Object (for tables) — Can be set for Account » Database » Schema » Table
- Data Type
Boolean
- Description
Specifies whether the case of letters is ignored for all object identifiers in double quotes.
- Values
TRUE: Double-quoted identifiers are case-insensitive (i.e. case is ignored and all letters are treated as uppercase).
FALSE: Double-quoted identifiers are case-sensitive; Snowflake resolves and stores the identifiers in the specified case.
For more information, see Identifier Resolution.
- Default
FALSE
For example (illustrative identifiers; the resolution follows from the behavior described above):

| Identifier | Param set to TRUE | Param set to FALSE |
|---|---|---|
| "columnName" | resolves to: COLUMNNAME | resolves to: columnName |
| "COLUMNNAME" | resolves to: COLUMNNAME | resolves to: COLUMNNAME |
| "Column_Name" | resolves to: COLUMN_NAME | resolves to: Column_Name |
REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_CREATION¶
- Type
Account — Can be set only for Account
- Data Type
Boolean
- Description
Specifies whether to require a storage integration object as cloud credentials when creating a named external stage (using CREATE STAGE) to access a private cloud storage location.
- Values
TRUE: Creating an external stage to access a private cloud storage location requires referencing a storage integration object as cloud credentials.
FALSE: Creating an external stage does not require referencing a storage integration object. Users can instead reference explicit cloud provider credentials, such as secret keys or access tokens, if they have been configured for the storage location.
- Default
FALSE
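With the parameter enabled, new external stages must reference an integration. A sketch (the integration, stage, and bucket names are hypothetical):

```sql
-- Require storage integrations for new external stages.
ALTER ACCOUNT SET REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_CREATION = TRUE;

-- Stage creation now must reference a storage integration rather than
-- embedding secret keys or access tokens directly.
CREATE STAGE my_ext_stage
  URL = 's3://my-bucket/path/'
  STORAGE_INTEGRATION = my_s3_integration;
```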
REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_OPERATION¶
- Type
Account — Can be set only for Account
- Data Type
Boolean
- Description
Specifies whether to require using a named external stage that references a storage integration object as cloud credentials when loading data from or unloading data to a private cloud storage location.
- Values
TRUE: Loading data from or unloading data to a private cloud storage location requires using a named external stage that references a storage integration object; specifying a named external stage that references explicit cloud provider credentials, such as secret keys or access tokens, produces a user error.
FALSE: Users can load data from or unload data to a private cloud storage location using a named external stage that references explicit cloud provider credentials.
If PREVENT_UNLOAD_TO_INLINE_URL is FALSE, then users can specify the explicit cloud provider credentials directly in the COPY statement.
- Default
FALSE
ROWS_PER_RESULTSET¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Number
- Description
Specifies the maximum number of rows returned in a result set.
- Values
0 to any number (i.e. no limit) — a value of 0 specifies no maximum.
- Default
0
SAML_IDENTITY_PROVIDER¶
- Type
Account — Can be set only for Account
- Data Type
JSON
- Description
Enables federated authentication. The parameter accepts a JSON object, enclosed in single quotes, with the following fields:
{ "certificate": "", "issuer": "", "ssoUrl": "", "type" : "", "label" : "" }
Where:
certificate
Specifies the certificate (generated by the IdP) that verifies communication between the IdP and Snowflake.
issuer
Indicates the Issuer/EntityID of the IdP.
Optional.
For information on how to obtain this value in Okta and ADFS, see Configuring Snowflake to Use Federated Authentication.
ssoUrl
Specifies the URL endpoint (provided by the IdP) where Snowflake sends the SAML requests.
type
Specifies the type of IdP used for federated authentication ("OKTA", "ADFS", "Custom").
label
Specifies the button text for the IdP in the Snowflake login page. The default label is Single Sign On. If you change the default label, the label you specify can only contain alphanumeric characters (i.e. special characters and blank spaces are not currently supported).
Note that, if the "type" field is "Okta", a value for the label field does not need to be specified because Snowflake displays the Okta logo in the button.
For more information, including examples of setting the parameter, see Configuring Snowflake to Use Federated Authentication.
- Default
None
SIMULATED_DATA_SHARING_CONSUMER¶
- Type
Session — Can be set for Account » User » Session
- Data Type
String
- Description
Specifies the name of a consumer account to simulate for testing/validating shared data, particularly shared secure views. When this parameter is set in a session, shared views will return rows as if executed in the specified consumer account rather than the provider account.
Note
At this time, the SIMULATED_DATA_SHARING_CONSUMER session parameter only supports secure views and secure materialized views, but does not support secure UDFs.
For more information, see Introduction to Secure Data Sharing and Working with Shares.
- Default
None
Important
This is a session parameter, which means it can be set at the account level; however, it only applies to testing queries on shared views. Because the parameter affects all queries in a session, it should never be set at the account level.
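A sketch of a typical testing flow (the account and view names are hypothetical):

```sql
-- Simulate the named consumer account while testing a shared secure view.
ALTER SESSION SET SIMULATED_DATA_SHARING_CONSUMER = 'consumer_acct1';

-- Rows are returned as if this query ran in consumer_acct1.
SELECT * FROM my_db.my_schema.my_secure_view;

-- Unset the parameter when testing is complete so subsequent queries
-- run in the provider context again.
ALTER SESSION UNSET SIMULATED_DATA_SHARING_CONSUMER;
```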
SSO_LOGIN_PAGE¶
- Type
Account — Can be set only for Account
- Data Type
Boolean
- Description
Disables preview mode for testing SSO (after enabling federated authentication) before rolling it out to users:
- Values
TRUE: Preview mode is disabled and users will see the button for Snowflake-initiated SSO for your identity provider (as specified in SAML_IDENTITY_PROVIDER) in the Snowflake main login page.
FALSE: Preview mode is enabled and SSO can be tested using the following URL:
If your account is in US West:
https://<account_name>.snowflakecomputing.com/console/login?fedpreview=true
If your account is in any other region:
https://<account_name>.<region_id>.snowflakecomputing.com/console/login?fedpreview=true
For more information, see Configuring Snowflake to Use Federated Authentication.
- Default
FALSE
STATEMENT_QUEUED_TIMEOUT_IN_SECONDS¶
- Type
Session and Object (for warehouses)
Can be set for Account » User » Session; can also be set for individual warehouses
- Data Type
Number
- Description
Amount of time, in seconds, a SQL statement (query, DDL, DML, etc.) remains queued for a warehouse before it is canceled by the system. This parameter can be used in conjunction with the MAX_CONCURRENCY_LEVEL parameter to ensure a warehouse is never backlogged.
The parameter can be set within the session hierarchy. It can also be set for a warehouse to control the queue timeout for all SQL statements processed by the warehouse. When the parameter is set for both a warehouse and a session, the lowest non-zero value is enforced. For example:
A warehouse has a queued timeout of 120 seconds.
The queued timeout for the session is set to 60 seconds.
The session timeout takes precedence (i.e. any statement submitted in the session is canceled after being queued for longer than 60 seconds).
- Values
0 to any number (i.e. no limit) — a value of 0 specifies that no timeout is enforced. A statement remains queued as long as the queue persists.
- Default
0 (i.e. no timeout)
STATEMENT_TIMEOUT_IN_SECONDS¶
- Type
Session and Object (for warehouses)
Can be set for Account » User » Session; can also be set for individual warehouses
- Data Type
Number
- Description
Amount of time, in seconds, after which a running SQL statement (query, DDL, DML, etc.) is canceled by the system.
The parameter can be set within the session hierarchy. It can also be set for an individual warehouse to control the runtime for all SQL statements processed by the warehouse. When the parameter is set for both a warehouse and a session, the lowest non-zero value is enforced. For example:
A warehouse has a timeout of 1000 seconds.
The timeout for the session is set to 500 seconds.
The session timeout takes precedence (i.e. any statement submitted in the session is canceled after running for longer than 500 seconds).
- Values
0 to 604800 (i.e. 7 days) — a value of 0 specifies that the maximum timeout value is enforced.
- Default
172800 (i.e. 2 days)
STRICT_JSON_OUTPUT¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Boolean
- Description
This parameter specifies whether JSON output in a session is compatible with the general standard (as described by http://json.org).
By design, Snowflake allows JSON input that contains non-standard values; however, these non-standard values might result in Snowflake outputting JSON that is incompatible with other platforms and languages. This parameter, when enabled, ensures that Snowflake outputs valid/compatible JSON.
- Values
TRUE: Strict JSON output is enabled, enforcing the following behavior:
Missing and undefined values in the input are mapped to JSON NULL.
Non-finite numeric values in the input (Infinity, -Infinity, NaN, etc.) are mapped to strings with valid JavaScript representations. This enables compatibility with JavaScript and also allows conversion of these values back to numeric values.
FALSE: Strict JSON output is not enabled.
- Default
FALSE
For example (illustrative inputs; the mapping follows from the behavior described above):

| Non-standard JSON Input | Param set to TRUE | Param set to FALSE |
|---|---|---|
| {"a": undefined} | outputs: {"a": null} | outputs: {"a": undefined} |
| {"b": NaN} | outputs: {"b": "NaN"} | outputs: {"b": NaN} |
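A quick way to observe the effect in a session (exact output strings may vary; PARSE_JSON accepts non-standard tokens such as NaN):

```sql
ALTER SESSION SET STRICT_JSON_OUTPUT = TRUE;

-- With strict output enabled, the non-finite value is emitted as a quoted
-- string rather than a bare token, producing standard-compliant JSON.
SELECT TO_JSON(PARSE_JSON('{"a": NaN}'));
```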
TIMESTAMP_DAY_IS_ALWAYS_24H¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Boolean
- Description
Specifies whether the DATEADD function (and its aliases) always considers a day to be exactly 24 hours for expressions that span multiple days.
- Values
TRUE: A day is always exactly 24 hours.
FALSE: A day is not always 24 hours.
- Default
FALSE
Important
If set to TRUE, the actual time of day might not be preserved when daylight saving time (DST) is in effect. For example:
alter session set TIMESTAMP_DAY_IS_ALWAYS_24H = true;
-- With DST beginning on 2018-03-11 at 2 AM, America/Los_Angeles time zone
select dateadd(day, 1, '2018-03-10 09:00:00'::TIMESTAMP_LTZ), dateadd(day, 1, '2018-11-03 09:00:00'::TIMESTAMP_LTZ);
+-------------------------------------------------------+-------------------------------------------------------+
| DATEADD(DAY, 1, '2018-03-10 09:00:00'::TIMESTAMP_LTZ) | DATEADD(DAY, 1, '2018-11-03 09:00:00'::TIMESTAMP_LTZ) |
|-------------------------------------------------------+-------------------------------------------------------|
| 2018-03-11 10:00:00.000 -0700 | 2018-11-04 08:00:00.000 -0800 |
+-------------------------------------------------------+-------------------------------------------------------+
alter session set TIMESTAMP_DAY_IS_ALWAYS_24H = false;
select dateadd(day, 1, '2018-03-10 09:00:00'::TIMESTAMP_LTZ), dateadd(day, 1, '2018-11-03 09:00:00'::TIMESTAMP_LTZ);
+-------------------------------------------------------+-------------------------------------------------------+
| DATEADD(DAY, 1, '2018-03-10 09:00:00'::TIMESTAMP_LTZ) | DATEADD(DAY, 1, '2018-11-03 09:00:00'::TIMESTAMP_LTZ) |
|-------------------------------------------------------+-------------------------------------------------------|
| 2018-03-11 09:00:00.000 -0700 | 2018-11-04 09:00:00.000 -0800 |
+-------------------------------------------------------+-------------------------------------------------------+
TIMESTAMP_INPUT_FORMAT¶
- Type
Session — Can be set for Account » User » Session
- Data Type
String
- Description
Specifies the input format for the TIMESTAMP data type alias. For more information, see Date and Time Input / Output.
- Values
Any valid, supported timestamp format or AUTO
(AUTO specifies that Snowflake attempts to automatically detect the format of timestamps stored in the system during the session)
- Default
AUTO
TIMESTAMP_LTZ_OUTPUT_FORMAT¶
- Type
Session — Can be set for Account » User » Session
- Data Type
String
- Description
Specifies the display format for the TIMESTAMP_LTZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and Time Input / Output.
- Values
Any valid, supported timestamp format
- Default
None
TIMESTAMP_NTZ_OUTPUT_FORMAT¶
- Type
Session — Can be set for Account » User » Session
- Data Type
String
- Description
Specifies the display format for the TIMESTAMP_NTZ data type.
- Values
Any valid, supported timestamp format
- Default
YYYY-MM-DD HH24:MI:SS.FF3
TIMESTAMP_OUTPUT_FORMAT¶
- Type
Session — Can be set for Account » User » Session
- Data Type
String
- Description
Specifies the display format for the TIMESTAMP data type alias. For more information, see Date and Time Input / Output.
- Values
Any valid, supported timestamp format
- Default
YYYY-MM-DD HH24:MI:SS.FF3 TZHTZM
TIMESTAMP_TYPE_MAPPING¶
- Type
Session — Can be set for Account » User » Session
- Data Type
String
- Description
Specifies the TIMESTAMP_* variation that the TIMESTAMP data type alias maps to.
- Values
TIMESTAMP_LTZ, TIMESTAMP_NTZ, or TIMESTAMP_TZ
- Default
TIMESTAMP_NTZ
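A sketch of changing the mapping and observing its effect on DDL (the table name is hypothetical):

```sql
-- Map the TIMESTAMP alias to TIMESTAMP_LTZ for this session.
ALTER SESSION SET TIMESTAMP_TYPE_MAPPING = 'TIMESTAMP_LTZ';

-- Columns declared as TIMESTAMP in this session are created as TIMESTAMP_LTZ.
CREATE TABLE my_events (created_at TIMESTAMP);
DESC TABLE my_events;
```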
TIMESTAMP_TZ_OUTPUT_FORMAT¶
- Type
Session — Can be set for Account » User » Session
- Data Type
String
- Description
Specifies the display format for the TIMESTAMP_TZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT. For more information, see Date and Time Input / Output.
- Values
Any valid, supported timestamp format
- Default
None
TIMEZONE¶
- Type
Session — Can be set for Account » User » Session
- Data Type
String (Constant)
- Description
Specifies the time zone for the session. For a list of time zones, see the list of tz database time zones (in Wikipedia).
- Values
Standard iana.org time zones: America/Los_Angeles, Europe/London, UTC, Etc/GMT, etc.
- Default
America/Los_Angeles
Note
Time zone names are case-sensitive and must be enclosed in single quotes (e.g. 'UTC').
Also, certain simple time zones, such as PDT, are not currently supported.
TIME_INPUT_FORMAT¶
- Type
Session — Can be set for Account » User » Session
- Data Type
String
- Description
Specifies the input format for the TIME data type. For more information, see Date and Time Input / Output.
- Values
Any valid, supported time format or AUTO
(AUTO specifies that Snowflake attempts to automatically detect the format of times stored in the system during the session)
- Default
AUTO
TIME_OUTPUT_FORMAT¶
- Type
Session — Can be set for Account » User » Session
- Data Type
String
- Description
Specifies the display format for the TIME data type. For more information, see Date and Time Input / Output.
- Values
Any valid, supported time format
- Default
HH24:MI:SS
TRANSACTION_ABORT_ON_ERROR¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Boolean
- Description
Specifies the action to perform when a statement issued within a non-autocommit transaction returns with an error.
- Values
TRUE: The non-autocommit transaction is aborted. All statements issued inside that transaction will fail until a commit or rollback statement is executed to close the transaction.
FALSE: The non-autocommit transaction is not aborted.
- Default
FALSE
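A sketch of the TRUE behavior (the table names are hypothetical):

```sql
ALTER SESSION SET TRANSACTION_ABORT_ON_ERROR = TRUE;

BEGIN;
INSERT INTO my_table VALUES (1);   -- succeeds
SELECT * FROM nonexistent_table;   -- fails and aborts the transaction
INSERT INTO my_table VALUES (2);   -- fails until the transaction is closed
ROLLBACK;                          -- closes the aborted transaction
```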
TRANSACTION_DEFAULT_ISOLATION_LEVEL¶
- Type
Session — Can be set for Account » User » Session
- Data Type
String
- Description
Specifies the isolation level for transactions in the user session.
- Values
READ COMMITTED (only currently-supported value)
- Default
READ COMMITTED
TWO_DIGIT_CENTURY_START¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Number
- Description
Specifies the “century start” year for 2-digit years (i.e. the earliest year such dates can represent). This parameter prevents ambiguous dates when importing or converting data with the YY date format component (i.e. years represented as 2 digits).
- Values
1900 to 2100 (any value outside of this range returns an error)
- Default
1970
For example (illustrative values; each 2-digit year maps to the unique year in the 100-year window beginning at the century start):

| 2-digit Year | Param set to 1900 | Param set to 1950 | Param set to 1970 | Param set to 1980 | Param set to 2000 |
|---|---|---|---|---|---|
| 00 | 1900 | 2000 | 2000 | 2000 | 2000 |
| 49 | 1949 | 2049 | 2049 | 2049 | 2049 |
| 50 | 1950 | 1950 | 2050 | 2050 | 2050 |
| 99 | 1999 | 1999 | 1999 | 1999 | 2099 |
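The mapping can be observed directly in a session:

```sql
-- With the century start at 1980, 2-digit years map into 1980-2079,
-- so '79' is interpreted as 2079 rather than 1979.
ALTER SESSION SET TWO_DIGIT_CENTURY_START = 1980;
SELECT TO_DATE('79', 'YY');
```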
UNSUPPORTED_DDL_ACTION¶
- Type
Session — Can be set for Account » User » Session
- Data Type
String (Constant)
- Description
Specifies whether an unsupported (i.e. non-default) value specified for a constraint property returns an error.
- Values
IGNORE: Snowflake does not return an error for unsupported values.
FAIL: Snowflake returns an error for unsupported values.
- Default
IGNORE
Important
This parameter does not determine whether the constraint is created. Snowflake does not create constraints using unsupported values, regardless of how this parameter is set.
For more information, see Constraint Properties.
USE_CACHED_RESULT¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Boolean
- Description
Specifies whether to reuse cached query results, if available, when a matching query is submitted.
- Values
TRUE: When a query is submitted, Snowflake checks for matching query results for previously-executed queries and, if a matching result exists, uses the result instead of executing the query. This can help reduce query time because Snowflake retrieves the result directly from the cache.
FALSE: Snowflake executes each query when submitted, regardless of whether a matching query result exists.
- Default
TRUE
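One common use is disabling the cache while benchmarking, so each run actually executes on the warehouse:

```sql
-- Disable result reuse so repeated runs of the same query are re-executed.
ALTER SESSION SET USE_CACHED_RESULT = FALSE;

-- Re-enable the default behavior after benchmarking.
ALTER SESSION SET USE_CACHED_RESULT = TRUE;
```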
WEEK_OF_YEAR_POLICY¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Number
- Description
Specifies how the weeks in a given year are computed.
- Values
0: The semantics used are equivalent to the ISO semantics, in which a week belongs to a given year if at least 4 days of that week are in that year.
1: January 1 is included in the first week of the year and December 31 is included in the last week of the year.
- Default
0 (i.e. ISO-like behavior)
Tip
1 is the most common value, based on feedback we’ve received. For more information, including examples, see Calendar Weeks and Weekdays.
WEEK_START¶
- Type
Session — Can be set for Account » User » Session
- Data Type
Number
- Description
Specifies the first day of the week (used by week-related date functions).
- Values
0: Legacy Snowflake behavior is used (i.e. ISO-like semantics).
1 (Monday) to 7 (Sunday): All the week-related functions use weeks that start on the specified day of the week.
- Default
0 (i.e. legacy Snowflake behavior)
Tip
1 is the most common value, based on feedback we’ve received. For more information, including examples, see Calendar Weeks and Weekdays.
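A sketch of the effect on a week-related function:

```sql
-- Treat Monday as the first day of the week for week-related functions.
ALTER SESSION SET WEEK_START = 1;

-- DAYOFWEEK now numbers days relative to a Monday start.
SELECT DAYOFWEEK('2019-09-01'::DATE);  -- 2019-09-01 is a Sunday
```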