Parameters¶
Snowflake provides parameters that let you control the behavior of your account, individual user sessions, and objects. All parameters have default values, which can be overridden at different levels depending on the parameter type (Account, Session, or Object).
Parameter hierarchy and types¶
This section describes the different types of parameters (Account, Session, and Object) and the levels at which each type can be set.
The following diagram illustrates the hierarchical relationship between the different parameter types and how individual parameters can be overridden at each level:
Account parameters¶
Account parameters can be set only at the account level by users with the appropriate administrator role. Account parameters are set using the ALTER ACCOUNT command.
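For example, an administrator can set or reset an account parameter with a single statement (ALLOW_CLIENT_MFA_CACHING, documented below, is used here for illustration):

```sql
-- Set an account parameter (requires an administrator role, e.g. ACCOUNTADMIN)
ALTER ACCOUNT SET ALLOW_CLIENT_MFA_CACHING = TRUE;

-- Revert the parameter to its default value
ALTER ACCOUNT UNSET ALLOW_CLIENT_MFA_CACHING;
```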
Snowflake provides the following account parameters:

| Parameter | Notes |
|---|---|
| ALLOW_CLIENT_MFA_CACHING | Used to enable caching of MFA tokens for Snowflake-provided clients. |
| ALLOW_ID_TOKEN | Used to enable connection caching in browser-based single sign-on (SSO) for Snowflake-provided clients. |
| CLIENT_ENCRYPTION_KEY_SIZE | Used for encryption of files staged for data loading or unloading; might require additional installation and configuration (see description for details). |
| CORTEX_ENABLED_CROSS_REGION | Used to enable cross-region processing of Snowflake Cortex calls in a different region if the call cannot be processed in your account region. |
| ENABLE_INTERNAL_STAGES_PRIVATELINK | Allows the SYSTEM$GET_PRIVATELINK_CONFIG function to return the private-internal-stages key in the query result. |
| ENABLE_TRI_SECRET_AND_REKEY_OPT_OUT_FOR_IMAGE_REPOSITORY | Used to specify an image repository's choice to opt out of Tri-Secret Secure and periodic rekeying. |
| ENABLE_TRI_SECRET_AND_REKEY_OPT_OUT_FOR_SPCS_BLOCK_STORAGE | Used to specify a block storage volume's choice to opt out of Tri-Secret Secure and periodic rekeying. |
| EVENT_TABLE | Used to set an event table as the active event table for the account. |
| MIN_DATA_RETENTION_TIME_IN_DAYS | Used to set the minimum data retention period for retaining historical data for Time Travel operations. |
| NETWORK_POLICY | This is the only account parameter that can be set by either account administrators (i.e. users with the ACCOUNTADMIN system role) or security administrators (i.e. users with the SECURITYADMIN system role). For more information, see Object parameters. |
Note
By default, account parameters are not displayed in the output of SHOW PARAMETERS. For more information about viewing account parameters, see Viewing the Parameters and Their Values (in this topic).
Session parameters¶
Most parameters are session parameters, which can be set at the following levels:
- Account:
Account administrators can use the ALTER ACCOUNT command to set session parameters for the account. The values set for the account default to individual users and their sessions.
- User:
Administrators with the appropriate privileges (typically the SECURITYADMIN role) can use the ALTER USER command to override session parameters for individual users. The values set for a user default to any sessions started by the user. In addition, users can override default session parameters for themselves using ALTER USER.
- Session:
Users can use the ALTER SESSION command to explicitly set session parameters within their sessions.
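The three levels can be sketched as follows (AUTOCOMMIT, a session parameter documented below, is used for illustration; the user name is hypothetical):

```sql
-- Account level: the default for all users and sessions in the account
ALTER ACCOUNT SET AUTOCOMMIT = TRUE;

-- User level: overrides the account default for one user's sessions
ALTER USER my_user SET AUTOCOMMIT = FALSE;

-- Session level: overrides the user/account defaults for the current session
ALTER SESSION SET AUTOCOMMIT = TRUE;
```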
Note
By default, only session parameters are displayed in the output of SHOW PARAMETERS. For more information about viewing account and object parameters, see Viewing the Parameters and Their Values (in this topic).
Object parameters¶
Object parameters can be set at the following levels:
- Account:
Account administrators can use the ALTER ACCOUNT command to set object parameters for the account. The values set for the account default to the objects created in the account.
- Object:
Users with the appropriate privileges can use the corresponding CREATE <object> or ALTER <object> commands to override object parameters for an individual object.
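As a sketch, an object parameter such as DATA_RETENTION_TIME_IN_DAYS (documented below) can be set at the account level and overridden on an individual object (object names are hypothetical):

```sql
-- Account-level default for all objects in the account
ALTER ACCOUNT SET DATA_RETENTION_TIME_IN_DAYS = 30;

-- Override for a specific table, at creation time or afterwards
CREATE TABLE mytable (c1 INT) DATA_RETENTION_TIME_IN_DAYS = 90;
ALTER TABLE mytable SET DATA_RETENTION_TIME_IN_DAYS = 0;
```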
Snowflake provides the following object parameters:

| Parameter | Object Type | Notes |
|---|---|---|
| AUTO_EVENT_LOGGING | Snowflake Scripting stored procedure | |
| CATALOG | Database, Schema, Apache Iceberg™ table | |
| DATA_METRIC_SCHEDULE | Table | Specifies the schedule for running the data metric functions associated with the table. All data metric functions on the table or view follow the same schedule. |
| DATA_RETENTION_TIME_IN_DAYS | Database, Schema, Table | |
| DEFAULT_DDL_COLLATION | Database, Schema, Table | |
| ENABLE_UNREDACTED_QUERY_SYNTAX_ERROR | User | Affects the query history for queries that fail due to syntax or parsing errors. |
| EXTERNAL_VOLUME | Database, Schema, Apache Iceberg™ table | |
| LOG_LEVEL | Database, Schema, Stored Procedure, Function | |
| MAX_CONCURRENCY_LEVEL | Warehouse | |
| MAX_DATA_EXTENSION_TIME_IN_DAYS | Database, Schema, Table | |
| NETWORK_POLICY | User | This is the only user parameter that can be set by either account administrators (users with the ACCOUNTADMIN system role) or security administrators (users with the SECURITYADMIN system role). If this parameter is set on both the account and a user in the same account, the user-level network policy overrides the account-level network policy. |
| PIPE_EXECUTION_PAUSED | Schema, Pipe | |
| PREVENT_UNLOAD_TO_INTERNAL_STAGES | User | |
| REPLACE_INVALID_CHARACTERS | Database, Schema, File format, Apache Iceberg™ table | Can only be set for Iceberg tables that use an external Iceberg catalog. |
| STATEMENT_QUEUED_TIMEOUT_IN_SECONDS | Warehouse | Also a session parameter (i.e. can be set at both the object and session levels). For inheritance and override details, see the parameter description. |
| STATEMENT_TIMEOUT_IN_SECONDS | Warehouse | Also a session parameter (i.e. can be set at both the object and session levels). For inheritance and override details, see the parameter description. |
| STORAGE_SERIALIZATION_POLICY | Database, Schema, Apache Iceberg™ table | This parameter is only supported for Iceberg tables that use Snowflake as the catalog. |
| SUSPEND_TASK_AFTER_NUM_FAILURES | Database, Schema, Task | |
| TASK_AUTO_RETRY_ATTEMPTS | Database, Schema, Task | |
| TRACE_LEVEL | Database, Schema, Stored Procedure, Function | |
| USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE | Database, Schema, Task | |
| USER_TASK_MINIMUM_TRIGGER_INTERVAL_IN_SECONDS | Database, Schema, Task | |
| USER_TASK_TIMEOUT_MS | Database, Schema, Task | |
Note
By default, object parameters are not displayed in the output of SHOW PARAMETERS. For more information about viewing object parameters, see Viewing the Parameters and Their Values (in this topic).
Viewing the parameters and their values¶
Snowflake provides the SHOW PARAMETERS command, which displays a list of the parameters, along with the current and default values for each parameter. The command can be called with different options to determine the type of parameter displayed.
Viewing session and object parameters¶
By default, the command displays only session parameters:
SHOW PARAMETERS;
To display the object parameters for a specific object, include an IN clause with the object type and name. For example:
SHOW PARAMETERS IN DATABASE mydb;
SHOW PARAMETERS IN WAREHOUSE mywh;
Viewing all parameters¶
To display all parameters, including account and object parameters, include an IN ACCOUNT clause:
SHOW PARAMETERS IN ACCOUNT;
Limiting the list of parameters by name¶
The command also supports using a LIKE clause to limit the list of parameters by name. For example:
To display the session parameters whose names contain “time”:
SHOW PARAMETERS LIKE '%time%';
To display all the parameters whose names start with “time”:
SHOW PARAMETERS LIKE 'time%' IN ACCOUNT;
Note
The LIKE clause must come before the IN clause.
ABORT_DETACHED_QUERY¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Boolean
- Description:
Specifies the action that Snowflake performs for in-progress queries if connectivity is lost due to abrupt termination of a session (e.g. network outage, browser termination, service interruption).
- Values:
  - TRUE: In-progress queries are aborted 5 minutes after connectivity is lost.
  - FALSE: In-progress queries are completed.
- Default: FALSE
Note
If the user explicitly closes the connection:
  - All in-progress synchronous queries are aborted immediately, regardless of the parameter value.
  - When the value is set to FALSE, asynchronous queries continue to run until they complete, until they are canceled, or until the time limit specified for the STATEMENT_TIMEOUT_IN_SECONDS parameter expires. The default for the STATEMENT_TIMEOUT_IN_SECONDS parameter is two days.
Most queries require compute resources to execute. These resources are provided by virtual warehouses, which consume credits while running. With a value of FALSE, if the session terminates, warehouses might continue running and consuming credits to complete any queries that were in progress at the time the session terminated.
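Setting the parameter for the current session is a single statement:

```sql
-- Abort in-progress queries if this session's connectivity is lost
ALTER SESSION SET ABORT_DETACHED_QUERY = TRUE;

-- Verify the current and default values
SHOW PARAMETERS LIKE 'ABORT_DETACHED_QUERY';
```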
ALLOW_CLIENT_MFA_CACHING¶
- Type:
Account — Can only be set for Account
- Data Type:
Boolean
- Description:
Specifies whether an MFA token can be saved in the client-side operating system keystore to promote continuous, secure connectivity without users needing to respond to an MFA prompt at the start of each connection attempt to Snowflake. For details and the list of supported Snowflake-provided clients, see Using MFA token caching to minimize the number of prompts during authentication — optional.
- Values:
  - TRUE: Stores an MFA token in the client-side operating system keystore to enable the client application to use the MFA token whenever a new connection is established. While true, users are not prompted to respond to additional MFA prompts.
  - FALSE: Does not store an MFA token. Users must respond to an MFA prompt whenever the client application establishes a new connection with Snowflake.
- Default: FALSE
ALLOW_ID_TOKEN¶
- Type:
Account — Can be set only for Account
- Data Type:
Boolean
- Description:
Specifies whether a connection token can be saved in the client-side operating system keystore to promote continuous, secure connectivity without users needing to enter login credentials at the start of each connection attempt to Snowflake. For details and the list of supported Snowflake-provided clients, see Using connection caching to minimize the number of prompts for authentication — Optional.
- Values:
  - TRUE: Stores a connection token in the client-side operating system keystore to enable the client application to perform browser-based SSO without prompting users to authenticate whenever a new connection is established.
  - FALSE: Does not store a connection token. Users are prompted to authenticate whenever the client application establishes a new connection with Snowflake. SSO to Snowflake is still possible if this parameter is set to false.
- Default: FALSE
AUTO_EVENT_LOGGING¶
- Type:
Object (for Snowflake Scripting stored procedures)
- Data Type:
String (Constant)
- Description:
Controls whether Snowflake Scripting log messages and trace events are ingested automatically into the event table. To set this parameter, run the ALTER PROCEDURE command.
- Values:
  - LOGGING: Automatically adds the following additional logging information to the event table when a procedure is executed:
    - BEGIN/END of a Snowflake Scripting block.
    - BEGIN/END of a child job request.
    This information is added to the event table only if the effective LOG_LEVEL is set to TRACE for the stored procedure.
  - TRACING: Automatically adds the following additional trace information to the event table when a stored procedure is executed:
    - Exception catching.
    - Information about child job execution.
    - Child job statistics.
    - Stored procedure statistics, including execution time and input values.
    This information is added to the event table only if the effective TRACE_LEVEL is set to ALWAYS or ON_EVENT for the stored procedure.
  - ALL: Automatically adds both the logging information added for the LOGGING value and the trace information added for the TRACING value.
  - OFF: Does not automatically add logging information or trace information to the event table.
- Default: OFF
For more information about using this parameter, see Setting levels for logging, metrics, and tracing, Automatically add log messages about blocks and child jobs, and Automatically emit trace events for child jobs and exceptions.
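As a sketch (the procedure name and signature are hypothetical), the parameter is set with ALTER PROCEDURE, as noted above, and takes effect only if the effective levels permit ingestion:

```sql
-- Enable automatic logging and tracing for one stored procedure
ALTER PROCEDURE my_proc(FLOAT) SET AUTO_EVENT_LOGGING = 'ALL';

-- Ensure the effective levels allow the information to be ingested
ALTER PROCEDURE my_proc(FLOAT) SET LOG_LEVEL = 'TRACE';
ALTER PROCEDURE my_proc(FLOAT) SET TRACE_LEVEL = 'ALWAYS';
```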
AUTOCOMMIT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Boolean
- Description:
Specifies whether autocommit is enabled for the session. Autocommit determines whether a DML statement, when executed without an active transaction, is automatically committed after the statement successfully completes. For more information, see Transactions.
- Values:
  - TRUE: Autocommit is enabled.
  - FALSE: Autocommit is disabled, meaning DML statements must be explicitly committed or rolled back.
- Default: TRUE
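With autocommit disabled, DML must be wrapped in an explicit transaction (the table name is hypothetical):

```sql
ALTER SESSION SET AUTOCOMMIT = FALSE;

BEGIN;
INSERT INTO mytable (c1) VALUES (1);
COMMIT;  -- or ROLLBACK to discard the change
```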
AUTOCOMMIT_API_SUPPORTED (view-only)¶
- Type:
N/A
- Data Type:
Boolean
- Description:
For Snowflake internal use only. View-only parameter that indicates whether API support for autocommit is enabled for your account. If the value is TRUE, you can enable or disable autocommit through the APIs for the supported drivers/connectors.
BINARY_INPUT_FORMAT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String (Constant)
- Description:
The format of VARCHAR values passed as input to VARCHAR-to-BINARY conversion functions. For more information, see Binary input and output.
- Values: HEX, BASE64, or UTF8/UTF-8
- Default: HEX
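For example, with the input format set to BASE64, VARCHAR-to-BINARY conversion functions expect Base64-encoded input:

```sql
ALTER SESSION SET BINARY_INPUT_FORMAT = 'BASE64';

-- 'SGVsbG8=' is the Base64 encoding of the ASCII string 'Hello'
SELECT TO_BINARY('SGVsbG8=');
```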
BINARY_OUTPUT_FORMAT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String (Constant)
- Description:
The format for VARCHAR values returned as output by BINARY-to-VARCHAR conversion functions. For more information, see Binary input and output.
- Values: HEX or BASE64
- Default: HEX
CATALOG¶
- Type:
Object (for databases, schemas, and Apache Iceberg™ tables) — Can be set for Account » Database » Schema » Iceberg table
- Data Type:
String
- Description:
Specifies the catalog for Apache Iceberg™ tables. For more information, see the Iceberg table documentation.
- Values: SNOWFLAKE or any valid catalog integration identifier.
- Default: None
CLIENT_ENABLE_LOG_INFO_STATEMENT_PARAMETERS¶
- Type:
Session — Can be set only for Session
- Data Type:
Boolean
- Clients:
JDBC
- Description:
Enables users to log the data values bound to PreparedStatements.
To see the values, you must not only set this session-level parameter to TRUE, but also set the connection parameter named TRACING to either INFO or ALL:
  - Set TRACING to ALL to see all debugging information and all binding information.
  - Set TRACING to INFO to see the binding parameter values with less of the other debugging information.
Caution
If you bind confidential information, such as medical diagnoses or passwords, that information is logged. Snowflake recommends making sure that the log file is secure, or only using test data, when you set this parameter to TRUE.
- Values: TRUE or FALSE
- Default: FALSE
CLIENT_ENCRYPTION_KEY_SIZE¶
- Type:
Account — Can be set only for Account
- Data Type:
Integer
- Clients:
Any
- Description:
Specifies the AES encryption key size, in bits, used by Snowflake to encrypt/decrypt files stored on internal stages (for loading/unloading data) when you use the SNOWFLAKE_FULL encryption type.
- Values: 128 or 256
- Default: 128
Note
This parameter is not used for encrypting/decrypting files stored in external stages (i.e. S3 buckets or Azure containers). Encryption/decryption of these files is accomplished using an external encryption key explicitly specified in the COPY command or in the named external stage referenced in the command.
If you are using the JDBC driver and you wish to set this parameter to 256 (for strong encryption), additional JCE policy files must be installed on each client machine from which data is loaded/unloaded. For more information about installing the required files, see Java requirements for the JDBC Driver.
If you are using the Python connector (or SnowSQL) and you wish to set this parameter to 256 (for strong encryption), no additional installation or configuration tasks are required.
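Enabling the stronger key size is a single account-level statement:

```sql
-- Use 256-bit AES keys for files on internal stages
-- (JDBC clients may additionally require JCE policy files; see above)
ALTER ACCOUNT SET CLIENT_ENCRYPTION_KEY_SIZE = 256;
```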
CLIENT_MEMORY_LIMIT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Integer
- Clients:
JDBC, ODBC
- Description:
Parameter that specifies the maximum amount of memory the JDBC driver or ODBC driver should use for the result set from queries (in MB).
For the JDBC driver:
To simplify JVM memory management, the parameter sets a global maximum memory usage limit for all queries.
CLIENT_RESULT_CHUNK_SIZE specifies the maximum size of each set (or chunk) of query results to download (in MB). The driver might require additional memory to process a chunk; if so, it will adjust memory usage during runtime to process at least one thread/query. Verify that CLIENT_MEMORY_LIMIT is set significantly higher than CLIENT_RESULT_CHUNK_SIZE to ensure sufficient memory is available.
For the ODBC driver:
This parameter is supported in version 2.22.0 and higher. CLIENT_RESULT_CHUNK_SIZE is not supported.
Note
The driver will attempt to honor the parameter value, but will cap usage at 80% of your system memory.
The memory usage limit set in this parameter does not apply to any other JDBC or ODBC driver operations (e.g. connecting to the database, preparing a query, or PUT and GET statements).
- Values: Any valid number of megabytes.
- Default: 1536 (effectively 1.5 GB)
Most users should not need to set this parameter. If this parameter is not set by the user, the driver starts with the default specified above. In addition, the JDBC driver actively manages its memory conservatively to avoid using up all available memory.
CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX¶
- Type:
Session — Can be set for User » Session
- Data Type:
Boolean
- Clients:
JDBC, ODBC
- Description:
For specific ODBC functions and JDBC methods, this parameter can change the default search scope from all databases/schemas to the current database/schema. The narrower search typically returns fewer rows and executes more quickly.
For example, the getTables() JDBC method accepts a database name and schema name as arguments, and returns the names of the tables in the database and schema. If the database and schema arguments are null, then by default, the method searches all databases and all schemas in the account. Setting CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX to TRUE narrows the search to the current database and schema specified by the connection context.
In essence, setting this parameter to TRUE creates the following precedence for database and schema:
  1. Values passed as arguments to the functions/methods.
  2. Values specified in the connection context (if any).
  3. Default (all databases and all schemas).
For more details, see the information below.
This parameter applies to the following:
JDBC driver methods (for the DatabaseMetaData class):
  - getColumns
  - getCrossReference
  - getExportedKeys
  - getForeignKeys
  - getFunctions
  - getImportedKeys
  - getPrimaryKeys
  - getSchemas
  - getTables
ODBC driver functions:
  - SQLTables
  - SQLColumns
  - SQLPrimaryKeys
  - SQLForeignKeys
  - SQLGetFunctions
  - SQLProcedures
- Values:
  - TRUE: If the database and schema arguments are null, then the driver retrieves metadata for only the database and schema specified by the connection context. The interaction is described in more detail in the table below.
  - FALSE: If the database and schema arguments are null, then the driver retrieves metadata for all databases and schemas in the account.
- Default: FALSE
- Additional Notes:
The connection context refers to the current database and schema for the session, which can be set using any of the following options:
Specify the default namespace for the user who connects to Snowflake (and initiates the session). This can be set for the user through the CREATE USER or ALTER USER command, but must be set before the user connects.
Specify the database and schema when connecting to Snowflake through the driver.
Issue a USE DATABASE or USE SCHEMA command within the session.
If the database or schema was specified by more than one of these, then the most recent one applies.
When CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX is set to TRUE:

| database argument | schema argument | Database used | Schema used |
|---|---|---|---|
| Non-null | Non-null | Argument | Argument |
| Non-null | Null | Argument | All schemas |
| Null | Non-null | Connection context | Argument |
| Null | Null | Connection context | Session context |
Note
For the JDBC driver, this behavior applies to version 3.6.27 (and higher). For the ODBC driver, this behavior applies to version 2.12.96 (and higher).
If you want to search only the connection context database, but want to search all schemas within that database, see CLIENT_METADATA_USE_SESSION_DATABASE.
CLIENT_METADATA_USE_SESSION_DATABASE¶
- Type:
Session — Can be set for Session
- Data Type:
Boolean
- Clients:
JDBC
- Description:
This parameter applies to only the methods affected by CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX.
This parameter applies only when both of the following conditions are met:
  - CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX is FALSE or unset.
  - No database or schema is passed to the relevant ODBC function or JDBC method.
For specific ODBC functions and JDBC methods, this parameter can change the default search scope from all databases to the current database. The narrower search typically returns fewer rows and executes more quickly.
For more details, see the information below.
- Values:
  - TRUE: The driver searches all schemas in the connection context's database. (For more details about the connection context, see the documentation for CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX.)
  - FALSE: The driver searches all schemas in all databases.
- Default: FALSE
- Additional Notes:
When the database is null, the schema is null, and CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX is FALSE:

| CLIENT_METADATA_USE_SESSION_DATABASE | Behavior |
|---|---|
| FALSE | All schemas in all databases are searched. |
| TRUE | All schemas in the current database are searched. |
CLIENT_PREFETCH_THREADS¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Integer
- Clients:
JDBC, ODBC, Python, .NET
- Description:
Parameter that specifies the number of threads used by the client to pre-fetch large result sets. The driver will attempt to honor the parameter value, but defines the minimum and maximum values (depending on your system’s resources) to improve performance.
- Values: 1 to 10
- Default: 4
Most users should not need to set this parameter. If this parameter is not set by the user, the driver starts with the default specified above, but also actively manages its thread count conservatively to avoid using up all available memory.
CLIENT_RESULT_CHUNK_SIZE¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Integer
- Clients:
JDBC, Node.js, SQL API
- Description:
Parameter that specifies the maximum size of each set (or chunk) of query results to download (in MB). The JDBC driver downloads query results in chunks.
Also see CLIENT_MEMORY_LIMIT.
- Values: 16 to 160
- Default: 160
Most users should not need to set this parameter. If this parameter is not set by the user, the driver starts with the default specified above, but also actively manages its memory conservatively to avoid using up all available memory.
CLIENT_RESULT_COLUMN_CASE_INSENSITIVE¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Boolean
- Clients:
JDBC
- Description:
Parameter that indicates whether to match column names case-insensitively in ResultSet.get* methods in JDBC.
- Values:
  - TRUE: matches column names case-insensitively.
  - FALSE: matches column names case-sensitively.
- Default: FALSE
CLIENT_SESSION_KEEP_ALIVE¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Boolean
- Clients:
JDBC, ODBC, Python, Node.js
- Description:
Parameter that indicates whether to force a user to log in again after a period of inactivity in the session.
- Values:
  - TRUE: Snowflake keeps the session active indefinitely as long as the connection is active, even if there is no activity from the user.
  - FALSE: The user must log in again after four hours of inactivity.
- Default: FALSE
Note
Currently, the parameter only takes effect when the session is initiated. You can modify the parameter value within the session by executing an ALTER SESSION command, but doing so does not affect the session keep-alive functionality, such as extending the session. For information about setting the parameter at the session level, see the client documentation.
CLIENT_SESSION_KEEP_ALIVE_HEARTBEAT_FREQUENCY¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Integer
- Clients:
SnowSQL, JDBC, Python, Node.js
- Description:
Number of seconds between client attempts to update the token for the session.
- Values: 900 to 3600
- Default: 3600
CLIENT_TIMESTAMP_TYPE_MAPPING¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String (Constant)
- Clients:
Any
- Description:
Specifies the TIMESTAMP_* variation to use when binding timestamp variables for JDBC or ODBC applications that use the bind API to load data.
- Values: TIMESTAMP_LTZ or TIMESTAMP_NTZ
- Default: TIMESTAMP_LTZ
CORTEX_ENABLED_CROSS_REGION¶
- Type:
Account — Can be set only for Account
- Data Type:
String
- Description:
Specifies the regions where an inference request may be processed if the request cannot be processed in the region where it was originally placed. Specifying DISABLED disables cross-region inference. For examples and details, see Cross-region inference.
- Values:
This parameter can be set to one of the following:
  - DISABLED
  - ANY_REGION
  - A comma-separated list including one or more of the following values: AWS_APJ, AWS_EU, AWS_US, AZURE_US
| Value | Behavior |
|---|---|
| DISABLED | Inference requests are only handled in the region where the request is placed. |
| ANY_REGION | Inference requests may be routed to any region that supports cross-region inference (listed in this table) and that has availability, including the region where the request is placed. |
| AWS_APJ | Inference requests are handled in either the region where the request is placed or AWS ap-northeast-1. |
| AWS_EU | Inference requests are handled in either the region where the request is placed or AWS eu-central-1. |
| AWS_US | Inference requests are handled in either the region where the request is placed or one of the following AWS regions physically located within the United States: us-west-2, us-east-1. |
| AZURE_US | Inference requests are handled in either the region where the request is placed or Azure eastus2. |

- Default: DISABLED
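For example, to allow routing to any available region, or to restrict routing to U.S. regions only:

```sql
-- Allow Cortex inference requests to be routed anywhere with availability
ALTER ACCOUNT SET CORTEX_ENABLED_CROSS_REGION = 'ANY_REGION';

-- Or restrict routing to AWS and Azure regions in the United States
ALTER ACCOUNT SET CORTEX_ENABLED_CROSS_REGION = 'AWS_US,AZURE_US';
```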
CSV_TIMESTAMP_FORMAT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String
- Description:
Specifies the format for TIMESTAMP values in CSV files downloaded from Snowsight.
If this parameter is not set, TIMESTAMP_LTZ_OUTPUT_FORMAT is used for TIMESTAMP_LTZ values, TIMESTAMP_TZ_OUTPUT_FORMAT for TIMESTAMP_TZ values, and TIMESTAMP_NTZ_OUTPUT_FORMAT for TIMESTAMP_NTZ values.
For more information, see Date and time input and output formats or Download your query results.
- Values:
Any valid, supported timestamp format.
- Default:
No value.
DATA_METRIC_SCHEDULE¶
- Type:
Object (for tables)
- Data Type:
String
- Description:
Specifies the schedule to run the data metric functions associated to the table.
- Values:
The schedule can be based on a defined number of minutes, a cron expression, or a DML event on the table that does not involve reclustering. For details, see the parameter description.
- Default:
No value.
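As a sketch (the table name and schedule values are illustrative), the schedule is set with ALTER TABLE:

```sql
-- Run the table's data metric functions every 5 minutes
ALTER TABLE mytable SET DATA_METRIC_SCHEDULE = '5 MINUTE';

-- Or on a cron schedule: daily at 06:00 UTC
ALTER TABLE mytable SET DATA_METRIC_SCHEDULE = 'USING CRON 0 6 * * * UTC';
```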
DATA_RETENTION_TIME_IN_DAYS¶
- Type:
Object (for databases, schemas, and tables) — Can be set for Account » Database » Schema » Table
- Data Type:
Integer
- Description:
Number of days for which Snowflake retains historical data for performing Time Travel actions (SELECT, CLONE, UNDROP) on the object. A value of 0 effectively disables Time Travel for the specified database, schema, or table. For more information, see Understanding & using Time Travel.
- Values:
  - 0 or 1 (for Standard Edition)
  - 0 to 90 (for Enterprise Edition or higher)
- Default: 1
DATE_INPUT_FORMAT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String
- Description:
Specifies the input format for the DATE data type. For more information, see Date and time input and output formats.
- Values:
Any valid, supported date format or AUTO
(AUTO specifies that Snowflake attempts to automatically detect the format of dates stored in the system during the session)
- Default: AUTO
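For example, switching the input format changes how date strings are parsed:

```sql
-- Parse dates as day/month/year instead of auto-detecting the format
ALTER SESSION SET DATE_INPUT_FORMAT = 'DD/MM/YYYY';

SELECT '25/12/2024'::DATE;
```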
DATE_OUTPUT_FORMAT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String
- Description:
Specifies the display format for the DATE data type. For more information, see Date and time input and output formats.
- Values: Any valid, supported date format
- Default: YYYY-MM-DD
DEFAULT_DDL_COLLATION¶
- Type:
Object (for databases, schemas, and tables) — Can be set for Account » Database » Schema » Table
- Data Type:
String
- Description:
Sets the default collation used for the following DDL operations:
ALTER TABLE … ADD COLUMN
Setting this parameter forces all subsequently-created columns in the affected objects (table, schema, database, or account) to have the specified collation as the default, unless the collation for the column is explicitly defined in the DDL.
For example, if DEFAULT_DDL_COLLATION = 'en-ci', then the following two statements are equivalent:
create table test(c1 integer, c2 string, c3 string collate 'en-cs');
create table test(c1 integer, c2 string collate 'en-ci', c3 string collate 'en-cs');
- Values:
Any valid, supported collation specification.
- Default:
Empty string
Note
To set the default collation for the account, use the following command:
ALTER ACCOUNT SET DEFAULT_DDL_COLLATION = '<collation_specification>';
The default collation for table columns can be set at the table, schema, or database level during creation or any time afterwards. For example:
ALTER TABLE mytable SET DEFAULT_DDL_COLLATION = 'en-ci';
ENABLE_IDENTIFIER_FIRST_LOGIN¶
- Type:
Account — Can be set only for Account
- Data Type:
Boolean
- Description:
Determines the login flow for users. When enabled, Snowflake prompts users for their username or email address before presenting authentication methods. For details, see Identifier-first login.
- Values:
  - TRUE: Snowflake uses an identifier-first login flow to authenticate users.
  - FALSE: Snowflake presents all possible login options, even if those options don't apply to a particular user.
- Default: FALSE
ENABLE_INTERNAL_STAGES_PRIVATELINK¶
- Type:
Account — Can be set only for Account
- Data Type:
Boolean
- Description:
Specifies whether the SYSTEM$GET_PRIVATELINK_CONFIG function returns the private-internal-stages key in the query result. The corresponding value in the query result is used during the configuration process for private connectivity to internal stages.
- Values:
  - TRUE: Returns the private-internal-stages key and value in the query result.
  - FALSE: Does not return the private-internal-stages key and value in the query result.
- Default: FALSE
ENABLE_TRI_SECRET_AND_REKEY_OPT_OUT_FOR_IMAGE_REPOSITORY¶
- Type:
Account — Can be set only for Account
- Data Type:
Boolean
- Description:
Specifies the image repository's choice to opt out of Tri-Secret Secure and periodic rekeying.
- Values:
  - TRUE: Opts out of Tri-Secret Secure and periodic rekeying for image repositories.
  - FALSE: Disallows the creation of an image repository for accounts that use Tri-Secret Secure and periodic rekeying. Similarly, disallows enabling Tri-Secret Secure and periodic rekeying for accounts that have enabled an image repository.
- Default: FALSE
ENABLE_TRI_SECRET_AND_REKEY_OPT_OUT_FOR_SPCS_BLOCK_STORAGE¶
- Type:
Account — Can be set only for Account
- Data Type:
Boolean
- Description:
Specifies the Snowpark Container Services block storage volume's choice to opt out of Tri-Secret Secure and periodic rekeying.
- Values:
  - TRUE: Opts out of Tri-Secret Secure and periodic rekeying for Snowpark Container Services block storage volumes.
  - FALSE: Disallows the creation of block storage volumes for accounts that use Tri-Secret Secure and periodic rekeying. Similarly, disallows enabling Tri-Secret Secure and periodic rekeying for accounts that have enabled block storage volumes.
- Default: FALSE
ENABLE_UNHANDLED_EXCEPTIONS_REPORTING¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Boolean
- Description:
Specifies whether Snowflake may capture, in an event table, log messages or trace event data for unhandled exceptions in procedure or UDF handler code. For more information, see Capturing messages from unhandled exceptions.
- Values:
  - TRUE: Data about unhandled exceptions is captured as log or trace data if logging and tracing are enabled.
  - FALSE: Data about unhandled exceptions is not captured.
- Default: TRUE
ENABLE_UNLOAD_PHYSICAL_TYPE_OPTIMIZATION¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Boolean
- Description:
Specifies whether to set the schema for unloaded Parquet files based on the logical column data types (i.e. the types in the unload SQL query or source table) or on the unloaded column values (i.e. the smallest data types and precision that support the values in the output columns of the unload SQL statement or source table).
- Values:
TRUE: The schema of unloaded Parquet data files is determined by the column values in the unload SQL query or source table. Snowflake optimizes table columns by setting the smallest precision that accepts all of the values. The unloader follows this pattern when writing values to Parquet files. The data type and precision of an output column are set to the smallest data type and precision that support its values in the unload SQL statement or source table. Accept this setting for better performance and smaller data files.
FALSE: The schema is determined by the logical column data types. Set this value for a consistent output file schema.
- Default:
TRUE
ENABLE_UNREDACTED_QUERY_SYNTAX_ERROR¶
- Type:
User — Can be set for Account » User
- Data Type:
Boolean
- Description:
Controls whether query text is redacted if a SQL query fails due to a syntax or parsing error. If FALSE, the content of a failed query is redacted in the views, pages, and functions that provide a query history.
Only users with a role that is granted or inherits the AUDIT privilege can set the ENABLE_UNREDACTED_QUERY_SYNTAX_ERROR parameter.
When using the ALTER USER command to set the parameter to TRUE for a particular user, modify the user that you want to see the query text, not the user who executed the query (if those are different users).
- Values:
TRUE: Disables the redaction of query text for queries that fail due to a syntax or parsing error.
FALSE: Redacts the contents of a query from the views, pages, and functions that provide a query history when a query fails due to a syntax or parsing error.
- Default:
FALSE
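As a sketch, enabling unredacted query text for an auditing user involves two steps: granting the AUDIT privilege and setting the parameter on the user who should see the text (the role and user names below are illustrative, not from the source):

```sql
-- Grant the AUDIT privilege to a role held by the auditing user
-- (role and user names are hypothetical examples).
GRANT AUDIT ON ACCOUNT TO ROLE auditor_role;

-- Let the user AUDITOR see the unredacted text of queries that
-- failed with syntax or parsing errors.
ALTER USER auditor SET ENABLE_UNREDACTED_QUERY_SYNTAX_ERROR = TRUE;
```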
ENFORCE_NETWORK_RULES_FOR_INTERNAL_STAGES¶
- Type:
Account — Can be set only for Account
- Data Type:
Boolean
- Description:
Specifies whether a network policy that uses network rules can restrict access to AWS internal stages.
This parameter has no effect on network policies that do not use network rules.
This account-level parameter affects both account-level and user-level network policies.
For details about using network policies and network rules to restrict access to AWS internal stages, including the use of this parameter, see Protecting internal stages on AWS.
- Values:
TRUE: Allows network policies that use network rules to restrict access to AWS internal stages. The network rule must also use the appropriate MODE and TYPE to restrict access to the internal stage.
FALSE: Network policies never restrict access to internal stages.
- Default:
FALSE
ERROR_ON_NONDETERMINISTIC_MERGE¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Boolean
- Description:
Specifies whether to return an error when the MERGE command is used to update or delete a target row that joins multiple source rows and the system cannot determine the action to perform on the target row.
- Values:
TRUE: An error is returned that includes values from one of the target rows that caused the error.
FALSE: No error is returned and the merge completes successfully, but the results of the merge are nondeterministic.
- Default:
TRUE
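The nondeterministic case can be reproduced with a MERGE in which one target row joins two source rows (table and column names below are illustrative):

```sql
-- Two source rows match the same target row, making the update ambiguous.
CREATE OR REPLACE TABLE target (id INT, v INT);
CREATE OR REPLACE TABLE source (id INT, v INT);
INSERT INTO target VALUES (1, 10);
INSERT INTO source VALUES (1, 20), (1, 30);

-- With ERROR_ON_NONDETERMINISTIC_MERGE = TRUE (the default), this
-- statement fails; with FALSE, it succeeds but target.v may end up
-- as either 20 or 30.
MERGE INTO target t
  USING source s ON t.id = s.id
  WHEN MATCHED THEN UPDATE SET t.v = s.v;
```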
ERROR_ON_NONDETERMINISTIC_UPDATE¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Boolean
- Description:
Specifies whether to return an error when the UPDATE command is used to update a target row that joins multiple source rows and the system cannot determine the action to perform on the target row.
- Values:
TRUE: An error is returned that includes values from one of the target rows that caused the error.
FALSE: No error is returned and the update completes, but the results of the update are nondeterministic.
- Default:
FALSE
EVENT_TABLE¶
- Type:
Account — Can be set only for Account
- Data Type:
String
- Description:
Specifies the name of the event table for logging messages from stored procedures and UDFs in this account.
- Values:
Any existing event table created by executing the CREATE EVENT TABLE command.
- Default:
None
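Setting an active event table for the account takes two statements (the database, schema, and table names are illustrative):

```sql
-- Create an event table and make it the active event table
-- for the account. The parameter value is the table's
-- fully qualified name, as a string.
CREATE EVENT TABLE my_db.my_schema.my_events;
ALTER ACCOUNT SET EVENT_TABLE = 'my_db.my_schema.my_events';
```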
EXTERNAL_OAUTH_ADD_PRIVILEGED_ROLES_TO_BLOCKED_LIST¶
- Type:
Account — Can be set only for Account
- Data Type:
Boolean
- Description:
Determines whether the ACCOUNTADMIN, ORGADMIN, and SECURITYADMIN roles can be used as the primary role when creating a Snowflake session based on the access token from the External OAuth authorization server.
- Values:
TRUE: Adds the ACCOUNTADMIN, ORGADMIN, and SECURITYADMIN roles to the EXTERNAL_OAUTH_BLOCKED_ROLES_LIST property of the External OAuth security integration, which means these roles cannot be used as the primary role when creating a Snowflake session using External OAuth authentication.
FALSE: Removes the ACCOUNTADMIN, ORGADMIN, and SECURITYADMIN roles from the list of blocked roles defined by the EXTERNAL_OAUTH_BLOCKED_ROLES_LIST property of the External OAuth security integration.
- Default:
TRUE
EXTERNAL_VOLUME¶
- Type:
Object (for databases, schemas, and Apache Iceberg™ tables) — Can be set for Account » Database » Schema » Iceberg table
- Data Type:
String
- Description:
Specifies the external volume for Apache Iceberg™ tables. For more information, see the Iceberg table documentation.
- Values:
Any valid external volume identifier.
- Default:
None
GEOGRAPHY_OUTPUT_FORMAT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String (Constant)
- Description:
Display format for GEOGRAPHY values.
For EWKT and EWKB, the SRID is always 4326 in the output. Refer to the note on EWKT and EWKB handling.
- Values:
GeoJSON, WKT, WKB, EWKT, or EWKB
- Default:
GeoJSON
GEOMETRY_OUTPUT_FORMAT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String (Constant)
- Description:
Display format for GEOMETRY values.
- Values:
GeoJSON, WKT, WKB, EWKT, or EWKB
- Default:
GeoJSON
HYBRID_TABLE_LOCK_TIMEOUT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Integer
- Description:
Number of seconds to wait while trying to acquire row-level locks on a hybrid table, before timing out and aborting the statement.
- Values:
0 to any integer (no limit). A value of 0 disables lock waiting (that is, the statement must acquire the lock immediately or abort). This value specifies how long the statement will wait for all of the row-level locks it needs to acquire after each execution attempt (1 hour by default). If the statement cannot acquire all of the locks, it can be retried, and the same waiting period is applied.
- Default:
3600 (1 hour)
See also LOCK_TIMEOUT.
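For example, a session that should fail fast on hybrid table row-lock contention might lower the timeout well below the 1-hour default:

```sql
-- Wait at most 10 seconds for row-level locks on hybrid tables
-- in this session, instead of the default 3600 seconds.
ALTER SESSION SET HYBRID_TABLE_LOCK_TIMEOUT = 10;

-- Restore the default behavior.
ALTER SESSION UNSET HYBRID_TABLE_LOCK_TIMEOUT;
```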
INITIAL_REPLICATION_SIZE_LIMIT_IN_TB¶
- Type:
Account — Can be set only for Account
- Data Type:
Number
- Description:
Sets the maximum estimated size limit for the initial replication of a primary database to a secondary database (in TB). Set this parameter on any account that stores a secondary database. This size limit helps prevent accounts from accidentally incurring large database replication charges.
To remove the size limit, set the value to 0.0.
Note that there is currently no default size limit applied to subsequent refreshes of a secondary database.
- Values:
0.0 and above with a scale of at least 1 (e.g. 20.5, 32.25, 33.333).
- Default:
10.0
JDBC_ENABLE_PUT_GET¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Boolean
- Description:
Specifies whether to allow PUT and GET commands access to local file systems.
- Values:
TRUE: JDBC enables PUT and GET commands.
FALSE: JDBC disables PUT and GET commands.
- Default:
TRUE
JDBC_TREAT_DECIMAL_AS_INT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Boolean
- Description:
Specifies how JDBC processes columns that have a scale of zero (0).
- Values:
TRUE: JDBC processes a column whose scale is zero as BIGINT.
FALSE: JDBC processes a column whose scale is zero as DECIMAL.
- Default:
TRUE
JDBC_TREAT_TIMESTAMP_NTZ_AS_UTC¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Boolean
- Description:
Specifies how JDBC processes TIMESTAMP_NTZ values.
By default, when the JDBC driver fetches a value of type TIMESTAMP_NTZ from Snowflake, it converts the value to “wallclock” time using the client JVM timezone.
Users who want to keep the UTC timezone for the conversion can set this parameter to TRUE.
This parameter applies only to the JDBC driver.
- Values:
TRUE: The driver uses UTC to get the TIMESTAMP_NTZ value in “wallclock” time.
FALSE: The driver uses the client JVM’s current timezone to get the TIMESTAMP_NTZ value in “wallclock” time.
- Default:
FALSE
JDBC_USE_SESSION_TIMEZONE¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Boolean
- Description:
Specifies whether the JDBC Driver uses the time zone of the JVM or the time zone of the session (specified by the TIMEZONE parameter) for the getDate(), getTime(), and getTimestamp() methods of the ResultSet class.
- Values:
TRUE: The JDBC Driver uses the time zone of the session.
FALSE: The JDBC Driver uses the time zone of the JVM.
- Default:
TRUE
JSON_INDENT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Integer
- Description:
Specifies the number of blank spaces to indent each new element in JSON output in the session. Also specifies whether to insert newline characters after each element.
- Values:
0 to 16 (a value of 0 returns compact output by removing all blank spaces and newline characters from the output)
- Default:
2
Note
This parameter does not affect JSON unloaded from a table into a file using the COPY INTO <location> command. The command always unloads JSON data in the NDJSON format:
Each record from the table separated by a newline character.
Within each record, compact formatting (i.e. no spaces or newline characters).
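For interactive session output, the parameter is set with ALTER SESSION; for example:

```sql
-- Return compact JSON output for the session
-- (0 removes all blank spaces and newline characters).
ALTER SESSION SET JSON_INDENT = 0;

-- Or indent each nested element by 4 spaces instead of the default 2.
ALTER SESSION SET JSON_INDENT = 4;
```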
JS_TREAT_INTEGER_AS_BIGINT¶
- Type:
Session — Can be set for Session
- Data Type:
Boolean
- Description:
Specifies how the Snowflake Node.js Driver processes numeric columns that have a scale of zero (0), for example INTEGER or NUMBER(p, 0).
- Values:
TRUE: JavaScript processes a column whose scale is zero as Bigint.
FALSE: JavaScript processes a column whose scale is zero as Number.
- Default:
FALSE
Note
By default, Snowflake INTEGER columns (including BIGINT, NUMBER(p, 0), etc.) are converted to JavaScript’s Number data type. However, the largest legal Snowflake integer values are larger than the largest legal JavaScript Number values. To convert Snowflake INTEGER columns to JavaScript Bigint, which can store larger values than JavaScript Number, set the session parameter JS_TREAT_INTEGER_AS_BIGINT.
For examples of how to use this parameter, see Fetching integer data types as Bigint.
LOCK_TIMEOUT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Integer
- Description:
Number of seconds to wait while trying to lock a resource, before timing out and aborting the statement.
- Values:
0 to any integer (no limit). A value of 0 disables lock waiting (the statement must acquire the lock immediately or abort). If multiple resources need to be locked by the statement, the timeout applies separately to each lock attempt.
- Default:
43200 (12 hours)
See also HYBRID_TABLE_LOCK_TIMEOUT.
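For example, to abort statements that cannot acquire a resource lock within 5 minutes instead of waiting the default 12 hours:

```sql
-- Fail statements that wait more than 300 seconds for a lock.
ALTER SESSION SET LOCK_TIMEOUT = 300;
```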
LOG_LEVEL¶
- Type:
Session — Can be set for Account » User » Session
- Type:
Object (for databases, schemas, stored procedures, and UDFs) — Can be set for Account » Database » Schema » Procedure and Account » Database » Schema » Function
- Data Type:
String (Constant)
- Description:
Specifies the severity level of messages that should be ingested and made available in the active event table. Messages at the specified level (and at more severe levels) are ingested. For more information about log levels, see Setting levels for logging, metrics, and tracing.
- Values:
TRACE
DEBUG
INFO
WARN
ERROR
FATAL
OFF
- Default:
OFF
- Additional Notes:
The following table lists the levels of messages ingested when you set the LOG_LEVEL parameter to a level.
LOG_LEVEL Parameter Setting | Levels of Messages Ingested
---|---
TRACE | TRACE, DEBUG, INFO, WARN, ERROR, FATAL
DEBUG | DEBUG, INFO, WARN, ERROR, FATAL
INFO | INFO, WARN, ERROR, FATAL
WARN | WARN, ERROR, FATAL
ERROR | ERROR, FATAL
FATAL | ERROR (only for Java UDFs, Java UDTFs, and Java and Scala stored procedures; for more information, see Understanding logging, metrics, and tracing levels), FATAL
If this parameter is set in both the session and the object (or schema, database, or account), the more verbose value is used. See Understanding how Snowflake determines level in effect.
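Because the parameter can be set at both session and object levels, a common pattern is a quiet account-wide setting with verbose logging on one object under investigation (the procedure name and signature below are illustrative):

```sql
-- Ingest WARN and more severe messages account-wide, but keep
-- verbose DEBUG logging for one stored procedure being debugged.
ALTER ACCOUNT SET LOG_LEVEL = 'WARN';
ALTER PROCEDURE my_proc(VARCHAR) SET LOG_LEVEL = 'DEBUG';
```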
MAX_CONCURRENCY_LEVEL¶
- Type:
Object (for warehouses) — Can be set for Account » Warehouse
- Data Type:
Number
- Description:
Specifies the concurrency level for SQL statements (i.e. queries and DML) executed by a warehouse. When the level is reached, the operation performed depends on whether the warehouse is a single-cluster or multi-cluster warehouse:
Single-cluster or multi-cluster (in Maximized mode): Statements are queued until already-allocated resources are freed or additional resources are provisioned, which can be accomplished by increasing the size of the warehouse.
Multi-cluster (in Auto-scale mode): Additional clusters are started.
MAX_CONCURRENCY_LEVEL can be used in conjunction with the STATEMENT_QUEUED_TIMEOUT_IN_SECONDS parameter to ensure a warehouse is never backlogged.
In general, it limits the number of statements that can be executed concurrently by a warehouse cluster, but there are exceptions. In the following cases, the actual number of statements executed concurrently by a warehouse might be more or less than the specified level:
Smaller, more basic statements: More statements might execute concurrently because small statements generally execute on a subset of the available compute resources in a warehouse. This means they only count as a fraction towards the concurrency level.
Larger, more complex statements: Fewer statements might execute concurrently.
- Default:
8
Tip
This value is a default only and can be changed at any time:
Lowering the concurrency level for a warehouse can limit the number of concurrent queries running in a warehouse. When fewer queries are competing for the warehouse’s resources at a given time, a query can potentially be given more resources, which might result in faster query performance, particularly for a large/complex and multi-statement query.
Raising the concurrency level for a warehouse might decrease the compute resources that are available for a statement; however, it does not always limit the total number of concurrent queries that can be executed by the warehouse, nor does it necessarily impact total warehouse performance, which depends on the nature of the queries being executed.
Note that, as described earlier, this parameter impacts multi-cluster warehouses (in Auto-scale mode) because Snowflake automatically starts a new cluster within the multi-cluster warehouse to avoid queuing. Thus, lowering the concurrency level for a multi-cluster warehouse (in Auto-scale mode) potentially increases the number of active clusters at any time.
Also, remember that Snowflake automatically allocates resources for each statement when it is submitted and the allocated amount is dictated by the individual requirements of the statement. Based on this, and through observations of user query patterns over time, we’ve selected a default that balances performance and resource usage.
As such, before changing the default, we recommend that you test the change by adjusting the parameter in small increments and observing the impact against a representative set of your queries.
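Adjusting and inspecting the level on a specific warehouse looks like this (the warehouse name is illustrative):

```sql
-- Lower the concurrency level so each query on this warehouse
-- gets a larger share of compute resources.
ALTER WAREHOUSE reporting_wh SET MAX_CONCURRENCY_LEVEL = 4;

-- Check the current setting.
SHOW PARAMETERS LIKE 'MAX_CONCURRENCY_LEVEL' IN WAREHOUSE reporting_wh;
```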
MAX_DATA_EXTENSION_TIME_IN_DAYS¶
- Type:
Object (for databases, schemas, and tables) — Can be set for Account » Database » Schema » Table
- Data Type:
Integer
- Description:
Maximum number of days Snowflake can extend the data retention period for tables to prevent streams on the tables from becoming stale. By default, if the DATA_RETENTION_TIME_IN_DAYS setting for a source table is less than 14 days, and a stream has not been consumed, Snowflake temporarily extends this period to the stream’s offset, up to a maximum of 14 days, regardless of the Snowflake Edition for your account. The MAX_DATA_EXTENSION_TIME_IN_DAYS parameter enables you to limit this automatic extension period to control storage costs for data retention or for compliance reasons.
This parameter can be set at the account, database, schema, and table levels. Note that setting the parameter at the account or schema level only affects tables for which the parameter has not already been explicitly set at a lower level (e.g. at the table level by the table owner). A value of 0 effectively disables the automatic extension for the specified database, schema, or table. For more information about streams and staleness, see Introduction to Streams.
- Values:
0 to 90 (i.e. 90 days) — a value of 0 disables the automatic extension of the data retention period. To increase the maximum value for tables in your account, contact Snowflake Support.
- Default:
14
Note
This parameter can cause data to be retained longer than the default data retention. Before increasing it, confirm that the new value fits your compliance requirements.
Table retention is not extended for streams on shared tables. If you share a table, ensure that you set the table retention time long enough for your data consumer to consume the stream. If a provider shares a table with, for example, 7 days’ retention and keeps the 14-day default extension, the stream will be stale after 14 days in the provider account and after 7 days in the consumer account.
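For instance, the extension can be capped for one table and disabled for an entire schema (object names are illustrative):

```sql
-- Cap automatic retention extension at 7 days for one table.
ALTER TABLE my_db.public.orders SET MAX_DATA_EXTENSION_TIME_IN_DAYS = 7;

-- Disable automatic extension for all tables in a staging schema
-- that have not set the parameter explicitly.
ALTER SCHEMA my_db.staging SET MAX_DATA_EXTENSION_TIME_IN_DAYS = 0;
```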
METRIC_LEVEL¶
- Type:
Session — Can be set for Account » User » Session
- Type:
Object (for databases, schemas, stored procedures, and UDFs) — Can be set for Account » Database » Schema » Procedure and Account » Database » Schema » Function
- Data Type:
String (Constant)
- Description:
Controls how metrics data is ingested into the event table. For more information about metric levels, see Setting levels for logging, metrics, and tracing.
- Values:
ALL: All metrics data will be recorded in the event table.
NONE: No metrics data will be recorded in the event table.
- Default:
NONE
MULTI_STATEMENT_COUNT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Integer (Constant)
- Clients:
SQL API, JDBC, .NET, ODBC
- Description:
Number of statements to execute when using the multi-statement capability.
- Values:
0: Variable number of statements.
1: One statement.
More than 1: When MULTI_STATEMENT_COUNT is set as a session parameter, you can specify the exact number of statements to execute.
Negative numbers are not permitted.
- Default:
1
MIN_DATA_RETENTION_TIME_IN_DAYS¶
- Type:
Account — Can be set only for Account
- Data Type:
Integer
- Description:
Minimum number of days for which Snowflake retains historical data for performing Time Travel actions (SELECT, CLONE, UNDROP) on an object. If a minimum number of days for data retention is set on an account, the data retention period for an object is determined by MAX(DATA_RETENTION_TIME_IN_DAYS, MIN_DATA_RETENTION_TIME_IN_DAYS).
For more information, see Understanding & using Time Travel.
- Values:
0 or 1 (for Standard Edition)
0 to 90 (for Enterprise Edition or higher)
- Default:
0
Note
This parameter only applies to permanent tables and does not apply to the following objects:
Transient tables
Temporary tables
External tables
Materialized views
Streams
This parameter can only be set and unset by account administrators (i.e. users with the ACCOUNTADMIN role or another role that is granted the ACCOUNTADMIN role).
Setting the minimum data retention time does not alter any existing DATA_RETENTION_TIME_IN_DAYS parameter value set on databases, schemas, or tables. The effective retention time of a database, schema, or table is MAX(DATA_RETENTION_TIME_IN_DAYS, MIN_DATA_RETENTION_TIME_IN_DAYS).
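For example, to guarantee at least 30 days of Time Travel history for permanent tables regardless of lower DATA_RETENTION_TIME_IN_DAYS settings on individual objects:

```sql
-- Must be run by an account administrator.
USE ROLE ACCOUNTADMIN;
ALTER ACCOUNT SET MIN_DATA_RETENTION_TIME_IN_DAYS = 30;
```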
NETWORK_POLICY¶
- Type:
Account — Can be set only for Account (can be set by account administrators and security administrators)
- Type:
Object (for users) — Can be set for Account » User
- Data Type:
String
- Description:
Specifies the network policy to enforce for your account. Network policies enable restricting access to your account based on users’ IP address. For more details, see Controlling network traffic with network policies.
- Values:
Any existing network policy (created using CREATE NETWORK POLICY)
- Default:
None
Note
This is the only account parameter that can be set by security administrators (i.e. users with the SECURITYADMIN system role) or higher.
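A minimal sketch of creating and activating an account-level policy from a network rule (the rule name, policy name, and IP range are illustrative):

```sql
-- Define which client IPs are allowed in.
CREATE NETWORK RULE corp_ips
  TYPE = IPV4
  MODE = INGRESS
  VALUE_LIST = ('192.168.1.0/24');

-- Wrap the rule in a policy and enforce it for the whole account.
CREATE NETWORK POLICY corp_policy ALLOWED_NETWORK_RULE_LIST = ('corp_ips');
ALTER ACCOUNT SET NETWORK_POLICY = 'corp_policy';
```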
NOORDER_SEQUENCE_AS_DEFAULT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Boolean
- Description:
Specifies whether the ORDER or NOORDER property is set by default when you create a new sequence or add a new table column.
The ORDER and NOORDER properties determine whether or not the values are generated for the sequence or auto-incremented column in increasing or decreasing order.
- Values:
TRUE: When you create a new sequence or add a new table column, the NOORDER property is set by default.
NOORDER specifies that the values are not guaranteed to be in increasing order. For example, if a sequence has START 1 INCREMENT 2, the generated values might be 1, 3, 101, 5, 103, etc.
NOORDER can improve performance when multiple INSERT operations are performed concurrently (for example, when multiple clients are executing multiple INSERT statements).
FALSE: When you create a new sequence or add a new table column, the ORDER property is set by default.
ORDER specifies that the values generated for a sequence or auto-incremented column are in increasing order (or, if the interval is a negative value, in decreasing order). For example, if a sequence or auto-incremented column has START 1 INCREMENT 2, the generated values might be 1, 3, 5, 7, 9, etc.
If you set this parameter, the value that you set overrides the value in the 2024_01 behavior change bundle.
- Default:
TRUE
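The parameter only supplies the default; an explicit property on the sequence always wins (sequence names are illustrative):

```sql
-- Make new sequences and auto-increment columns default to ORDER
-- in this session.
ALTER SESSION SET NOORDER_SEQUENCE_AS_DEFAULT = FALSE;
CREATE OR REPLACE SEQUENCE seq_ordered START 1 INCREMENT 2;

-- An explicit NOORDER overrides the session default.
CREATE OR REPLACE SEQUENCE seq_concurrent START 1 INCREMENT 2 NOORDER;
```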
ODBC_TREAT_DECIMAL_AS_INT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Boolean
- Description:
Specifies how ODBC processes columns that have a scale of zero (0).
- Values:
TRUE: ODBC processes a column whose scale is zero as BIGINT.
FALSE: ODBC processes a column whose scale is zero as DECIMAL.
- Default:
FALSE
OAUTH_ADD_PRIVILEGED_ROLES_TO_BLOCKED_LIST¶
- Type:
Account — Can be set only for Account
- Data Type:
Boolean
- Description:
Determines whether the ACCOUNTADMIN, ORGADMIN, and SECURITYADMIN roles can be used as the primary role when creating a Snowflake session based on the access token from Snowflake’s authorization server.
- Values:
TRUE: Adds the ACCOUNTADMIN, ORGADMIN, and SECURITYADMIN roles to the BLOCKED_ROLES_LIST property of the Snowflake OAuth security integration, which means these roles cannot be used as the primary role when creating a Snowflake session using Snowflake OAuth.
FALSE: Removes the ACCOUNTADMIN, ORGADMIN, and SECURITYADMIN roles from the list of blocked roles defined by the BLOCKED_ROLES_LIST property of the Snowflake OAuth security integration.
- Default:
TRUE
PERIODIC_DATA_REKEYING¶
- Type:
Account — Can be set only for Account
- Data Type:
Boolean
- Description:
This parameter only applies to Enterprise Edition (or higher). It enables/disables re-encryption of table data with new keys on a yearly basis to provide additional levels of data protection.
You can enable and disable rekeying at any time. Enabling/disabling rekeying does not result in gaps in your encrypted data:
If rekeying is enabled for a period of time and then disabled, all data already tagged for rekeying is rekeyed, but no further data is rekeyed until you re-enable it.
If rekeying is re-enabled, Snowflake automatically rekeys all data that has keys which meet the criteria (i.e. key is older than one year).
For more information about rekeying of encrypted data, see Understanding Encryption Key Management in Snowflake.
- Values:
TRUE: Data is rekeyed after one year has passed since the data was last encrypted. Rekeying occurs in the background, so no downtime is experienced and the affected data/table is always available.
FALSE: Data is not rekeyed.
- Default:
FALSE
Note
There are charges associated with data rekeying because, after data is rekeyed, the old data (with the previous key encryption) is maintained in Fail-safe for the standard time period (7 days). For this reason, periodic rekeying is disabled by default. To enable periodic rekeying, you must explicitly enable it.
Also, Fail-safe charges for rekeying are not listed individually in your monthly statement; they are included in the Fail-safe total for your account each month.
For more information about Fail-safe, see Understanding and viewing Fail-safe.
PIPE_EXECUTION_PAUSED¶
- Type:
Object — Can be set for Account » Schema » Pipe
- Data Type:
Boolean
- Description:
Specifies whether to pause a running pipe, primarily in preparation for transferring ownership of the pipe to a different role:
An account administrator (user with the ACCOUNTADMIN role) can set this parameter at the account level, effectively pausing or resuming all pipes in the account.
A user with the MODIFY privilege on a schema can pause or resume all pipes in the schema.
The pipe owner can set this parameter for a pipe.
Note that setting the parameter at the account or schema level only affects pipes for which the parameter has not already been explicitly set at a lower level (e.g. at the pipe level by the pipe owner).
This enables the practical use case in which an account administrator can pause all pipes at the account level, while a pipe owner can still have an individual pipe running.
- Values:
TRUE: Pauses the pipe. When the parameter is set to this value, the SYSTEM$PIPE_STATUS function shows the executionState as PAUSED. Note that the pipe owner can continue to submit files to a paused pipe; however, the files are not processed until the pipe is resumed.
FALSE: Resumes the pipe, but only if ownership of the pipe has not been transferred while it was paused. When the parameter is set to this value, the SYSTEM$PIPE_STATUS function shows the executionState as RUNNING.
If ownership of the pipe was transferred to another role after the pipe was paused, this parameter cannot be used to resume the pipe. Instead, use the SYSTEM$PIPE_FORCE_RESUME function to explicitly force the pipe to resume.
This enables the new owner to use SYSTEM$PIPE_STATUS to evaluate the pipe status (e.g. determine how many files are waiting to be loaded) before resuming the pipe.
- Default:
FALSE (pipes are running by default)
Note
In general, pipes do not need to be paused, except when transferring ownership.
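The ownership-transfer sequence described above can be sketched as follows (the pipe and role names are illustrative):

```sql
-- Pause the pipe before transferring ownership.
ALTER PIPE my_pipe SET PIPE_EXECUTION_PAUSED = TRUE;
GRANT OWNERSHIP ON PIPE my_pipe TO ROLE new_owner_role;

-- Because ownership changed while the pipe was paused, the new
-- owner resumes it with the force-resume function rather than by
-- setting PIPE_EXECUTION_PAUSED = FALSE.
SELECT SYSTEM$PIPE_FORCE_RESUME('my_pipe');
```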
PREVENT_UNLOAD_TO_INLINE_URL¶
- Type:
Account — Can be set only for Account
- Data Type:
Boolean
- Description:
Specifies whether to prevent ad hoc data unload operations to external cloud storage locations (i.e. COPY INTO <location> statements that specify the cloud storage URL and access settings directly in the statement). For an example, see Unloading data from a table directly to files in an external location.
- Values:
TRUE: COPY INTO <location> statements must reference either a named internal (Snowflake) or external stage, or an internal user or table stage. A named external stage must store the cloud storage URL and access settings in its definition.
FALSE: Ad hoc data unload operations to external cloud storage locations are permitted.
- Default:
FALSE
PREVENT_UNLOAD_TO_INTERNAL_STAGES¶
- Type:
User — Can be set for Account » User
- Data Type:
Boolean
- Description:
Specifies whether to prevent data unload operations to internal (Snowflake) stages using COPY INTO <location> statements.
- Values:
TRUE: Unloading data from Snowflake tables to any internal stage, including user stages, table stages, or named internal stages, is prevented.
FALSE: Unloading data to internal stages is permitted, limited only by the default restrictions of the stage type:
The current user can only unload data to their own user stage.
Users can only unload data to table stages when their active role has the OWNERSHIP privilege on the table.
Users can only unload data to named internal stages when their active role has the WRITE privilege on the stage.
- Default:
FALSE
QUERY_TAG¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String (up to 2000 characters)
- Description:
Optional string that can be used to tag queries and other SQL statements executed within a session. The tags are displayed in the output of the QUERY_HISTORY and QUERY_HISTORY_BY_* functions.
- Default:
None
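A typical use is tagging all statements in a batch job and then filtering query history by the tag (the tag value is illustrative):

```sql
-- Tag every statement executed in this session.
ALTER SESSION SET QUERY_TAG = 'nightly-etl';

-- Later, find the tagged statements in query history.
SELECT query_id, query_text
  FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY())
  WHERE query_tag = 'nightly-etl';
```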
QUOTED_IDENTIFIERS_IGNORE_CASE¶
- Type:
Session — Can be set for Account » User » Session
Object — Can be set for Account » Database » Schema » Table
- Data Type:
Boolean
- Description:
Specifies whether letters in double-quoted object identifiers are stored and resolved as uppercase letters. By default, Snowflake preserves the case of alphabetic characters when storing and resolving double-quoted identifiers (see Identifier resolution). You can use this parameter in situations in which third-party applications always use double quotes around identifiers.
Note
Changing this parameter from the default value can affect your ability to find objects that were previously created with double-quoted mixed case identifiers. Refer to Impact of changing the parameter.
When set on a table, schema, or database, the setting only affects the evaluation of table names in the bodies of views and user-defined functions (UDFs). If your account uses double-quoted identifiers that should be treated as case-insensitive and you plan to share a view or UDF with an account that treats double-quoted identifiers as case-sensitive, you can set this on the view or UDF that you plan to share. This allows the other account to resolve the table names in the view or UDF correctly.
- Values:
TRUE: Letters in double-quoted identifiers are stored and resolved as uppercase letters.
FALSE: The case of letters in double-quoted identifiers is preserved. Snowflake resolves and stores the identifiers in the specified case.
For more information, see Identifier resolution.
- Default:
FALSE
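A short sketch of the effect (the table and column names are illustrative):

```sql
-- With the default (FALSE), a double-quoted mixed-case identifier
-- is stored exactly as written and must be quoted to match.
-- With TRUE, the same identifier is stored and resolved as uppercase.
ALTER SESSION SET QUOTED_IDENTIFIERS_IGNORE_CASE = TRUE;
CREATE OR REPLACE TABLE t1 ("columnId" INT);  -- stored as COLUMNID

-- Resolves successfully, because the identifier was uppercased.
SELECT columnid FROM t1;
```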
REPLACE_INVALID_CHARACTERS¶
- Type:
Object — Can be set for Account » Database » Schema » Iceberg table
- Data Type:
Boolean
- Description:
Specifies whether to replace invalid UTF-8 characters with the Unicode replacement character (�) in query results for Apache Iceberg™ tables that use an external catalog.
- Values:
TRUE: Snowflake replaces invalid UTF-8 characters with the Unicode replacement character.
FALSE: Snowflake leaves invalid UTF-8 characters unchanged. Snowflake returns a user error message if it encounters an invalid UTF-8 character.
- Default:
FALSE
REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_CREATION¶
- Type:
Account — Can be set only for Account
- Data Type:
Boolean
- Description:
Specifies whether to require a storage integration object as cloud credentials when creating a named external stage (using CREATE STAGE) to access a private cloud storage location.
- Values:
TRUE: Creating an external stage to access a private cloud storage location requires referencing a storage integration object as cloud credentials.
FALSE: Creating an external stage does not require referencing a storage integration object. Users can instead reference explicit cloud provider credentials, such as secret keys or access tokens, if they have been configured for the storage location.
- Default:
FALSE
REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_OPERATION¶
- Type:
Account — Can be set only for Account
- Data Type:
Boolean
- Description:
Specifies whether to require using a named external stage that references a storage integration object as cloud credentials when loading data from or unloading data to a private cloud storage location.
- Values:
TRUE: Loading data from or unloading data to a private cloud storage location requires using a named external stage that references a storage integration object; specifying a named external stage that references explicit cloud provider credentials, such as secret keys or access tokens, produces a user error.
FALSE: Users can load data from or unload data to a private cloud storage location using a named external stage that references explicit cloud provider credentials.
If PREVENT_UNLOAD_TO_INLINE_URL is FALSE, then users can specify the explicit cloud provider credentials directly in the COPY statement.
- Default:
FALSE
ROWS_PER_RESULTSET¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Number
- Clients:
SQL API
- Description:
Specifies the maximum number of rows returned in a result set.
- Values:
0 to any number (i.e. no limit); a value of 0 specifies no maximum.
- Default:
0
S3_STAGE_VPCE_DNS_NAME¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String
- Description:
Specifies the DNS name of an Amazon S3 interface endpoint. Requests sent to the internal stage of an account via AWS PrivateLink for Amazon S3 use this endpoint to connect.
For more information, see Accessing Internal stages with dedicated interface endpoints.
- Values:
Valid region-scoped DNS name of an S3 interface endpoint.
The standard format begins with an asterisk (*) and ends with vpce.amazonaws.com (e.g. *.vpce-sd98fs0d9f8g.s3.us-west-2.vpce.amazonaws.com). For more details about obtaining this value, refer to AWS configuration.
Alternative formats include bucket.vpce-xxxxxxxx.s3.<region>.vpce.amazonaws.com and vpce-xxxxxxxx.s3.<region>.vpce.amazonaws.com.
- Default:
Empty string
SAML_IDENTITY_PROVIDER¶
- Type:
Account — Can be set only for Account
- Data Type:
JSON
- Description:
This deprecated parameter enables federated authentication. It accepts a JSON object, enclosed in single quotes, with the following fields:
{ "certificate": "", "issuer": "", "ssoUrl": "", "type" : "", "label" : "" }
Where:
certificate
Specifies the certificate (generated by the IdP) that verifies communication between the IdP and Snowflake.
issuer
Indicates the Issuer/EntityID of the IdP.
Optional.
For information on how to obtain this value in Okta and AD FS, see Migrating to a SAML2 security integration.
ssoUrl
Specifies the URL endpoint (provided by the IdP) where Snowflake sends the SAML requests.
type
Specifies the type of IdP used for federated authentication ("OKTA", "ADFS", "Custom").
label
Specifies the button text for the IdP in the Snowflake login page. The default label is Single Sign On. If you change the default label, the label you specify can only contain alphanumeric characters (i.e. special characters and blank spaces are not currently supported).
Note that, if the "type" field is "Okta", a value for the label field does not need to be specified because Snowflake displays the Okta logo in the button.
For more information, including examples of setting the parameter, see Migrating to a SAML2 security integration.
- Default:
None
SEARCH_PATH¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String
- Description:
Specifies the path to search to resolve unqualified object names in queries. For more information, see Name resolution in queries.
- Values:
Comma-separated list of identifiers. An identifier can be a fully or partially qualified schema name.
- Default:
$current, $public
For more information about the default settings, see default search path.
Note
You cannot set this parameter within a client connection string, such as a JDBC or ODBC connection string. You must establish a session before setting a search path.
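A brief sketch of setting and inspecting the search path in an established session (the shared schema name is hypothetical):

```sql
-- Resolve unqualified names first in the current schema, then PUBLIC,
-- then a hypothetical shared utility schema:
ALTER SESSION SET SEARCH_PATH = '$current, $public, util_db.tools';

-- Confirm the session-level setting:
SHOW PARAMETERS LIKE 'SEARCH_PATH' IN SESSION;
```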
SIMULATED_DATA_SHARING_CONSUMER¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String
- Description:
Specifies the name of a consumer account to simulate for testing/validating shared data, particularly shared secure views. When this parameter is set in a session, shared views return rows as if executed in the specified consumer account rather than the provider account.
Note
Simulations only succeed when the current role is the owner of the view. If the current role does not own the view, simulations fail with the error:
Shared view consumer simulation requires that the executing role owns the view.
For more information, see About Secure Data Sharing and Create and configure shares.
- Default:
None
Important
This is a session parameter, which means it can be set at the account level; however, it only applies to testing queries on shared views. Because the parameter affects all queries in a session, it should never be set at the account level.
SSO_LOGIN_PAGE¶
- Type:
Account — Can be set only for Account
- Data Type:
Boolean
- Description:
This deprecated parameter disables preview mode for testing SSO (after enabling federated authentication) before rolling it out to users:
- Values:
TRUE: Preview mode is disabled and users see the button for Snowflake-initiated SSO for your identity provider (as specified in SAML_IDENTITY_PROVIDER) in the Snowflake main login page.
FALSE: Preview mode is enabled and SSO can be tested using the following URL:
If your account is in US West: https://<account_identifier>.snowflakecomputing.com/console/login?fedpreview=true
If your account is in any other region: https://<account_identifier>.<region_id>.snowflakecomputing.com/console/login?fedpreview=true
- Default:
FALSE
STATEMENT_QUEUED_TIMEOUT_IN_SECONDS¶
- Type:
Session and Object (for warehouses)
Can be set for Account » User » Session; can also be set for individual warehouses
- Data Type:
Number
- Description:
Amount of time, in seconds, a SQL statement (query, DDL, DML, etc.) remains queued for a warehouse before it is canceled by the system. This parameter can be used in conjunction with the MAX_CONCURRENCY_LEVEL parameter to ensure a warehouse is never backlogged.
The parameter can be set within the session hierarchy. It can also be set for a warehouse to control the queue timeout for all SQL statements processed by the warehouse. When the parameter is set for both a warehouse and a session, the lowest non-zero value is enforced. For example:
A warehouse has a queued timeout of 120 seconds.
The queued timeout for the session is set to 60 seconds.
The session timeout takes precedence (i.e. any statement submitted in the session is canceled after being queued for longer than 60 seconds).
Note
For runs of tasks, the USER_TASK_TIMEOUT_MS task parameter takes precedence over the STATEMENT_QUEUED_TIMEOUT_IN_SECONDS setting.
When comparing the values of these two parameters, note that STATEMENT_QUEUED_TIMEOUT_IN_SECONDS is set in units of seconds, while USER_TASK_TIMEOUT_MS uses units of milliseconds.
For more information about USER_TASK_TIMEOUT_MS, see the Optional Parameters section of CREATE TASK.
- Values:
0 to any number (i.e. no limit); a value of 0 specifies that no timeout is enforced. A statement remains queued as long as the queue persists.
- Default:
0 (i.e. no timeout)
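A sketch of the precedence example above, expressed as statements (the warehouse name is hypothetical):

```sql
-- Warehouse-level queue timeout of 120 seconds:
ALTER WAREHOUSE my_wh SET STATEMENT_QUEUED_TIMEOUT_IN_SECONDS = 120;

-- Session-level queue timeout of 60 seconds; because the lowest non-zero
-- value is enforced, statements in this session are canceled after
-- queuing for 60 seconds:
ALTER SESSION SET STATEMENT_QUEUED_TIMEOUT_IN_SECONDS = 60;
```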
STATEMENT_TIMEOUT_IN_SECONDS¶
- Type:
Session and Object (for warehouses)
Can be set for Account » User » Session; can also be set for individual warehouses
- Data Type:
Number
- Description:
Amount of time, in seconds, after which a running SQL statement (query, DDL, DML, etc.) is canceled by the system.
The parameter can be set within the session hierarchy. It can also be set for an individual warehouse to control the runtime for all SQL statements processed by the warehouse. When the parameter is set for both a warehouse and a session, the lowest non-zero value is enforced. For example:
A warehouse has a timeout of 1000 seconds.
The timeout for the session is set to 500 seconds.
The session timeout takes precedence (i.e. any statement submitted in the session is canceled after running for longer than 500 seconds).
Note
For runs of tasks:
If a task relies on a virtual warehouse for its compute resources and STATEMENT_TIMEOUT_IN_SECONDS is set at the warehouse level, then the effective timeout is the smaller of the following parameters:
STATEMENT_TIMEOUT_IN_SECONDS
USER_TASK_TIMEOUT_MS (parameter set on the task)
Otherwise, the USER_TASK_TIMEOUT_MS task parameter takes precedence over the STATEMENT_TIMEOUT_IN_SECONDS setting for task runs.
When comparing the values of these two parameters, note that STATEMENT_TIMEOUT_IN_SECONDS is set in units of seconds, while USER_TASK_TIMEOUT_MS uses units of milliseconds.
For more information about USER_TASK_TIMEOUT_MS, see the Optional Parameters section of CREATE TASK.
- Values:
0 to 604800 (i.e. 7 days); a value of 0 specifies that the maximum timeout value is enforced.
- Default:
172800 (i.e. 2 days)
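The warehouse/session precedence example above can be sketched as follows (the warehouse name is hypothetical):

```sql
-- Warehouse-level statement timeout of 1000 seconds:
ALTER WAREHOUSE my_wh SET STATEMENT_TIMEOUT_IN_SECONDS = 1000;

-- Session-level timeout of 500 seconds; the lowest non-zero value wins,
-- so statements in this session are canceled after running 500 seconds:
ALTER SESSION SET STATEMENT_TIMEOUT_IN_SECONDS = 500;
```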
STORAGE_SERIALIZATION_POLICY¶
- Type:
Object (for databases, schemas, and Apache Iceberg™ tables) — Can be set for Account » Database » Schema » Iceberg table
- Data Type:
String (Constant)
- Description:
Specifies the storage serialization policy for Snowflake-managed Apache Iceberg™ tables.
- Values:
COMPATIBLE: Snowflake performs encoding and compression that ensures interoperability with third-party compute engines.
OPTIMIZED: Snowflake performs encoding and compression that ensures the best table performance within Snowflake.
- Default:
OPTIMIZED
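As an illustrative sketch (the database and schema names are hypothetical), the policy could be set at the schema level so that new Snowflake-managed Iceberg tables in the schema favor interoperability:

```sql
-- Favor compatibility with third-party engines for tables in this schema:
ALTER SCHEMA my_db.lake SET STORAGE_SERIALIZATION_POLICY = COMPATIBLE;
```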
STRICT_JSON_OUTPUT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Boolean
- Description:
This parameter specifies whether JSON output in a session is compatible with the general standard (as described by http://json.org).
By design, Snowflake allows JSON input that contains non-standard values; however, these non-standard values might result in Snowflake outputting JSON that is incompatible with other platforms and languages. This parameter, when enabled, ensures that Snowflake outputs valid/compatible JSON.
- Values:
TRUE: Strict JSON output is enabled, enforcing the following behavior:
Missing and undefined values in input are mapped to JSON NULL.
Non-finite numeric values in input (Infinity, -Infinity, NaN, etc.) are mapped to strings with valid JavaScript representations. This enables compatibility with JavaScript and also allows conversion of these values back to numeric values.
FALSE: Strict JSON output is not enabled.
- Default:
FALSE
For example, with the parameter set to TRUE, non-standard input values such as NaN and Infinity are output as quoted strings with valid JavaScript representations, whereas with the parameter set to FALSE they are output unchanged.
SUSPEND_TASK_AFTER_NUM_FAILURES¶
- Type:
Object (for databases, schemas, and tasks) — Can be set for Account » Database » Schema » Task
- Data Type:
Integer
- Description:
Number of consecutive failed task runs after which a standalone task or task graph root task is suspended automatically. Failed task runs include runs in which the SQL code in the task body either produces a user error or times out. Task runs that are skipped, canceled, or that fail due to a system error are considered indeterminate and are not included in the count of failed task runs.
When the parameter is set to 0, the failed task is not automatically suspended.
When the parameter is set to a value greater than 0, the following behavior applies to runs of standalone tasks or task graph root tasks:
A standalone task is automatically suspended after the specified number of consecutive task runs either fail or time out.
A root task is automatically suspended after the run of any single task in a task graph fails or times out the specified number of times in consecutive runs.
The default value for the parameter is set to 10, which means that the task is automatically suspended after 10 consecutive failed task runs.
When you explicitly set the parameter value at the account, database, or schema level, the change is applied to tasks contained in the modified object during their next scheduled run (including any child task in a task graph run in progress).
Suspending a standalone task resets its count of failed task runs. Suspending the root task of a task graph resets the count for each task in the task graph.
- Values:
0 or greater (no upper limit)
- Default:
10
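An illustrative sketch of tightening the threshold for a single task (the task name is hypothetical):

```sql
-- Suspend the task after 3 consecutive failed or timed-out runs
-- instead of the default 10:
ALTER TASK my_task SET SUSPEND_TASK_AFTER_NUM_FAILURES = 3;
```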
TASK_AUTO_RETRY_ATTEMPTS¶
- Type:
Object (for databases, schemas, and tasks) — Can be set for Account » Database » Schema » Task
- Data Type:
Integer
- Description:
Specifies the number of automatic task graph retry attempts. If any task graphs complete in a FAILED state, Snowflake can automatically retry the task graphs from the last task in the graph that failed. Failed task runs include runs in which the SQL code in the task body either produces a user error or times out. Task runs that are skipped or canceled are considered indeterminate and are not included in the count of failed task runs.
The automatic task graph retry is disabled by default. To enable this feature, set TASK_AUTO_RETRY_ATTEMPTS to a value greater than 0.
When you set the parameter value at the account, database, or schema level, the change is applied to tasks contained in the modified object during their next scheduled run.
- Values:
0 or greater (no upper limit)
- Default:
0
TIMESTAMP_DAY_IS_ALWAYS_24H¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Boolean
- Description:
Specifies whether the DATEADD function (and its aliases) always consider a day to be exactly 24 hours for expressions that span multiple days.
- Values:
TRUE: A day is always exactly 24 hours.
FALSE: A day is not always 24 hours.
- Default:
FALSE
Important
If set to TRUE, the actual time of day might not be preserved when daylight saving time (DST) is in effect. For example:
alter session set TIMESTAMP_DAY_IS_ALWAYS_24H = true;
-- With DST beginning on 2018-03-11 at 2 AM, America/Los_Angeles time zone
select dateadd(day, 1, '2018-03-10 09:00:00'::TIMESTAMP_LTZ), dateadd(day, 1, '2018-11-03 09:00:00'::TIMESTAMP_LTZ);
+-------------------------------------------------------+-------------------------------------------------------+
| DATEADD(DAY, 1, '2018-03-10 09:00:00'::TIMESTAMP_LTZ) | DATEADD(DAY, 1, '2018-11-03 09:00:00'::TIMESTAMP_LTZ) |
|-------------------------------------------------------+-------------------------------------------------------|
| 2018-03-11 10:00:00.000 -0700 | 2018-11-04 08:00:00.000 -0800 |
+-------------------------------------------------------+-------------------------------------------------------+
alter session set TIMESTAMP_DAY_IS_ALWAYS_24H = false;
select dateadd(day, 1, '2018-03-10 09:00:00'::TIMESTAMP_LTZ), dateadd(day, 1, '2018-11-03 09:00:00'::TIMESTAMP_LTZ);
+-------------------------------------------------------+-------------------------------------------------------+
| DATEADD(DAY, 1, '2018-03-10 09:00:00'::TIMESTAMP_LTZ) | DATEADD(DAY, 1, '2018-11-03 09:00:00'::TIMESTAMP_LTZ) |
|-------------------------------------------------------+-------------------------------------------------------|
| 2018-03-11 09:00:00.000 -0700 | 2018-11-04 09:00:00.000 -0800 |
+-------------------------------------------------------+-------------------------------------------------------+
TIMESTAMP_INPUT_FORMAT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String
- Description:
Specifies the input format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats.
- Values:
Any valid, supported timestamp format or AUTO
(AUTO specifies that Snowflake attempts to automatically detect the format of timestamps stored in the system during the session)
- Default:
AUTO
TIMESTAMP_LTZ_OUTPUT_FORMAT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String
- Description:
Specifies the display format for the TIMESTAMP_LTZ data type. If CSV_TIMESTAMP_FORMAT is not set, TIMESTAMP_LTZ_OUTPUT_FORMAT is used when downloading CSV files. For more information, see Date and time input and output formats.
- Values:
Any valid, supported timestamp format
- Default:
None
TIMESTAMP_NTZ_OUTPUT_FORMAT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String
- Description:
Specifies the display format for the TIMESTAMP_NTZ data type. If CSV_TIMESTAMP_FORMAT is not set, TIMESTAMP_NTZ_OUTPUT_FORMAT is used when downloading CSV files. For more information, see Date and time input and output formats.
- Values:
Any valid, supported timestamp format
- Default:
YYYY-MM-DD HH24:MI:SS.FF3
TIMESTAMP_OUTPUT_FORMAT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String
- Description:
Specifies the display format for the TIMESTAMP data type alias. For more information, see Date and time input and output formats.
- Values:
Any valid, supported timestamp format
- Default:
YYYY-MM-DD HH24:MI:SS.FF3 TZHTZM
TIMESTAMP_TYPE_MAPPING¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String
- Description:
Specifies the TIMESTAMP_* variation that the TIMESTAMP data type alias maps to.
- Values:
TIMESTAMP_LTZ, TIMESTAMP_NTZ, or TIMESTAMP_TZ
- Default:
TIMESTAMP_NTZ
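A short sketch of how the mapping affects column declarations (the table name is hypothetical):

```sql
-- Map the TIMESTAMP alias to TIMESTAMP_LTZ for this session:
ALTER SESSION SET TIMESTAMP_TYPE_MAPPING = 'TIMESTAMP_LTZ';

-- A column declared as TIMESTAMP in this session is created as TIMESTAMP_LTZ:
CREATE TABLE ts_example (ts TIMESTAMP);
```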
TIMESTAMP_TZ_OUTPUT_FORMAT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String
- Description:
Specifies the display format for the TIMESTAMP_TZ data type. If CSV_TIMESTAMP_FORMAT is not set, TIMESTAMP_TZ_OUTPUT_FORMAT is used when downloading CSV files. For more information, see Date and time input and output formats.
- Values:
Any valid, supported timestamp format
- Default:
None
TIMEZONE¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String (Constant)
- Description:
Specifies the time zone for the session.
- Values:
You can specify a time zone name or a link name from release 2021a of the IANA Time Zone Database (e.g. America/Los_Angeles, Europe/London, UTC, Etc/GMT).
- Default:
America/Los_Angeles
Note
Time zone names are case-sensitive and must be enclosed in single quotes (e.g. 'UTC').
Snowflake does not support the majority of time zone abbreviations (e.g. PDT, EST) because a given abbreviation might refer to one of several different time zones. For example, CST might refer to Central Standard Time in North America (UTC-6), Cuba Standard Time (UTC-5), or China Standard Time (UTC+8).
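For example, switching a session to UTC:

```sql
-- Time zone names are case-sensitive and must be single-quoted:
ALTER SESSION SET TIMEZONE = 'UTC';

-- TIMESTAMP_LTZ values are now displayed in UTC:
SELECT CURRENT_TIMESTAMP();
```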
TIME_INPUT_FORMAT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String
- Description:
Specifies the input format for the TIME data type. For more information, see Date and time input and output formats.
- Values:
Any valid, supported time format or AUTO
(AUTO specifies that Snowflake attempts to automatically detect the format of times stored in the system during the session)
- Default:
AUTO
TIME_OUTPUT_FORMAT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String
- Description:
Specifies the display format for the TIME data type. For more information, see Date and time input and output formats.
- Values:
Any valid, supported time format
- Default:
HH24:MI:SS
TRACE_LEVEL¶
- Type:
Session — Can be set for Account » User » Session
- Type:
Object (for databases, schemas, stored procedures, and UDFs) — Can be set for Account » Database » Schema » Procedure and Account » Database » Schema » Function
- Data Type:
String (Constant)
- Description:
Controls how trace events are ingested into the event table. For more information about trace levels, see Setting levels for logging, metrics, and tracing.
- Values:
ALWAYS: All spans and trace events will be recorded in the event table.
ON_EVENT: Trace events will be recorded in the event table only when your stored procedures or UDFs explicitly add events.
OFF: No spans or trace events will be recorded in the event table.
- Default:
OFF
Note
When tracing events, you must also set the LOG_LEVEL parameter to one of its supported values.
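A sketch combining the two settings for a session (the LOG_LEVEL value shown is one illustrative choice among its supported values):

```sql
-- Record trace events only when procedures/UDFs add them explicitly:
ALTER SESSION SET TRACE_LEVEL = 'ON_EVENT';

-- Tracing also requires LOG_LEVEL to be set to a supported value:
ALTER SESSION SET LOG_LEVEL = 'INFO';
```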
TRANSACTION_ABORT_ON_ERROR¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Boolean
- Description:
Specifies the action to perform when a statement issued within a non-autocommit transaction returns with an error.
- Values:
TRUE: The non-autocommit transaction is aborted. All statements issued inside that transaction will fail until a commit or rollback statement is executed to close that transaction.
FALSE: The non-autocommit transaction is not aborted.
- Default:
FALSE
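An illustrative sketch of the TRUE behavior inside an explicit transaction:

```sql
ALTER SESSION SET TRANSACTION_ABORT_ON_ERROR = TRUE;

BEGIN;
-- If any statement here returns an error, every subsequent statement in
-- this transaction also fails until an explicit COMMIT or ROLLBACK
-- closes the transaction.
ROLLBACK;
```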
TRANSACTION_DEFAULT_ISOLATION_LEVEL¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String
- Description:
Specifies the isolation level for transactions in the user session.
- Values:
READ COMMITTED (only currently-supported value)
- Default:
READ COMMITTED
TWO_DIGIT_CENTURY_START¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Number
- Description:
Specifies the “century start” year for 2-digit years (i.e. the earliest year such dates can represent). This parameter prevents ambiguous dates when importing or converting data with the YY date format component (i.e. years represented as 2 digits).
- Values:
1900 to 2100 (any value outside of this range returns an error)
- Default:
1970
For example, with the default value of 1970, 2-digit years 70 through 99 resolve to 1970 through 1999, and 2-digit years 00 through 69 resolve to 2000 through 2069. Raising the parameter value shifts this window forward accordingly.
UNSUPPORTED_DDL_ACTION¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
String (Constant)
- Description:
Determines if an unsupported (i.e. non-default) value specified for a constraint property returns an error.
- Values:
IGNORE: Snowflake does not return an error for unsupported values.
FAIL: Snowflake returns an error for unsupported values.
- Default:
IGNORE
Important
This parameter does not determine whether the constraint is created. Snowflake does not create constraints using unsupported values, regardless of how this parameter is set.
For more information, see Constraint properties.
USE_CACHED_RESULT¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Boolean
- Description:
Specifies whether to reuse persisted query results, if available, when a matching query is submitted.
- Values:
TRUE: When a query is submitted, Snowflake checks for matching query results for previously-executed queries and, if a matching result exists, uses the result instead of executing the query. This can help reduce query time because Snowflake retrieves the result directly from the cache.
FALSE: Snowflake executes each query when submitted, regardless of whether a matching query result exists.
- Default:
TRUE
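For example, disabling result reuse is a common step when timing queries:

```sql
-- Force each query to execute rather than return a cached result,
-- e.g. when benchmarking warehouse performance:
ALTER SESSION SET USE_CACHED_RESULT = FALSE;
```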
USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE¶
- Type:
Object (for databases, schemas, and tasks) — Can be set for Account » Database » Schema » Task
- Data Type:
String
- Description:
Specifies the size of the compute resources to provision for the first run of the task, before a task history is available for Snowflake to determine an ideal size. Once a task has successfully completed a few runs, Snowflake ignores this parameter setting. If the task history is unavailable for a given task, the compute resources revert to this initial size.
Note
This parameter applies only to serverless tasks.
The size is equivalent to the compute resources available when creating a warehouse. If the parameter is omitted, the first runs of the task are executed using a medium-sized (MEDIUM) warehouse.
You can change the initial size for individual tasks (using ALTER TASK) after the task is created but before it has run successfully once. Changing the parameter after the first run of this task starts has no effect on the compute resources for current or future task runs.
Note that suspending and resuming a task does not remove the task history used to size the compute resources. The task history is only removed if the task is recreated (using the CREATE OR REPLACE TASK syntax).
- Values:
Any traditional warehouse size: SMALL, MEDIUM, LARGE, etc., with a maximum size of XXLARGE.
- Default:
MEDIUM
USER_TASK_MINIMUM_TRIGGER_INTERVAL_IN_SECONDS¶
- Type:
Object (for databases, schemas, and tasks) — Can be set for Account » Database » Schema » Task
- Data Type:
Number
- Description:
Defines how frequently a triggered task can execute in seconds. If data changes occur more often than the specified minimum, changes will be grouped and processed together.
Even if this value is set to more than 12 hours, the task still runs every 12 hours.
- Values:
10 to 604800 (1 week)
- Default:
30
USER_TASK_TIMEOUT_MS¶
- Type:
Object (for databases, schemas, and tasks) — Can be set for Account » Database » Schema » Task
- Data Type:
Number
- Description:
Specifies the time limit on a single run of the task before it times out (in milliseconds).
Note
Before you increase the time limit for tasks significantly, consider whether the SQL statements in the task definitions could be optimized (either by rewriting the statements or using stored procedures) or whether the warehouse size for tasks with user-managed compute resources should be increased.
In some situations, the STATEMENT_TIMEOUT_IN_SECONDS parameter has a higher precedence than USER_TASK_TIMEOUT_MS. For details, see STATEMENT_TIMEOUT_IN_SECONDS.
- Values:
0 to 86400000 (1 day)
- Default:
3600000 (1 hour)
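A short sketch of raising the limit for one task (the task name is hypothetical; remember the value is in milliseconds):

```sql
-- Allow a single run of this task up to 5 minutes (300000 ms)
-- before it times out:
ALTER TASK my_task SET USER_TASK_TIMEOUT_MS = 300000;
```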
WEEK_OF_YEAR_POLICY¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Number
- Description:
Specifies how the weeks in a given year are computed.
- Values:
0: The semantics used are equivalent to the ISO semantics, in which a week belongs to a given year if at least 4 days of that week are in that year.
1: January 1 is included in the first week of the year and December 31 is included in the last week of the year.
- Default:
0 (i.e. ISO-like behavior)
Tip
1 is the most common value, based on feedback we’ve received. For more information, including examples, see Calendar weeks and weekdays.
WEEK_START¶
- Type:
Session — Can be set for Account » User » Session
- Data Type:
Number
- Description:
Specifies the first day of the week (used by week-related date functions).
- Values:
0: Legacy Snowflake behavior is used (i.e. ISO-like semantics).
1 (Monday) to 7 (Sunday): All the week-related functions use weeks that start on the specified day of the week.
- Default:
0 (i.e. legacy Snowflake behavior)
Tip
1 is the most common value, based on feedback we’ve received. For more information, including examples, see Calendar weeks and weekdays.
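For example, starting weeks on Monday for the session:

```sql
-- All week-related functions now treat Monday as the first day of the week:
ALTER SESSION SET WEEK_START = 1;

-- 2024-01-01 is a Monday; week-related functions reflect the new start day:
SELECT DAYOFWEEK('2024-01-01'::DATE);
```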