CREATE DATABASE¶

Creates a new database in the system.

In addition, this command can be used to:

  • Create a clone of an existing database, either at its current state or at a specific time/point in the past (using Time Travel).

  • Create a database from a share provided by another Snowflake account, or from a listing.

  • Create a replica of an existing primary database (i.e. a secondary database).

See also:

ALTER DATABASE, DESCRIBE DATABASE, DROP DATABASE, SHOW DATABASES, UNDROP DATABASE

DESCRIBE SHARE, SHOW SHARES, CREATE LISTING

Syntax¶

Standard Database

CREATE [ OR REPLACE ] [ TRANSIENT ] DATABASE [ IF NOT EXISTS ] <name>
    [ CLONE <source_db>
        [ { AT | BEFORE } ( { TIMESTAMP => <timestamp> | OFFSET => <time_difference> | STATEMENT => <id> } ) ]
        [ IGNORE TABLES WITH INSUFFICIENT DATA RETENTION ]
        [ IGNORE HYBRID TABLES ] ]
    [ DATA_RETENTION_TIME_IN_DAYS = <integer> ]
    [ MAX_DATA_EXTENSION_TIME_IN_DAYS = <integer> ]
    [ EXTERNAL_VOLUME = <external_volume_name> ]
    [ CATALOG = <catalog_integration_name> ]
    [ REPLACE_INVALID_CHARACTERS = { TRUE | FALSE } ]
    [ DEFAULT_DDL_COLLATION = '<collation_specification>' ]
    [ STORAGE_SERIALIZATION_POLICY = { COMPATIBLE | OPTIMIZED } ]
    [ COMMENT = '<string_literal>' ]
    [ [ WITH ] TAG ( <tag_name> = '<tag_value>' [ , <tag_name> = '<tag_value>' , ... ] ) ]

Standard Database (from a listing)

CREATE DATABASE <name> FROM LISTING '<listing_global_name>'

Shared Database (from a Share)

CREATE DATABASE <name> FROM SHARE <provider_account>.<share_name>

Secondary Database (Database Replication)

CREATE DATABASE <name>
    AS REPLICA OF <account_identifier>.<primary_db_name>
    [ DATA_RETENTION_TIME_IN_DAYS = <integer> ]

Required parameters¶

name

Specifies the identifier for the database; must be unique for your account.

In addition, the identifier must start with an alphabetic character and cannot contain spaces or special characters unless the entire identifier string is enclosed in double quotes (e.g. "My object"). Identifiers enclosed in double quotes are also case-sensitive.

For more details, see Identifier requirements.
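For example, a minimal sketch (using hypothetical names) showing an unquoted identifier versus a double-quoted, case-sensitive identifier:

CREATE DATABASE mytestdb;          -- resolved case-insensitively as MYTESTDB

CREATE DATABASE "My Database";     -- may contain spaces; must always be referenced as "My Database"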

Important

As a best practice for Database Replication and Failover, we recommend giving each secondary database the same name as its primary database. This practice supports referencing fully-qualified objects (i.e. '<db>.<schema>.<object>') by other objects in the same database, such as querying a fully-qualified table name in a view.

If a secondary database has a different name from the primary database, then these object references would break in the secondary database.

Secure Data Sharing parameters¶

provider_account.share_name

Specifies the identifier of the share from which to create the database. As shown in the syntax, the name of the share must be fully qualified with the name of the account providing the share.

Database replication parameters¶

Attention

Snowflake recommends using the account replication feature to replicate databases.

AS REPLICA OF account_identifier.primary_db_name

Specifies the identifier for a primary database from which to create a replica (i.e. a secondary database). If the identifier contains spaces, special characters, or mixed-case characters, the entire string must be enclosed in double quotes.

Requires the account identifier and name of the primary database.

account_identifier

Unique identifier of the account that stores the primary database. The preferred identifier is organization_name.account_name. To view the list of accounts enabled for replication in your organization, query SHOW REPLICATION ACCOUNTS.

Though the legacy account locator can also be used as the account identifier, its use is discouraged as it may not work in the future. For more details about using the account locator as an account identifier, see Database Replication Usage Notes.

primary_db_name

Name of the primary database. As a best practice, we recommend giving each secondary database the same name as its primary database.

Note

As a best practice for Database Replication and Failover, we recommend setting the optional parameter DATA_RETENTION_TIME_IN_DAYS to the same value on the secondary database as on the primary database.
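For example, a minimal sketch (assuming a hypothetical organization myorg, account account1, and primary database mydb1) that creates a secondary database with the same name and data retention setting as its primary:

CREATE DATABASE mydb1
    AS REPLICA OF myorg.account1.mydb1
    DATA_RETENTION_TIME_IN_DAYS = 1;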

Listing parameters¶

'listing_global_name'

Specifies the global name of the listing from which to create the database, which must meet the following requirements:

  • Can’t be a monetized listing.

  • Unless the listing terms are of type OFFLINE, the terms must have been accepted using Snowsight.

  • Listing data products must be available locally in the current region.

    To determine whether a listing is available in the local region, check the is_ready_for_import column in the output of DESCRIBE AVAILABLE LISTING.
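For example, a minimal sketch (assuming a hypothetical listing global name) that creates a database from a listing:

-- Confirm that is_ready_for_import is true in the DESCRIBE AVAILABLE LISTING output
-- before running this command. 'GZTYZEXAMPLE' is a placeholder global name.
CREATE DATABASE sales_data FROM LISTING 'GZTYZEXAMPLE';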

Optional parameters¶

TRANSIENT

Specifies a database as transient. Transient databases do not have a Fail-safe period so they do not incur additional storage costs once they leave Time Travel; however, this means they are also not protected by Fail-safe in the event of a data loss. For more information, see Understanding and viewing Fail-safe.

In addition, by definition, all schemas (and consequently all tables) created in a transient database are transient. For more information about transient tables, see CREATE TABLE.

Default: No value (i.e. database is permanent)

CLONE source_db

Specifies to create a clone of the specified source database. For more details about cloning a database, see CREATE <object> … CLONE.

AT | BEFORE ( TIMESTAMP => timestamp | OFFSET => time_difference | STATEMENT => id )

When cloning a database, the AT | BEFORE clause specifies to use Time Travel to clone the database at or before a specific point in the past. If the specified Time Travel time is at or before the point in time when the database was created, the cloning operation fails with an error.

IGNORE TABLES WITH INSUFFICIENT DATA RETENTION

Ignores tables that no longer have the historical data in Time Travel that is required to clone them. If the point in the past specified in the AT | BEFORE clause is beyond the data retention period for any child table in the database or schema, the cloning operation skips that child table. For more information, see Child Objects and Data Retention Time.

IGNORE HYBRID TABLES

Skips hybrid tables during cloning. Use this option to clone a database that contains hybrid tables; the cloned database includes all other objects but omits the hybrid tables.

If you don’t use this option and your database contains one or more hybrid tables, the command ignores hybrid tables silently. However, the error handling for databases that contain hybrid tables will change in an upcoming release; therefore, you may want to add this parameter to your commands preemptively.
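For example, a minimal sketch (assuming a hypothetical source database prod_db and an arbitrary timestamp) that clones a database at a point in the past while skipping tables that cannot be cloned:

CREATE DATABASE restored_db CLONE prod_db
    AT (TIMESTAMP => '2024-06-01 00:00:00'::TIMESTAMP_LTZ)
    IGNORE TABLES WITH INSUFFICIENT DATA RETENTION
    IGNORE HYBRID TABLES;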

DATA_RETENTION_TIME_IN_DAYS = integer

Specifies the number of days for which Time Travel actions (CLONE and UNDROP) can be performed on the database, as well as specifying the default Time Travel retention time for all schemas created in the database. For more details, see Understanding & using Time Travel.

For a detailed description of this object-level parameter, as well as more information about object parameters, see Parameters.

Values:

  • Standard Edition: 0 or 1

  • Enterprise Edition:

    • 0 to 90 for permanent databases

    • 0 or 1 for transient databases

Default:

  • Standard Edition: 1

  • Enterprise Edition (or higher): 1 (unless a different default value was specified at the account level)

Note

A value of 0 effectively disables Time Travel for the database.

MAX_DATA_EXTENSION_TIME_IN_DAYS = integer

Object parameter that specifies the maximum number of days for which Snowflake can extend the data retention period for tables in the database to prevent streams on the tables from becoming stale.

For a detailed description of this parameter, see MAX_DATA_EXTENSION_TIME_IN_DAYS.
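For example, a minimal sketch (with a hypothetical name and value) that raises the maximum data extension period for a new database:

CREATE DATABASE streams_db MAX_DATA_EXTENSION_TIME_IN_DAYS = 30;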

EXTERNAL_VOLUME = external_volume_name

Object parameter that specifies the default external volume to use for Apache Iceberg™ tables.

For more information about this parameter, see EXTERNAL_VOLUME.

CATALOG = catalog_integration_name

Object parameter that specifies the default catalog integration to use for Apache Iceberg™ tables.

For more information about this parameter, see CATALOG.

REPLACE_INVALID_CHARACTERS = { TRUE | FALSE }

Specifies whether to replace invalid UTF-8 characters with the Unicode replacement character (�) in query results for an Iceberg table. You can only set this parameter for tables that use an external Iceberg catalog.

  • TRUE replaces invalid UTF-8 characters with the Unicode replacement character.

  • FALSE leaves invalid UTF-8 characters unchanged. Snowflake returns a user error message when it encounters invalid UTF-8 characters in a Parquet data file.

Default: FALSE

DEFAULT_DDL_COLLATION = 'collation_specification'

Specifies a default collation specification for all schemas and tables added to the database. The default can be overridden at the schema and individual table level.

For more details about the parameter, see DEFAULT_DDL_COLLATION.
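For example, a minimal sketch (with a hypothetical name) that sets a case-insensitive English collation as the database default:

CREATE DATABASE intl_db DEFAULT_DDL_COLLATION = 'en-ci';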

STORAGE_SERIALIZATION_POLICY = { COMPATIBLE | OPTIMIZED }

Specifies the storage serialization policy for Apache Iceberg™ tables that use Snowflake as the catalog.

  • COMPATIBLE: Snowflake performs encoding and compression of data files that ensures interoperability with third-party compute engines.

  • OPTIMIZED: Snowflake performs encoding and compression of data files that ensures the best table performance within Snowflake.

Default: OPTIMIZED
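For example, a minimal sketch (assuming a hypothetical external volume and catalog integration already exist) that sets Iceberg-related defaults at the database level:

CREATE DATABASE iceberg_db
    EXTERNAL_VOLUME = 'my_ext_vol'              -- default external volume for Iceberg tables
    CATALOG = 'my_catalog_int'                  -- default catalog integration (externally managed tables)
    REPLACE_INVALID_CHARACTERS = TRUE           -- applies to tables that use an external Iceberg catalog
    STORAGE_SERIALIZATION_POLICY = COMPATIBLE;  -- applies to tables that use Snowflake as the catalog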

COMMENT = 'string_literal'

Specifies a comment for the database.

Default: No value

TAG ( tag_name = 'tag_value' [ , tag_name = 'tag_value' , ... ] )

Specifies the tag name and the tag string value.

The tag value is always a string, and the maximum number of characters for the tag value is 256.

For information about specifying tags in a statement, see Tag quotas for objects and columns.
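For example, a minimal sketch (assuming the hypothetical tags cost_center and env have already been created with CREATE TAG) that creates a database with a comment and tags:

CREATE DATABASE tagged_db
    COMMENT = 'Sales analytics database'
    WITH TAG (cost_center = 'sales', env = 'dev');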

Access control requirements¶

A role used to execute this SQL command must have the following privileges at a minimum:

Privilege       | Object                               | Notes
----------------|--------------------------------------|----------------------------------------------------------------------
CREATE DATABASE | Account                              | Only the SYSADMIN role, or a higher role, has this privilege by default. The privilege can be granted to additional roles as needed.
USAGE           | External volume, catalog integration | Required if setting the EXTERNAL_VOLUME or CATALOG object parameters, respectively.

For instructions on creating a custom role with a specified set of privileges, see Creating custom roles.

For general information about roles and privilege grants for performing SQL actions on securable objects, see Overview of Access Control.
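For example, a minimal sketch (assuming a hypothetical custom role db_creator_role) that grants the privilege needed to run this command:

GRANT CREATE DATABASE ON ACCOUNT TO ROLE db_creator_role;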

General usage notes¶

  • Creating a database automatically sets it as the active/current database for the current session (equivalent to using the USE DATABASE command for the database); see the example after these notes.

  • If a database with the same name already exists, an error is returned and the database is not created, unless the optional OR REPLACE keyword is specified in the command.

    Important

    Using OR REPLACE is the equivalent of using DROP DATABASE on the existing database and then creating a new database with the same name; however, the dropped database is not permanently removed from the system. Instead, it is retained in Time Travel. This is important because dropped databases in Time Travel contribute to data storage for your account. For more information, see Storage Costs for Time Travel and Fail-safe.

  • CREATE OR REPLACE <object> statements are atomic. That is, when an object is replaced, the old object is deleted and the new object is created in a single transaction.

  • Creating a new database automatically creates two schemas in the database:

    • PUBLIC: Default schema for the database.

    • INFORMATION_SCHEMA: Schema which contains views and table functions that can be used for querying metadata about the objects in the database, as well as across all objects in the account.

  • Databases created from shares differ from standard databases in the following ways:

    • They do not have the PUBLIC or INFORMATION_SCHEMA schemas unless these schemas were explicitly granted to the share.

    • They cannot be cloned.

    • Properties, such as TRANSIENT and DATA_RETENTION_TIME_IN_DAYS, do not apply.

  • When a database is active/current, the PUBLIC schema is also active/current by default unless a different schema is used or the PUBLIC schema has been dropped.

  • Regarding metadata:

    Attention

    Customers should ensure that no personal data (other than for a User object), sensitive data, export-controlled data, or other regulated data is entered as metadata when using the Snowflake service. For more information, see Metadata fields in Snowflake.
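For example, a minimal sketch showing that a newly created database becomes the current database and that its PUBLIC schema becomes the current schema:

CREATE DATABASE mytestdb3;

SELECT CURRENT_DATABASE(), CURRENT_SCHEMA();

-- Returns MYTESTDB3 and PUBLIC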

Database replication usage notes¶

Attention

Snowflake recommends using the account replication feature to replicate databases.

  • Database replication uses Snowflake-provided compute resources instead of your own virtual warehouse to copy objects and data. However, the STATEMENT_TIMEOUT_IN_SECONDS session/object parameter still controls how long a statement runs before it is canceled. The default value is 172800 (2 days). Because the initial replication of a primary database can take longer than 2 days to complete (depending on the amount of metadata in the database as well as the amount of data in database objects), we recommend increasing the STATEMENT_TIMEOUT_IN_SECONDS value to 604800 (7 days, the maximum value) for the session in which you run the replication operation.

    Run the following ALTER SESSION statement prior to executing the ALTER DATABASE secondary_db_name REFRESH statement in the same session:

    ALTER SESSION SET STATEMENT_TIMEOUT_IN_SECONDS = 604800;
    

    Note that the STATEMENT_TIMEOUT_IN_SECONDS parameter also applies to the active warehouse in a session. The parameter honors the lower value set at the session or warehouse level. If you have an active warehouse in the current session, also set STATEMENT_TIMEOUT_IN_SECONDS to 604800 for this warehouse (using ALTER WAREHOUSE).

    For example:

    -- determine the active warehouse in the current session (if any)
    SELECT CURRENT_WAREHOUSE();
    
    +---------------------+
    | CURRENT_WAREHOUSE() |
    |---------------------|
    | MY_WH               |
    +---------------------+
    
    -- change the STATEMENT_TIMEOUT_IN_SECONDS value for the active warehouse
    
    ALTER WAREHOUSE my_wh SET STATEMENT_TIMEOUT_IN_SECONDS = 604800;
    

    You can reset the parameter value to the default after the replication operation is completed:

    ALTER WAREHOUSE my_wh UNSET STATEMENT_TIMEOUT_IN_SECONDS;
    
  • The preferred method of identifying the account that stores the primary database uses the organization name and account name as the account identifier. If you decide to use the legacy account locator instead, see Account identifiers for replication and failover.

  • The CREATE DATABASE … AS REPLICA command does not support the WITH TAG clause.

    This clause is not supported because the secondary database is read only. If your primary database specifies the WITH TAG clause, remove the clause prior to creating the secondary database. To verify whether your database has the WITH TAG clause, call the GET_DDL function in your Snowflake account and specify the primary database in the function argument. If a tag is set on the database, the function output will include an ALTER DATABASE … SET TAG statement.

    For more information, see Replication and tags.
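    For example, a minimal sketch (assuming a hypothetical primary database named primary_db) that checks whether a tag is set on the database:

    SELECT GET_DDL('DATABASE', 'primary_db');

    -- If a tag is set, the returned DDL includes an ALTER DATABASE ... SET TAG statement.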

Examples¶

Create two permanent databases, one with a data retention period of 10 days:

CREATE DATABASE mytestdb;

CREATE DATABASE mytestdb2 DATA_RETENTION_TIME_IN_DAYS = 10;

SHOW DATABASES LIKE 'my%';

+---------------------------------+------------+------------+------------+--------+----------+---------+---------+----------------+
| created_on                      | name       | is_default | is_current | origin | owner    | comment | options | retention_time |
|---------------------------------+------------+------------+------------+--------+----------+---------+---------+----------------|
| Tue, 17 Mar 2016 16:57:04 -0700 | MYTESTDB   | N          | N          |        | PUBLIC   |         |         | 1              |
| Tue, 17 Mar 2016 17:06:32 -0700 | MYTESTDB2  | N          | N          |        | PUBLIC   |         |         | 10             |
+---------------------------------+------------+------------+------------+--------+----------+---------+---------+----------------+

Create a transient database:

CREATE TRANSIENT DATABASE mytransientdb;

SHOW DATABASES LIKE 'my%';

+---------------------------------+---------------+------------+------------+--------+----------+---------+-----------+----------------+
| created_on                      | name          | is_default | is_current | origin | owner    | comment | options   | retention_time |
|---------------------------------+---------------+------------+------------+--------+----------+---------+-----------+----------------|
| Tue, 17 Mar 2016 16:57:04 -0700 | MYTESTDB      | N          | N          |        | PUBLIC   |         |           | 1              |
| Tue, 17 Mar 2016 17:06:32 -0700 | MYTESTDB2     | N          | N          |        | PUBLIC   |         |           | 10             |
| Tue, 17 Mar 2016 17:07:51 -0700 | MYTRANSIENTDB | N          | N          |        | PUBLIC   |         | TRANSIENT | 1              |
+---------------------------------+---------------+------------+------------+--------+----------+---------+-----------+----------------+

Create a database from a share provided by account ab67890:

CREATE DATABASE snow_sales FROM SHARE ab67890.sales_s;

For more detailed examples of creating a database from a share, see Consume shared data.

Database replication examples¶

Attention

Snowflake recommends using the account replication feature to replicate databases.

For an example of creating a replication group to replicate a single database to a target account, see Replicate a single database.