CREATE STAGE

Creates a new named internal or external stage to use for loading data from files into Snowflake tables and unloading data from tables into files:

Internal stage:

Stores data files internally within Snowflake. Internal stages can be either permanent or temporary. For more details, see Choosing an internal stage for local files.

External stage:

References data files stored in a location outside of Snowflake. Currently, the following cloud storage services are supported:

  • Amazon S3 buckets

  • Google Cloud Storage buckets

  • Microsoft Azure containers

The storage location can be either private/protected or public.

You cannot access data held in archival cloud storage classes that require restoration before the data can be retrieved. These archival storage classes include, for example, Amazon S3 Glacier Flexible Retrieval, Amazon S3 Glacier Deep Archive, and Microsoft Azure Archive Storage.

An internal or external stage can include a directory table. Directory tables store a catalog of staged files in cloud storage.
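For example, once a directory table is enabled on a stage, you can query its catalog of staged files with the DIRECTORY table function. A minimal sketch (the stage name mystage is hypothetical and assumes the stage was created with DIRECTORY = ( ENABLE = TRUE )):

SELECT relative_path, size, last_modified
  FROM DIRECTORY(@mystage);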

See also:

DROP STAGE, ALTER STAGE, SHOW STAGES, DESCRIBE STAGE

PUT, COPY INTO <table>

COPY INTO <location>, GET

Syntax

-- Internal stage
CREATE [ OR REPLACE ] [ { TEMP | TEMPORARY } ] STAGE [ IF NOT EXISTS ] <internal_stage_name>
    internalStageParams
    directoryTableParams
  [ COMMENT = '<string_literal>' ]
  [ [ WITH ] TAG ( <tag_name> = '<tag_value>' [ , <tag_name> = '<tag_value>' , ... ] ) ]

-- External stage
CREATE [ OR REPLACE ] [ { TEMP | TEMPORARY } ] STAGE [ IF NOT EXISTS ] <external_stage_name>
    externalStageParams
    directoryTableParams
  [ COMMENT = '<string_literal>' ]
  [ [ WITH ] TAG ( <tag_name> = '<tag_value>' [ , <tag_name> = '<tag_value>' , ... ] ) ]

Where:

internalStageParams ::=
  [ ENCRYPTION = (TYPE = 'SNOWFLAKE_FULL' | TYPE = 'SNOWFLAKE_SSE') ]

externalStageParams (for Amazon S3) ::=
  URL = { 's3://<bucket>[/<path>/]' | 's3gov://<bucket>[/<path>/]' }

  [ { STORAGE_INTEGRATION = <integration_name> } | { CREDENTIALS = ( { { AWS_KEY_ID = '<string>' AWS_SECRET_KEY = '<string>' [ AWS_TOKEN = '<string>' ] } | AWS_ROLE = '<string>' } ) } ]
  [ ENCRYPTION = ( [ TYPE = 'AWS_CSE' ] [ MASTER_KEY = '<string>' ] |
                   [ TYPE = 'AWS_SSE_S3' ] |
                   [ TYPE = 'AWS_SSE_KMS' [ KMS_KEY_ID = '<string>' ] ] |
                   [ TYPE = 'NONE' ] ) ]

externalStageParams (for Google Cloud Storage) ::=
  URL = 'gcs://<bucket>[/<path>/]'
  [ STORAGE_INTEGRATION = <integration_name> ]
  [ ENCRYPTION = ( [ TYPE = 'GCS_SSE_KMS' ] [ KMS_KEY_ID = '<string>' ] | [ TYPE = 'NONE' ] ) ]

externalStageParams (for Microsoft Azure) ::=
  URL = 'azure://<account>.blob.core.windows.net/<container>[/<path>/]'
  [ { STORAGE_INTEGRATION = <integration_name> } | { CREDENTIALS = ( [ AZURE_SAS_TOKEN = '<string>' ] ) } ]
   [ ENCRYPTION = ( [ TYPE = 'AZURE_CSE' ] [ MASTER_KEY = '<string>' ] | [ TYPE = 'NONE' ] ) ]

externalStageParams (for Amazon S3-compatible Storage) ::=
  URL = 's3compat://{bucket}[/{path}/]'
  ENDPOINT = '<s3_api_compatible_endpoint>'
  [ { CREDENTIALS = ( AWS_KEY_ID = '<string>' AWS_SECRET_KEY = '<string>' ) } ]

directoryTableParams (for internal stages) ::=
  [ DIRECTORY = ( ENABLE = { TRUE | FALSE }
                  [ REFRESH_ON_CREATE =  { TRUE | FALSE } ] ) ]

directoryTableParams (for Amazon S3) ::=
  [ DIRECTORY = ( ENABLE = { TRUE | FALSE }
                  [ REFRESH_ON_CREATE =  { TRUE | FALSE } ]
                  [ AUTO_REFRESH = { TRUE | FALSE } ] ) ]

directoryTableParams (for Google Cloud Storage) ::=
  [ DIRECTORY = ( ENABLE = { TRUE | FALSE }
                  [ AUTO_REFRESH = { TRUE | FALSE } ]
                  [ REFRESH_ON_CREATE =  { TRUE | FALSE } ]
                  [ NOTIFICATION_INTEGRATION = '<notification_integration_name>' ] ) ]

directoryTableParams (for Microsoft Azure) ::=
  [ DIRECTORY = ( ENABLE = { TRUE | FALSE }
                  [ REFRESH_ON_CREATE =  { TRUE | FALSE } ]
                  [ AUTO_REFRESH = { TRUE | FALSE } ]
                  [ NOTIFICATION_INTEGRATION = '<notification_integration_name>' ] ) ]

Required parameters

internal_stage_name or external_stage_name

Specifies the identifier for the stage; must be unique for the schema in which the stage is created.

In addition, the identifier must start with an alphabetic character and cannot contain spaces or special characters unless the entire identifier string is enclosed in double quotes (e.g. "My object"). Identifiers enclosed in double quotes are also case-sensitive.

For more details, see Identifier requirements.
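For illustration, both of the following statements are valid; the names are hypothetical, and only the quoted one is case-sensitive:

CREATE STAGE my_stage;      -- resolved case-insensitively as MY_STAGE
CREATE STAGE "My Stage";    -- case-sensitive; must always be referenced as "My Stage"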

Note

When creating an external stage, a URL is also required. For more details, see External Stage Parameters (in this topic).

If a URL is not specified, Snowflake creates an internal stage by default.

Optional parameters

{ TEMP | TEMPORARY }

Specifies that the stage created is temporary and will be dropped at the end of the session in which it was created. Note:

  • When a temporary external stage is dropped, only the stage itself is dropped; the data files are not removed.

  • When a temporary internal stage is dropped, all of the files in the stage are purged from Snowflake, regardless of their load status. This prevents files in temporary internal stages from using data storage and, consequently, accruing storage charges. However, this also means that the staged files cannot be recovered through Snowflake once the stage is dropped.

    Tip

    If you plan to create and use temporary internal stages, you should maintain copies of your data files outside of Snowflake.

COMMENT = 'string_literal'

Specifies a comment for the stage.

Default: No value

TAG ( tag_name = 'tag_value' [ , tag_name = 'tag_value' , ... ] )

Specifies the tag name and the tag string value.

The tag value is always a string, and the maximum number of characters for the tag value is 256.

For information about specifying tags in a statement, see Tag quotas for objects and columns.
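As a hedged sketch, assuming a tag named cost_center has been created beforehand (the tag and stage names are illustrative):

CREATE TAG cost_center;

CREATE STAGE my_tagged_stage
  WITH TAG (cost_center = 'finance');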

Internal stage parameters (internalStageParams)

[ ENCRYPTION = (TYPE = 'SNOWFLAKE_FULL' | TYPE = 'SNOWFLAKE_SSE') ]

Specifies the type of encryption supported for all files stored on the stage. You cannot change the encryption type after you create the stage.

TYPE = ...

Specifies the encryption type used.

Important

If you require Tri-Secret Secure for security compliance, use the SNOWFLAKE_FULL encryption type for internal stages. SNOWFLAKE_SSE does not support Tri-Secret Secure.

Possible values are:

  • SNOWFLAKE_FULL: Client-side and server-side encryption. The files are encrypted by a client when it uploads them to the internal stage using PUT. Snowflake uses a 128-bit encryption key by default. You can configure a 256-bit key by setting the CLIENT_ENCRYPTION_KEY_SIZE parameter.

    All files are also automatically encrypted using AES-256 strong encryption on the server side.

  • SNOWFLAKE_SSE: Server-side encryption only. The files are encrypted when they arrive on the stage by the cloud service where your Snowflake account is hosted.

    Specify server-side encryption if you plan to query pre-signed URLs for your staged files. For more information, see Types of URLs available to access files.

Default: SNOWFLAKE_FULL
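For example, a minimal sketch that creates a SNOWFLAKE_FULL stage and configures a 256-bit client-side key; the stage name is illustrative, and setting the account-level CLIENT_ENCRYPTION_KEY_SIZE parameter requires appropriate privileges (typically ACCOUNTADMIN):

ALTER ACCOUNT SET CLIENT_ENCRYPTION_KEY_SIZE = 256;

CREATE STAGE my_full_enc_stage
  ENCRYPTION = (TYPE = 'SNOWFLAKE_FULL');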

External stage parameters (externalStageParams)

URL = 'cloud_specific_url'

If this parameter is omitted, Snowflake creates an internal stage.

Important

  • Enclose the URL in single quotes ('') in order for Snowflake to identify the string. If the quotes are omitted, any credentials you supply may be displayed in plain text in the history. We strongly recommend verifying the syntax of the CREATE STAGE statement before you execute it.

    When you create a stage in the Snowflake web interface, the interface automatically encloses field values in quotation characters, as needed.

  • Append a forward slash (/) to the URL to filter to the specified folder path. If the forward slash is omitted, all files and folders starting with the prefix for the specified path are included (see the sketch after this note).

    Note that the forward slash is required to access and retrieve unstructured data files in the stage.
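For example, the two URL forms below match different sets of files; the bucket and folder names are hypothetical, and a private bucket would also need a storage integration or credentials:

-- Matches only files under the files/ folder
CREATE STAGE folder_stage
  URL = 's3://mybucket/files/';

-- Matches files/, files_archive/, and any other path starting with the prefix "files"
CREATE STAGE prefix_stage
  URL = 's3://mybucket/files';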

Amazon S3

URL = { 's3://{bucket}[/{path}/]' | 's3gov://{bucket}[/{path}/]' }

Specifies the URL for the external location (existing S3 bucket) used to store data files for loading/unloading, where:

  • The s3 prefix refers to S3 storage in public AWS regions.

    The s3gov prefix refers to S3 storage in government regions. Note that currently, accessing S3 storage in AWS government regions using a storage integration is limited to Snowflake accounts hosted on AWS in the same government region. Accessing your S3 storage from an account hosted outside of the government region using direct credentials is supported.

  • bucket is the name of the S3 bucket.

  • path is an optional case-sensitive path for files in the cloud storage location (i.e. files have names that begin with a common string) that limits the set of files. Paths are alternatively called prefixes or folders by different cloud storage services.

Google Cloud Storage

URL = 'gcs://bucket[/path/]'

Specifies the URL for the external location (existing GCS bucket) used to store data files for loading/unloading, where:

  • bucket is the name of the GCS bucket.

  • path is an optional case-sensitive path for files in the cloud storage location (i.e. files have names that begin with a common string) that limits the set of files. Paths are alternatively called prefixes or folders by different cloud storage services.

Microsoft Azure

URL = 'azure://account.blob.core.windows.net/container[/path/]'

Specifies the URL for the external location (existing Azure container) used to store data files for loading, where:

  • account is the name of the Azure account (e.g. myaccount). Use the blob.core.windows.net endpoint for all supported types of Azure blob storage accounts, including Data Lake Storage Gen2.

    Note that currently, accessing Azure blob storage in government regions using a storage integration is limited to Snowflake accounts hosted on Azure in the same government region. Accessing your blob storage from an account hosted outside of the government region using direct credentials is supported.

  • container is the name of the Azure container (e.g. mycontainer).

  • path is an optional case-sensitive path for files in the cloud storage location (i.e. files have names that begin with a common string) that limits the set of files. Paths are alternatively called prefixes or folders by different cloud storage services.

Default: No value (an internal stage is created)

STORAGE_INTEGRATION = integration_name or CREDENTIALS = ( cloud_specific_credentials )

Required only if the storage location is private/protected; not required for public buckets/containers

Amazon S3

STORAGE_INTEGRATION = integration_name

Specifies the name of the storage integration used to delegate authentication responsibility for external cloud storage to a Snowflake identity and access management (IAM) entity. For more details, see CREATE STORAGE INTEGRATION.

Note

  • We highly recommend the use of storage integrations. This option avoids the need to supply cloud storage credentials using the CREDENTIALS parameter when creating stages or loading data.

  • Accessing S3 storage in government regions using a storage integration is limited to Snowflake accounts hosted on AWS in the same government region. Accessing your S3 storage from an account hosted outside of the government region using direct credentials is supported.

CREDENTIALS = ( AWS_KEY_ID = 'string' AWS_SECRET_KEY = 'string' [ AWS_TOKEN = 'string' ] ) or CREDENTIALS = ( AWS_ROLE = 'string' )

Specifies the security credentials for connecting to AWS and accessing the private/protected S3 bucket where the files to load/unload are staged. For more information, see Configuring secure access to Amazon S3.

The credentials you specify depend on whether you associated the Snowflake access permissions for the bucket with an AWS IAM (Identity & Access Management) user or role:

  • IAM user: IAM credentials are required. Temporary (aka “scoped”) credentials are generated by AWS Security Token Service (STS) and consist of three components:

    • AWS_KEY_ID

    • AWS_SECRET_KEY

    • AWS_TOKEN

    All three are required to access a private/protected bucket. After a designated period of time, temporary credentials expire and can no longer be used. You must then generate a new set of valid temporary credentials. (A sketch of this form appears after this list.)

    Important

    The COPY command also allows permanent (aka “long-term”) credentials to be used; however, for security reasons, Snowflake does not recommend using them. If you must use permanent credentials, Snowflake recommends periodically generating new permanent credentials for external stages.

  • IAM role: Omit the security credentials and access keys and, instead, identify the role using AWS_ROLE and specify the AWS role ARN (Amazon Resource Name).
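A minimal sketch of the IAM user form with temporary STS credentials; all credential values are placeholders:

CREATE STAGE my_sts_stage
  URL = 's3://load/files/'
  CREDENTIALS = (AWS_KEY_ID = 'ASIA...'
                 AWS_SECRET_KEY = '4x5y6z'
                 AWS_TOKEN = 'FQoGZXIvYXdzE...');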

Google Cloud Storage

STORAGE_INTEGRATION = integration_name

Specifies the name of the storage integration used to delegate authentication responsibility for external cloud storage to a Snowflake identity and access management (IAM) entity. For more details, see CREATE STORAGE INTEGRATION.

Microsoft Azure

STORAGE_INTEGRATION = integration_name

Specifies the name of the storage integration used to delegate authentication responsibility for external cloud storage to a Snowflake identity and access management (IAM) entity. For more details, see CREATE STORAGE INTEGRATION.

Note

  • We highly recommend the use of storage integrations. This option avoids the need to supply cloud storage credentials using the CREDENTIALS parameter when creating stages or loading data.

  • Accessing Azure blob storage in government regions using a storage integration is limited to Snowflake accounts hosted on Azure in the same government region. Accessing your blob storage from an account hosted outside of the government region using direct credentials is supported.

CREDENTIALS = ( AZURE_SAS_TOKEN = 'string' )

Specifies the SAS (shared access signature) token for connecting to Azure and accessing the private/protected container where the files containing loaded data are staged. Credentials are generated by Azure.

Default: No value (no credentials are provided for the external stage)

ENCRYPTION = ( cloud_specific_encryption )

Required only for loading from/unloading into encrypted files; not required if storage location and files are unencrypted

Data loading:

Modifies the encryption settings used to decrypt encrypted files in the storage location and extract data.

Data unloading:

Modifies the encryption settings used to encrypt files unloaded to the storage location.

Amazon S3

ENCRYPTION = ( [ TYPE = 'AWS_CSE' ] [ MASTER_KEY = '<string>' ] | [ TYPE = 'AWS_SSE_S3' ] | [ TYPE = 'AWS_SSE_KMS' [ KMS_KEY_ID = '<string>' ] ] | [ TYPE = 'NONE' ] )

TYPE = ...

Specifies the encryption type used. Possible values are:

  • AWS_CSE: Client-side encryption (requires a MASTER_KEY value). Currently, the client-side master key you provide can only be a symmetric key. Note that, when a MASTER_KEY value is provided, Snowflake assumes TYPE = AWS_CSE (i.e. when a MASTER_KEY value is provided, TYPE is not required).

  • AWS_SSE_S3: Server-side encryption that requires no additional encryption settings.

  • AWS_SSE_KMS: Server-side encryption that accepts an optional KMS_KEY_ID value.

  • NONE: No encryption.

For more information about the encryption types, see the AWS documentation for client-side encryption or server-side encryption.

MASTER_KEY = 'string' (applies to AWS_CSE encryption only)

Specifies the client-side master key used to encrypt the files in the bucket. The master key must be a 128-bit or 256-bit key in Base64-encoded form.

KMS_KEY_ID = 'string' (applies to AWS_SSE_KMS encryption only)

Optionally specifies the ID for the AWS KMS-managed key used to encrypt files unloaded into the bucket. If no value is provided, your default KMS key ID is used to encrypt files on unload.

Note that this value is ignored for data loading.

Google Cloud Storage

ENCRYPTION = ( [ TYPE = 'GCS_SSE_KMS' | 'NONE' ] [ KMS_KEY_ID = 'string' ] )

TYPE = ...

Specifies the encryption type used. Possible values are:

  • GCS_SSE_KMS: Server-side encryption that accepts an optional KMS_KEY_ID value.

  • NONE: No encryption.

KMS_KEY_ID = 'string' (applies to GCS_SSE_KMS encryption only)

Optionally specifies the ID for the Cloud KMS-managed key that is used to encrypt files unloaded into the bucket. If no value is provided, your default KMS key ID set on the bucket is used to encrypt files on unload.

Note that this value is ignored for data loading. The load operation should succeed if the service account has sufficient permissions to decrypt data in the bucket.
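A hedged sketch of a GCS stage with KMS server-side encryption for unloaded files; the integration name and key ID are placeholders:

CREATE STAGE my_gcs_enc_stage
  URL = 'gcs://load/encrypted_files/'
  STORAGE_INTEGRATION = myint
  ENCRYPTION = (TYPE = 'GCS_SSE_KMS' KMS_KEY_ID = 'mykms-key-id');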

Microsoft Azure

ENCRYPTION = ( [ TYPE = 'AZURE_CSE' | 'NONE' ] [ MASTER_KEY = 'string' ] )

TYPE = ...

Specifies the encryption type used. Possible values are:

  • AZURE_CSE: Client-side encryption (requires a MASTER_KEY value). For information, see the Client-side encryption information in the Microsoft Azure documentation.

  • NONE: No encryption.

MASTER_KEY = 'string' (applies to AZURE_CSE encryption only)

Specifies the client-side master key used to encrypt or decrypt files. The master key must be a 128-bit or 256-bit key in Base64-encoded form.

External stage parameters for Amazon S3-compatible storage (externalStageParams)

URL = 's3compat://bucket[/path/]'

Specifies the URL for the external location (existing bucket accessed using an S3-compatible API endpoint) used to store data files, where:

  • bucket is the name of the bucket.

  • path is an optional case-sensitive path (or prefix in S3 terminology) for files in the cloud storage location (i.e. files with names that begin with a common string).

ENDPOINT = 's3_api_compatible_endpoint'

Fully-qualified domain that points to the S3-compatible API endpoint.
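A minimal sketch for S3-compatible storage; the bucket, endpoint, and credential values are placeholders:

CREATE STAGE my_s3compat_stage
  URL = 's3compat://mybucket/files/'
  ENDPOINT = 'storage.example.com'
  CREDENTIALS = (AWS_KEY_ID = '1a2b3c' AWS_SECRET_KEY = '4x5y6z');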

Directory table parameters (directoryTableParams)

ENABLE = TRUE | FALSE

Specifies whether to add a directory table to the stage. When the value is TRUE, a directory table is created with the stage.

Note

Setting this parameter to TRUE is not supported for S3-compatible external stages. The metadata for S3-compatible external stages cannot be refreshed automatically.

Default: FALSE

External stages

Amazon S3

REFRESH_ON_CREATE = TRUE | FALSE

Specifies whether to automatically refresh the directory table metadata once, immediately after the stage is created. Refreshing the directory table metadata synchronizes the metadata with the current list of data files in the specified stage path. This action is required for the metadata to register any existing data files in the named stage specified in the URL = setting.

TRUE

Snowflake automatically refreshes the directory table metadata once after the stage creation.

Note

If the specified cloud storage URL contains close to 1 million files or more, we recommend that you set REFRESH_ON_CREATE = FALSE. After creating the stage, refresh the directory table metadata incrementally by executing ALTER STAGE … REFRESH statements that specify subpaths in the storage location (i.e. subsets of files to include in the refresh) until the metadata includes all of the files in the location.

FALSE

Snowflake does not automatically refresh the directory table metadata. To register any data files that exist in the stage, you must manually refresh the directory table metadata once using ALTER STAGE … REFRESH.

Default: TRUE
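For example, for a location with a very large number of files, a hedged sketch of creating the stage without the initial refresh and then refreshing the directory table metadata by subpath (the stage, integration, and path names are illustrative):

CREATE STAGE big_stage
  URL = 's3://load/files/'
  STORAGE_INTEGRATION = myint
  DIRECTORY = (ENABLE = TRUE REFRESH_ON_CREATE = FALSE);

ALTER STAGE big_stage REFRESH SUBPATH = '2024/';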

AUTO_REFRESH = TRUE | FALSE

Specifies whether Snowflake should enable triggering automatic refreshes of the directory table metadata when new or updated data files are available in the named external stage specified in the URL value.

TRUE

Snowflake enables triggering automatic refreshes of the directory table metadata.

FALSE

Snowflake does not enable triggering automatic refreshes of the directory table metadata. You must manually refresh the directory table metadata periodically using ALTER STAGE … REFRESH to synchronize the metadata with the current list of files in the stage path.

Default: FALSE

Google Cloud Storage

REFRESH_ON_CREATE = TRUE | FALSE

Specifies whether to automatically refresh the directory table metadata once, immediately after the stage is created. Refreshing the directory table metadata synchronizes the metadata with the current list of data files in the specified stage path. This action is required for the metadata to register any existing data files in the named stage specified in the URL = setting.

TRUE

Snowflake automatically refreshes the directory table metadata once after the stage creation.

Note

If the specified cloud storage URL contains close to 1 million files or more, we recommend that you set REFRESH_ON_CREATE = FALSE. After creating the stage, refresh the directory table metadata incrementally by executing ALTER STAGE … REFRESH statements that specify subpaths in the storage location (i.e. subsets of files to include in the refresh) until the metadata includes all of the files in the location.

FALSE

Snowflake does not automatically refresh the directory table metadata. To register any data files that exist in the stage, you must manually refresh the directory table metadata once using ALTER STAGE … REFRESH.

Default: TRUE

AUTO_REFRESH = TRUE | FALSE

Specifies whether Snowflake should enable triggering automatic refreshes of the directory table metadata when new or updated data files are available in the named external stage specified in the URL value.

TRUE

Snowflake enables triggering automatic refreshes of the directory table metadata.

FALSE

Snowflake does not enable triggering automatic refreshes of the directory table metadata. You must manually refresh the directory table metadata periodically using ALTER STAGE … REFRESH to synchronize the metadata with the current list of files in the stage path.

Default: FALSE

NOTIFICATION_INTEGRATION = '<notification_integration_name>'

Specifies the name of the notification integration used to automatically refresh the directory table metadata using GCS Pub/Sub notifications. A notification integration is a Snowflake object that provides an interface between Snowflake and third-party cloud message queuing services.

Microsoft Azure

REFRESH_ON_CREATE = TRUE | FALSE

Specifies whether to automatically refresh the directory table metadata once, immediately after the stage is created. Refreshing the directory table metadata synchronizes the metadata with the current list of data files in the specified stage path. This action is required for the metadata to register any existing data files in the named stage specified in the URL = setting.

TRUE

Snowflake automatically refreshes the directory table metadata once after the stage creation.

Note

If the specified cloud storage URL contains close to 1 million files or more, we recommend that you set REFRESH_ON_CREATE = FALSE. After creating the stage, refresh the directory table metadata incrementally by executing ALTER STAGE … REFRESH statements that specify subpaths in the storage location (i.e. subsets of files to include in the refresh) until the metadata includes all of the files in the location.

FALSE

Snowflake does not automatically refresh the directory table metadata. To register any data files that exist in the stage, you must manually refresh the directory table metadata once using ALTER STAGE … REFRESH.

Default: TRUE

AUTO_REFRESH = TRUE | FALSE

Specifies whether Snowflake should enable triggering automatic refreshes of the directory table metadata when new or updated data files are available in the named external stage specified in the URL value.

TRUE

Snowflake enables triggering automatic refreshes of the directory table metadata.

FALSE

Snowflake does not enable triggering automatic refreshes of the directory table metadata. You must manually refresh the directory table metadata periodically using ALTER STAGE … REFRESH to synchronize the metadata with the current list of files in the stage path.

Default: FALSE

NOTIFICATION_INTEGRATION = '<notification_integration_name>'

Specifies the name of the notification integration used to automatically refresh the directory table metadata using Azure Event Grid notifications. A notification integration is a Snowflake object that provides an interface between Snowflake and third-party cloud message queuing services.

Note

Do not specify file format options or copy options in the CREATE STAGE or CREATE TABLE commands. Instead, specify file format options and copy options in the COPY INTO <table> command.
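For example, a hedged sketch of supplying format options at load time instead; the table, stage, and option values are illustrative:

COPY INTO mytable
  FROM @mystage
  FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);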

Access control requirements

A role used to execute this SQL command must have the following privileges at a minimum:

  • USAGE on the storage integration. Required only if accessing a cloud storage service using a storage integration.

  • CREATE STAGE on the schema.

  • OWNERSHIP on the stage. A role must be granted or inherit the OWNERSHIP privilege on the object to create a temporary object that has the same name as an object that already exists in the schema.

Note that in a managed access schema, only the schema owner (i.e. the role with the OWNERSHIP privilege on the schema) or a role with the MANAGE GRANTS privilege can grant or revoke privileges on objects in the schema, including future grants.

Note that operating on any object in a schema also requires the USAGE privilege on the parent database and schema.

For instructions on creating a custom role with a specified set of privileges, see Creating custom roles.

For general information about roles and privilege grants for performing SQL actions on securable objects, see Overview of Access Control.

Usage notes

Important

If you require Tri-Secret Secure for security compliance, use the SNOWFLAKE_FULL encryption type for internal stages. SNOWFLAKE_SSE does not support Tri-Secret Secure.

Caution

Recreating a stage (using CREATE OR REPLACE STAGE) has the following additional, potentially undesirable, outcomes:

  • The existing directory table for the stage, if any, is dropped. If the stage is recreated with a directory table, the directory is empty by default.

  • The association breaks between the stage and any external table that references it.

    This is because an external table links to a stage using a hidden ID rather than the name of the stage. Behind the scenes, the CREATE OR REPLACE syntax drops an object and recreates it with a different hidden ID.

    If you must recreate a stage after it has been linked to one or more external tables, you must recreate each of the external tables (using CREATE OR REPLACE EXTERNAL TABLE) to reestablish the association. Call the GET_DDL function to retrieve a DDL statement to recreate each of the external tables.

  • Any pipes that reference the stage stop loading data. The execution status of the pipes changes to STOPPED_STAGE_DROPPED. To resume loading data, these pipe objects must be recreated (using the CREATE OR REPLACE PIPE syntax).

CREATE STAGE does not check whether the specified URL or credentials are valid. If the credentials are not valid, when you attempt to use the stage, the system returns an error.

Regarding metadata:

Attention

Customers should ensure that no personal data (other than for a User object), sensitive data, export-controlled data, or other regulated data is entered as metadata when using the Snowflake service. For more information, see Metadata Fields in Snowflake.

Examples

Internal stages

Create an internal stage and specify server-side encryption for the stage:

CREATE STAGE my_int_stage
  ENCRYPTION = (TYPE = 'SNOWFLAKE_SSE');

Create a temporary internal stage with the default properties:

CREATE TEMPORARY STAGE my_temp_int_stage;

External stages

Amazon S3

Create an external stage using a private/protected S3 bucket named load with a folder path named files. Secure access to the S3 bucket is provided via the myint storage integration:

CREATE STAGE my_ext_stage
  URL='s3://load/files/'
  STORAGE_INTEGRATION = myint;

Create an external stage using a private/protected S3 bucket named load with a folder path named files. The Snowflake access permissions for the S3 bucket are associated with an IAM user; therefore, IAM credentials are required:

CREATE STAGE my_ext_stage1
  URL='s3://load/files/'
  CREDENTIALS=(AWS_KEY_ID='1a2b3c' AWS_SECRET_KEY='4x5y6z');

Note that the AWS_KEY_ID and AWS_SECRET_KEY values used in this example are for illustration purposes only.

Create an external stage using an S3 bucket named load with a folder path named encrypted_files and client-side encryption (default encryption type) with the master key to decrypt/encrypt files stored in the bucket:

CREATE STAGE my_ext_stage2
  URL='s3://load/encrypted_files/'
  CREDENTIALS=(AWS_KEY_ID='1a2b3c' AWS_SECRET_KEY='4x5y6z')
  ENCRYPTION=(MASTER_KEY = 'eSx...');

Create an external stage using an S3 bucket named load with a folder path named encrypted_files and AWS_SSE_KMS server-side encryption with the ID for the master key to decrypt/encrypt files stored in the bucket:

CREATE STAGE my_ext_stage3
  URL='s3://load/encrypted_files/'
  CREDENTIALS=(AWS_KEY_ID='1a2b3c' AWS_SECRET_KEY='4x5y6z')
  ENCRYPTION=(TYPE='AWS_SSE_KMS' KMS_KEY_ID = 'aws/key');

Same as the immediately preceding example, except that the Snowflake access permissions for the S3 bucket are associated with an IAM role instead of an IAM user. Note that credentials are handled separately from other stage parameters such as ENCRYPTION. Support for these other parameters is the same regardless of the credentials used to access your external S3 bucket:

CREATE STAGE my_ext_stage3
  URL='s3://load/encrypted_files/'
  CREDENTIALS=(AWS_ROLE='arn:aws:iam::001234567890:role/mysnowflakerole')
  ENCRYPTION=(TYPE='AWS_SSE_KMS' KMS_KEY_ID = 'aws/key');

Create a stage with a directory table in the active schema for the user session. The cloud storage URL includes the path files. The stage references a storage integration named my_storage_int:

CREATE STAGE mystage
  URL='s3://load/files/'
  STORAGE_INTEGRATION = my_storage_int
  DIRECTORY = (
    ENABLE = true
    AUTO_REFRESH = true
  );

Google Cloud Storage

Create an external stage using a private/protected GCS bucket named load with a folder path named files. Secure access to the GCS bucket is provided via the myint storage integration:

CREATE STAGE my_ext_stage
  URL='gcs://load/files/'
  STORAGE_INTEGRATION = myint;

Create a stage named mystage with a directory table in the active schema for the user session. The cloud storage URL includes the path files. The stage references a storage integration named my_storage_int:

CREATE STAGE mystage
  URL='gcs://load/files/'
  STORAGE_INTEGRATION = my_storage_int
  DIRECTORY = (
    ENABLE = true
    AUTO_REFRESH = true
    NOTIFICATION_INTEGRATION = 'MY_NOTIFICATION_INT'
  );

Microsoft Azure

Create an external stage using a private/protected Azure container named load with a folder path named files. Secure access to the container is provided via the myint storage integration:

CREATE STAGE my_ext_stage
  URL='azure://myaccount.blob.core.windows.net/load/files/'
  STORAGE_INTEGRATION = myint;

Create an external stage using an Azure storage account named myaccount and a container named mycontainer with a folder path named files and client-side encryption enabled:

CREATE STAGE mystage
  URL='azure://myaccount.blob.core.windows.net/mycontainer/files/'
  CREDENTIALS=(AZURE_SAS_TOKEN='?sv=2016-05-31&ss=b&srt=sco&sp=rwdl&se=2018-06-27T10:05:50Z&st=2017-06-27T02:05:50Z&spr=https,http&sig=bgqQwoXwxzuD2GJfagRg7VOS8hzNr3QLT7rhS8OFRLQ%3D')
  ENCRYPTION=(TYPE='AZURE_CSE' MASTER_KEY = 'kPx...');

(The AZURE_SAS_TOKEN and MASTER_KEY values used in this example are not actual values; they are provided for illustration purposes only.)

Create a stage with a directory table in the active schema for the user session. The cloud storage URL includes the path files. The stage references a storage integration named my_storage_int:

CREATE STAGE mystage
  URL='azure://myaccount.blob.core.windows.net/load/files/'
  STORAGE_INTEGRATION = my_storage_int
  DIRECTORY = (
    ENABLE = true
    AUTO_REFRESH = true
    NOTIFICATION_INTEGRATION = 'MY_NOTIFICATION_INT'
  );