Option 3: Configuring AWS IAM User Credentials to Access Amazon S3

This section describes how to configure a security policy for an S3 bucket and access credentials for a specific IAM user to access an external stage in a secure manner.

Step 1: Configure an S3 Bucket Access Policy

AWS Access Control Requirements

Snowflake requires the following permissions on an S3 bucket and folder to be able to access files in the folder (and any sub-folders):

  • s3:GetObject

  • s3:GetObjectVersion

  • s3:ListBucket

Note

The additional s3:PutObject and s3:DeleteObject permissions are required only if you plan to unload files to the bucket or automatically purge the files after loading them into a table.
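For a load-only stage that never unloads or purges files, the first statement in the policy shown later in this step can therefore be reduced to the read permissions. A sketch of that reduced statement, using the same placeholders:

```json
{
    "Effect": "Allow",
    "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
    ],
    "Resource": "arn:aws:s3:::<bucket_name>/<prefix>/*"
}
```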

As a best practice, Snowflake recommends creating an IAM policy and user for Snowflake access to the S3 bucket. You can then attach the policy to the user and use the security credentials generated by AWS for the user to access files in the bucket.

Creating an IAM Policy

The following step-by-step instructions describe how to configure access permissions for Snowflake in your AWS Management Console so that you can use an S3 bucket to load and unload data:

  1. Log into the AWS Management Console.

  2. From the home dashboard, choose Identity & Access Management (IAM).

  3. Choose Account settings from the left-hand navigation pane.

  4. Expand the Security Token Service Regions list, find the AWS region where your Snowflake account is located, and choose Activate if the status is Inactive.

  5. Choose Policies from the left-hand navigation pane.

  6. Click Create Policy.

  7. Click the JSON tab.

  8. Add the policy document that will allow Snowflake to access the S3 bucket and folder.

    The following policy (in JSON format) provides Snowflake with the required access permissions for the specified bucket and folder path. You can copy and paste the text into the policy editor:

    Note

    Make sure to replace <bucket_name> and <prefix> with your actual bucket name and folder path prefix.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                  "s3:PutObject",
                  "s3:GetObject",
                  "s3:GetObjectVersion",
                  "s3:DeleteObject",
                  "s3:DeleteObjectVersion"
                ],
                "Resource": "arn:aws:s3:::<bucket_name>/<prefix>/*"
            },
            {
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": "arn:aws:s3:::<bucket_name>",
                "Condition": {
                    "StringLike": {
                        "s3:prefix": [
                            "<prefix>/*"
                        ]
                    }
                }
            }
        ]
    }
    

Important

Setting the "s3:prefix" condition to ["*"] grants access to all prefixes in the specified bucket. If more than 1000 objects exist in the bucket, you could encounter the following error: Access Denied (Status Code: 403; Error Code: AccessDenied).

To avoid the error, remove the following condition from the IAM policy:

"Condition": {
      "StringLike": {
          "s3:prefix": [
              "*"
          ]
      }
  }

The policy still grants access to the files in the bucket, but S3 does not return an error if more than 1000 objects exist in the bucket.

  9. Click Review policy.

  10. Enter the policy name (e.g. snowflake_access) and an optional description. Then, click Create policy to create the policy.

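If you maintain policies for several buckets, the policy document above can also be generated programmatically rather than edited by hand. A minimal Python sketch (the function name and the example bucket/prefix are illustrative, not part of AWS or Snowflake):

```python
import json

def make_snowflake_policy(bucket_name, prefix):
    """Build the Snowflake access policy document as a Python dict.

    bucket_name and prefix are the same placeholders used in the
    policy shown above; supply your own values.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Object-level permissions for load, unload, and purge
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject",
                    "s3:GetObject",
                    "s3:GetObjectVersion",
                    "s3:DeleteObject",
                    "s3:DeleteObjectVersion",
                ],
                "Resource": f"arn:aws:s3:::{bucket_name}/{prefix}/*",
            },
            {
                # Bucket-level listing, restricted to the folder path
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket_name}",
                "Condition": {"StringLike": {"s3:prefix": [f"{prefix}/*"]}},
            },
        ],
    }

# Serialize to the JSON text you paste into the policy editor
print(json.dumps(make_snowflake_policy("mybucket", "load/files"), indent=4))
```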

Step 2: Create an AWS IAM User

  1. Choose Users from the left-hand navigation pane, then click Add user.

  2. On the Add user page, enter a new user name (e.g. snowflake1). Select Programmatic access as the access type, then click Next.

  3. Click Attach existing policies directly, and select the policy you created earlier. Then click Next.

  4. Review the user details, then click Create user.

  5. Record the access credentials. The easiest way to record them is to click Download Credentials to write them to a file (e.g. credentials.csv).

    Attention

    Once you leave this page, the Secret Access Key will no longer be available anywhere in the AWS console. If you lose the key, you must generate a new set of credentials for the user.
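The downloaded credentials file can be read programmatically when you script stage creation. A short Python sketch; the column names below are an assumption (they vary by AWS console version), so check them against your actual file:

```python
import csv
import io

# Example contents of a downloaded credentials file. The exact column
# headers depend on the AWS console version; adjust as needed.
sample = """User name,Access key ID,Secret access key
snowflake1,1a2b3c,4x5y6z
"""

def read_credentials(text):
    """Return (access_key_id, secret_access_key) from a credentials CSV."""
    row = next(csv.DictReader(io.StringIO(text)))
    return row["Access key ID"], row["Secret access key"]

key_id, secret = read_credentials(sample)
# These two values feed the CREDENTIALS clause of CREATE STAGE in Step 3
print(key_id, secret)
```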

You have now:

  • Created an IAM policy for a bucket.

  • Created an IAM user and generated access credentials for the user.

  • Attached the policy to the user.

With the AWS access key and secret key for the IAM user, you have the credentials necessary to access your S3 bucket in Snowflake using an external stage.

Step 3: Create an External (i.e. S3) Stage

Create an external stage that references the AWS credentials you created.

Create the stage using the CREATE STAGE command, or alter an existing external stage and set the CREDENTIALS option.

Note

Credentials are handled separately from other stage parameters such as ENCRYPTION and FILE_FORMAT. Support for these other parameters is the same regardless of the credentials used to access your external S3 bucket.

For example, set mydb.public as the current database and schema for the user session, and then create a stage named my_S3_stage. In this example, the stage references the S3 bucket and path mybucket/load/files. Files in the S3 bucket are encrypted with server-side encryption (AWS_SSE_KMS):

USE SCHEMA mydb.public;

CREATE OR REPLACE STAGE my_S3_stage
  URL='s3://mybucket/load/files/'
  CREDENTIALS=(AWS_KEY_ID='1a2b3c' AWS_SECRET_KEY='4x5y6z')
  ENCRYPTION=(TYPE='AWS_SSE_KMS' KMS_KEY_ID = 'aws/key');
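To set credentials on an existing external stage instead of creating a new one, use ALTER STAGE. The key values shown are placeholders:

```sql
ALTER STAGE my_S3_stage
  SET CREDENTIALS=(AWS_KEY_ID='1a2b3c' AWS_SECRET_KEY='4x5y6z');
```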

Next: AWS Data File Encryption