Using block storage volumes with services¶
Snowflake supports these storage volume types for your containerized applications: Snowflake internal stage, local storage, memory, and block storage volumes. This section describes how to use block storage volumes.
Specifying block storage in service specification¶
To create a service that uses block storage, you provide the necessary configuration in the service specification as follows:
Specify the spec.volumes field to define the block storage volumes to create:

volumes:
- name: <name>
  source: block
  size: <size in Gi>
  blockConfig:              # optional
    initialContents:
      fromSnapshot: <snapshot name>
    iops: <number-of-operations>
    throughput: <MiB per second>
The following fields are required:

name
: Name of the volume.

source
: Type of the volume. For a block storage volume, the value is block.

size
: Storage capacity of the block storage volume, measured in bytes. The value must always be an integer, specified using the Gi unit suffix; for example, 5Gi means 5*1024*1024*1024 bytes. The size value can range from 1Gi to 16384Gi.
The following fields are optional:

blockConfig.initialContents.fromSnapshot
: Specifies the name of a snapshot (explained in the following sections) taken from another volume; the snapshot is used to initialize the block storage volume. The snapshot name can be a fully qualified object identifier, such as TUTORIAL_DB.DATA_SCHEMA.MY_SNAPSHOT. If the name is not fully qualified, it is resolved relative to the database and schema of the service. For example, if you created your service in TUTORIAL_DB.DATA_SCHEMA, then fromSnapshot: MY_SNAPSHOT is equivalent to fromSnapshot: TUTORIAL_DB.DATA_SCHEMA.MY_SNAPSHOT.

blockConfig.iops
: Specifies the peak number of input/output operations per second to support. Note that the data size per operation is capped at 256 KiB.
The supported range for AWS is 3000-16000, with a default of 3000.
The supported range for Azure is 3000-80000, with a default of 3000.

blockConfig.throughput
: Specifies the peak throughput, in MiB/second, to provision for the volume.
The supported range for AWS is 125-1000, with a default of 125.
The supported range for Azure is 125-1200, with a default of 125.
For example:
volumes:
- name: vol-1
  source: block
  size: 200Gi
  blockConfig:
    initialContents:
      fromSnapshot: snapshot1
    iops: 3000
    throughput: 125
Specify the spec.containers.volumeMounts field to describe where in your application containers to mount the block storage volumes. The information you provide in this field is the same for all supported storage volume types.
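Putting the two pieces together, the following is a minimal sketch of a CREATE SERVICE statement with an inline specification that defines a block storage volume and mounts it in a container. The service name, compute pool, image path, volume name, and mount path are placeholder values:

CREATE SERVICE my_block_service
  IN COMPUTE POOL my_compute_pool
  FROM SPECIFICATION $$
    spec:
      containers:
      - name: main
        image: /tutorial_db/data_schema/tutorial_repository/my_app:latest
        volumeMounts:
        - name: block-vol       # must match a volume name under spec.volumes
          mountPath: /opt/data  # path inside the container where the volume is mounted
      volumes:
      - name: block-vol
        source: block
        size: 100Gi
  $$;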
Access control requirements¶
If you want to use an existing snapshot to initialize the volume (that is, fromSnapshot is in the specification), the service’s owner role must have the USAGE privilege on the snapshot.
The service’s owner role must also have the USAGE privilege on the database and schema that contain the snapshot.
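For example, assuming the snapshot lives in TUTORIAL_DB.DATA_SCHEMA and the service is owned by a hypothetical role named service_role, the grants might look like the following sketch (the GRANT ... ON SNAPSHOT form follows Snowflake’s usual schema-object grant syntax; check the GRANT <privileges> reference for the exact keyword):

-- Allow the service's owner role to resolve the snapshot's database and schema.
GRANT USAGE ON DATABASE tutorial_db TO ROLE service_role;
GRANT USAGE ON SCHEMA tutorial_db.data_schema TO ROLE service_role;

-- Allow the service's owner role to use the snapshot itself.
GRANT USAGE ON SNAPSHOT tutorial_db.data_schema.my_snapshot TO ROLE service_role;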
Managing snapshots¶
You can take snapshots of your block storage volumes and later use a snapshot as follows:
Use the snapshot backup to restore an existing block storage volume.
Use the snapshot backup as seed data to initialize a new block storage volume when creating a new service.
Ensure that all of your updates are flushed to disk before you take a snapshot.
Snowflake provides the following commands to create and manage snapshots: CREATE SNAPSHOT, ALTER SNAPSHOT, DROP SNAPSHOT, DESCRIBE SNAPSHOT, and SHOW SNAPSHOTS.
In addition, to restore a snapshot on an existing block storage volume, you can execute the ALTER SERVICE … RESTORE VOLUME command. Note that you need to suspend the service before you can restore a snapshot. After restoring a volume, the service is automatically resumed.
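As a rough sketch of that workflow (object names are placeholders, and the exact clause ordering of the snapshot and restore commands may differ from what is shown here; check the CREATE SNAPSHOT and ALTER SERVICE references):

-- Take a snapshot of the block storage volume attached to instance 0 of the service.
CREATE SNAPSHOT my_snapshot
  FROM SERVICE my_block_service
  VOLUME "block-vol"
  INSTANCE 0;

-- A snapshot can be restored only onto a suspended service.
ALTER SERVICE my_block_service SUSPEND;

-- Restore the volume from the snapshot; the service is resumed automatically afterward.
ALTER SERVICE my_block_service RESTORE VOLUME "block-vol"
  FROM SNAPSHOT my_snapshot
  INSTANCES 0;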
Block storage costs¶
You pay for the block storage volumes that your services use. For more information, see the Snowflake Service Consumption Table.
Example¶
For an example, see Tutorial. The tutorial provides step-by-step instructions to create a service with a block storage volume mounted.
Guidelines and limitations¶
The following restrictions apply to services that use block storage volumes:
General limitations. If you encounter any issues with these limitations, contact your account representative.
The block storage volume size value can range from 1Gi to 16384Gi.
Each service can support up to three block volumes. This refers to spec.volumes in the service specification.
The total number of block storage volumes per Snowflake account is limited to 10.
The number of snapshots allowed per Snowflake account is 100.
A service that uses block storage volumes must have the same minimum and maximum number of instances.
After the service is created, the following apply:
You can’t change the number of service instances using the ALTER SERVICE … SET … command when a service is using block storage volumes.
You can’t change the size, iops, or throughput fields of block storage volumes.
No new block storage volumes can be added, and no existing block storage volumes can be removed.
Block storage volumes are preserved if a service is upgraded, or suspended and resumed. When a service is suspended, you continue to pay for the volume because it is preserved. After you upgrade or resume a service, Snowflake attaches each block storage volume to the same service instance ID as before.
Block storage volumes are deleted if a service is dropped. To preserve the data in the volumes, take snapshots of the volumes. You can use the snapshots later to initialize new volumes.
Block storage volumes do not support Tri-Secret Secure and periodic rekeying. This means that if your account has enabled Tri-Secret Secure or periodic rekeying, while all other Snowflake data will continue to have this added security, any data stored in your Snowpark Container Services block storage volumes will not benefit from it.
To create a block storage volume in an account with Tri-Secret Secure or periodic rekeying, you must first confirm that you understand and agree to continue without the benefit of this additional security for your block storage volumes. To confirm agreement, an account administrator (a user with the ACCOUNTADMIN role) must set the account-level parameter ENABLE_TRI_SECRET_AND_REKEY_OPT_OUT_FOR_SPCS_BLOCK_STORAGE to TRUE.
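For example, the opt-out might be applied as follows (a minimal sketch; the parameter name is the one given above, and ALTER ACCOUNT SET is the standard way to set account-level parameters):

-- Must be run by a user with the ACCOUNTADMIN role.
ALTER ACCOUNT SET ENABLE_TRI_SECRET_AND_REKEY_OPT_OUT_FOR_SPCS_BLOCK_STORAGE = TRUE;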
IOPS- and throughput-related guidelines.
If your service’s I/O performance does not meet your expectations and the service is limited by block volume IOPS or throughput, you might consider increasing IOPS or throughput. In the current implementation, any such change requires you to recreate the service.
You can review these available platform metrics to identify whether your service is bottlenecked on block storage (an example query follows the list):
container.cpu.usage
volume.read.iops
volume.write.iops
volume.read.throughput
volume.write.throughput
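These metrics are published to your account’s event table. The following query is a rough sketch, assuming an event table named my_event_table and the standard event-table column layout (both assumptions); it pulls the volume metrics for a hypothetical service named MY_BLOCK_SERVICE:

SELECT
  timestamp,
  record:metric.name::string AS metric_name,
  value
FROM my_event_table
WHERE record_type = 'METRIC'
  AND resource_attributes:"snow.service.name"::string = 'MY_BLOCK_SERVICE'
  AND record:metric.name::string IN (
    'volume.read.iops', 'volume.write.iops',
    'volume.read.throughput', 'volume.write.throughput')
ORDER BY timestamp DESC;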
For AWS:
The maximum IOPS that can be configured is 500 IOPS per GiB of volume size, up to a maximum of 16,000 IOPS. For example, the maximum IOPS of a 10 GiB volume is 500 * 10 = 5000. Accordingly, the maximum of 16,000 IOPS can be configured only if your volume is 32 GiB or larger.
The maximum throughput that can be configured is 1 MiB/second for every 4 IOPS, up to a maximum of 1,000 MiB/second. For example, with the default 3000 IOPS you can configure throughput of up to 750 MiB/second (3000/4 = 750).
For Azure:
Beyond a volume size of 6 GiB, the supported number of IOPS increases by 500 for each additional GiB (see the Azure documentation on disk types). For example, the maximum IOPS of a 10 GiB volume is 500 * 4 + 3000 = 5000. Accordingly, the maximum of 80,000 IOPS can be configured only if your volume is 160 GiB or larger.
Beyond 6 GiB, the maximum throughput that can be configured is 0.25 MiB/second for every IOPS, up to a maximum of 1,200 MiB/second. For example, with the default 3000 IOPS you can configure throughput of up to 750 MiB/second (3000 * 0.25 = 750).