Snowpark Container Services: Google Cloud Instance Families

Instance families on Google Cloud Platform (GCP) are grouped into three types:

  • General Compute (GEN): Best price-performance for general-purpose containerized workloads.

  • High Memory (MEM): High memory-to-vCPU ratio for applications that require large amounts of RAM, such as CPU-based model serving, large-scale in-memory data processing, and vector index serving.

  • GPU Accelerated (GPU): For machine learning training, inference, and AI workloads requiring GPU acceleration.

For pricing information, see the Snowflake Service Consumption Table.

Note

Region availability is subject to change. To retrieve current availability and instance family specifications programmatically, use SHOW COMPUTE POOL INSTANCE FAMILIES.
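For example, you can run the command and then filter its output with RESULT_SCAN. This is a sketch: the exact output column names of SHOW COMPUTE POOL INSTANCE FAMILIES may vary by Snowflake version, so the `"name"` column referenced below is an assumption.

```sql
-- List instance family specs and availability for the current region
SHOW COMPUTE POOL INSTANCE FAMILIES;

-- Filter the output of the last SHOW command
-- (the "name" column is an assumed output column)
SELECT *
  FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()))
  WHERE "name" ILIKE 'GPU%';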

Current Generation Instance Families

General Compute Instance Families (Current Generation)

Current generation x86 instances offering the best price-performance for general-purpose workloads.

| Instance Family | vCPU | Memory (GiB) | Storage (GB) | Bandwidth limit (Gbps) | Node limit | Region Availability |
|---|---|---|---|---|---|---|
| CPU_X64_XS | 1 | 6 | 100 | 10.0 | 500 | Available everywhere |
| CPU_X64_S | 3 | 13 | 100 | 10.0 | 500 | Available everywhere |
| CPU_X64_M | 6 | 28 | 100 | 16.0 | 500 | Available everywhere |
| CPU_X64_SL | 14 | 58 | 100 | 32.0 | 500 | Available everywhere |
| CPU_X64_L | 28 | 116 | 100 | 32.0 | 500 | Available everywhere |
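To use one of these families, name it in the INSTANCE_FAMILY clause of CREATE COMPUTE POOL. A minimal sketch (the pool name and node counts below are placeholders):

```sql
-- Create a small general-purpose compute pool
CREATE COMPUTE POOL general_pool
  MIN_NODES = 1
  MAX_NODES = 3
  INSTANCE_FAMILY = CPU_X64_S;
```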

High Memory Instance Families (Current Generation)

Current generation x86 instances optimized for memory-intensive workloads.

| Instance Family | vCPU | Memory (GiB) | Storage (GB) | Bandwidth limit (Gbps) | Node limit | Region Availability |
|---|---|---|---|---|---|---|
| HIGHMEM_X64_S | 6 | 58 | 100 | 16.0 | 500 | Available everywhere |
| HIGHMEM_X64_M | 28 | 240 | 100 | 32.0 | 500 | Available everywhere |
| HIGHMEM_X64_SL | 92 | 654 | 100 | 67.0 | 500 | Not available in me-central2 |
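Because high-memory nodes are comparatively expensive, it can make sense to let an idle pool suspend automatically. A sketch using the AUTO_SUSPEND_SECS parameter of CREATE COMPUTE POOL (pool name and values are placeholders):

```sql
-- Memory-heavy pool that suspends after 5 idle minutes
CREATE COMPUTE POOL highmem_pool
  MIN_NODES = 1
  MAX_NODES = 2
  INSTANCE_FAMILY = HIGHMEM_X64_M
  AUTO_SUSPEND_SECS = 300;
```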

GPU Accelerated Instance Families (Current Generation)

GCP GPU instance families feature two NVIDIA GPU architectures, each suited to different AI and ML workloads.

NVIDIA L4

Ada Lovelace GPU for efficient AI inference and media workloads.

| Instance Family | vCPU | Memory (GiB) | Storage (GB) | Bandwidth limit (Gbps) | GPU | GPU Memory per GPU (GB) | Node limit | Region Availability |
|---|---|---|---|---|---|---|---|---|
| GPU_GCP_NV_L4_1_24G | 6 | 28 | 100 | 16.0 | 1 NVIDIA L4 | 24 | 10 | Available everywhere |
| GPU_GCP_NV_L4_4_24G | 44 | 178 | 100 | 50.0 | 4 NVIDIA L4 | 24 | 10 | Available everywhere |
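Note that GPU families have much lower node limits than CPU families. A sketch of a pool sized up to the L4 family's 10-node limit (pool name is a placeholder):

```sql
-- Single-L4 pool for inference workloads
CREATE COMPUTE POOL l4_inference_pool
  MIN_NODES = 1
  MAX_NODES = 10  -- node limit for L4 families is 10
  INSTANCE_FAMILY = GPU_GCP_NV_L4_1_24G;
```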

NVIDIA A100

High-throughput Ampere GPU for large-scale model training and large dataset processing.

| Instance Family | vCPU | Memory (GiB) | Storage (GB) | Bandwidth limit (Gbps) | GPU | GPU Memory per GPU (GB) | Node limit | Region Availability |
|---|---|---|---|---|---|---|---|---|
| GPU_A100_G1_12 | 10 | 77 | 100 | 10.0 | 1 NVIDIA A100 | 40 | On Request | Only available in us-central1 and europe-west4 |
| GPU_A100_G1_48 | 44 | 324 | 100 | 50.0 | 4 NVIDIA A100 | 40 | On Request | Only available in us-central1 and europe-west4 |
| GPU_GCP_NV_A100_8_40G | 92 | 654 | 100 | 100.0 | 8 NVIDIA A100 | 40 | On Request | Only available in us-central1 and europe-west4 |

Previous Generation Instance Families

There are no previous generation instance families on Google Cloud. All listed instance families are current generation.