snowflake.core.stage.StageResource¶
- class snowflake.core.stage.StageResource(name: str, collection: StageCollection)¶
Bases: SchemaObjectReferenceMixin[StageCollection]
Represents a reference to a Snowflake stage.
With this stage reference, you can drop the stage, list its files, put (upload) files to it, get (download) files from it, and fetch information about it.
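A stage reference is typically obtained from the stages collection of a schema. A minimal sketch, assuming a snowflake.core Root object named root and an existing stage my_stage in my_db.my_schema:
>>> stage_reference = root.databases["my_db"].schemas["my_schema"].stages["my_stage"]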
Attributes
- database¶
- fully_qualified_name¶
- root¶
Methods
- delete() None ¶
The delete method is deprecated; use drop instead.
- download_file(stage_path: str, file_folder_path: str) None ¶
The download_file method is deprecated; use get instead.
- drop(if_exists: bool | None = None) None ¶
Drop this stage.
- Parameters:
if_exists (bool, optional) – Check the existence of this stage before dropping it. Default is None, which is equivalent to False.
Examples
Dropping a stage using its reference:
>>> stage_reference.drop()
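Dropping a stage only if it exists, using the if_exists parameter described above (illustrative):
>>> stage_reference.drop(if_exists=True)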
- fetch() Stage ¶
Fetch the details of a stage.
Examples
Fetching a stage through its reference to print its name:
>>> my_stage = stage_reference.fetch()
>>> print(my_stage.name)
- get(stage_location: str, target_directory: str | PathLike, *, parallel: int = 4, pattern: str | None = None) None ¶
Download the specified files from a path in the stage to a local directory.
References: Snowflake GET command.
- Parameters:
stage_location (str) – A directory or filename on a stage from which you want to download the files, e.g. /folder/file_name.txt or /folder.
target_directory (str, PathLike) – The path to the local directory where the files should be downloaded. If target_directory does not already exist, the method creates the directory.
parallel (int, optional) – Specifies the number of threads to use for downloading the files. The granularity unit for downloading is one file. Increasing the number of threads might improve performance when downloading large files. Valid values: Any integer value from 1 (no parallelism) to 99 (use 99 threads for downloading files).
pattern (str, optional) – Specifies a regular expression pattern for filtering files to download. The command lists all files in the specified path and applies the regular expression pattern on each of the files found. Default: None (all files in the specified stage are downloaded).
Examples
Getting a file from the stage:
>>> stage_reference.get("/folder/file_name.txt", "/local_folder")
Getting files with a specific pattern:
>>> stage_reference.get("/folder", "/local_folder", pattern=".*.txt")
- list_files(*, pattern: str | None = None) Iterator[StageFile] ¶
List files in the stage, filtering on any optional ‘pattern’.
- Parameters:
pattern (str, optional) – Specifies a regular expression pattern for filtering files from the output.
Examples
Listing all files in the stage:
>>> files = stage_reference.list_files()
Listing files with a specific pattern:
>>> files = stage_reference.list_files(pattern=".*.txt")
Using a for loop to retrieve information from the iterator:
>>> for file in files:
...     print(file.name)
- put(local_file_name: str | PathLike, stage_location: str, *, parallel: int = 4, auto_compress: bool = True, source_compression: str = 'AUTO_DETECT', overwrite: bool = False) None ¶
Upload local files to a path in the stage.
References: Snowflake PUT command.
- Parameters:
local_file_name (str, PathLike) – The path to the local files to upload. To match multiple files in the path, you can specify the wildcard characters * and ?.
stage_location (str) – The prefix where you want to upload the files, e.g. /folder or /.
parallel (int, optional) –
Specifies the number of threads to use for uploading files. The upload process separates batches of data files by size:
Small files (< 64 MB) are staged in parallel as individual files.
Larger files are automatically split into chunks, staged concurrently, and reassembled in the target stage. A single thread can upload multiple chunks.
Increasing the number of threads can improve performance when uploading large files. Supported values: Any integer value from 1 (no parallelism) to 99 (use 99 threads for uploading files).
auto_compress (boolean, optional) – Specifies whether Snowflake uses gzip to compress files during upload. Default is True.
source_compression (str, optional) – Specifies the method of compression used on already-compressed files that are being staged. Values can be AUTO_DETECT, GZIP, BZ2, BROTLI, ZSTD, DEFLATE, RAW_DEFLATE, or NONE. Default is AUTO_DETECT.
overwrite (boolean, optional) – Specifies whether Snowflake will overwrite an existing file with the same name during upload. Default is False.
Examples
Putting a file on the stage and compressing it, using the stage's reference:
>>> stage_reference.put("local_file.csv", "/folder", auto_compress=True)
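Uploading an already gzip-compressed file without re-compressing it and overwriting any existing file of the same name, an illustrative combination of the parameters described above:
>>> stage_reference.put("local_file.csv.gz", "/folder", auto_compress=False, source_compression="GZIP", overwrite=True)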
- upload_file(file_path: str, stage_folder_path: str, *, auto_compress: bool = True, overwrite: bool = False) None ¶
The upload_file method is deprecated; use put instead.