- FileOperation.put(local_file_name: str, stage_location: str, *, parallel: int = 4, auto_compress: bool = True, source_compression: str = 'AUTO_DETECT', overwrite: bool = False, statement_params: Optional[Dict[str, str]] = None) → List[PutResult]
Uploads local files to the stage.
References: Snowflake PUT command.
>>> # Create a temp stage.
>>> _ = session.sql("create or replace temp stage mystage").collect()
>>> # Upload a file to a stage.
>>> put_result = session.file.put("tests/resources/t*.csv", "@mystage/prefix1")
>>> put_result[0].status
'UPLOADED'
local_file_name – The path to the local files to upload. To match multiple files in the path, you can specify the wildcard characters `*` and `?`.
stage_location – The stage and prefix where you want to upload the files.
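Because `local_file_name` accepts shell-style wildcards, it can help to preview locally which file names a pattern will match before calling `put`. A minimal sketch using Python's standard `fnmatch`, with hypothetical file names:

```python
from fnmatch import fnmatch

# Hypothetical local file names; fnmatch mirrors the shell-style
# wildcard matching used in the local path pattern (* and ?).
files = ["t1.csv", "t2.csv", "trace.csv", "data.csv"]
matched = [f for f in files if fnmatch(f, "t*.csv")]
print(matched)  # -> ['t1.csv', 't2.csv', 'trace.csv']
```

Note that `*` matches any run of characters, so `t*.csv` picks up `trace.csv` as well as `t1.csv` and `t2.csv`.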
parallel – Specifies the number of threads to use for uploading files. The upload process separates batches of data files by size:
Small files (< 64 MB compressed or uncompressed) are staged in parallel as individual files.
Larger files are automatically split into chunks, staged concurrently, and reassembled in the target stage. A single thread can upload multiple chunks.
Increasing the number of threads can improve performance when uploading large files. Supported values: any integer from 1 (no parallelism) to 99 (maximum parallelism).
auto_compress – Specifies whether Snowflake uses gzip to compress files during upload.
source_compression – Specifies the method of compression used on already-compressed files that are being staged. Values can be ‘AUTO_DETECT’, ‘GZIP’, ‘BZ2’, ‘BROTLI’, ‘ZSTD’, ‘DEFLATE’, ‘RAW_DEFLATE’, ‘NONE’.
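Since `source_compression` only accepts the fixed set of values listed above, a small client-side check can catch typos before the statement is sent to Snowflake. A sketch with a hypothetical helper (`check_source_compression` is not part of the Snowpark API):

```python
# Allowed source_compression values, per the parameter description above.
ALLOWED = {
    "AUTO_DETECT", "GZIP", "BZ2", "BROTLI",
    "ZSTD", "DEFLATE", "RAW_DEFLATE", "NONE",
}

def check_source_compression(value: str) -> str:
    """Normalize and validate a source_compression value before calling put()."""
    normalized = value.strip().upper()
    if normalized not in ALLOWED:
        raise ValueError(f"unsupported source_compression: {value!r}")
    return normalized

print(check_source_compression("gzip"))  # -> GZIP
```

The normalized value can then be passed straight through as the `source_compression` argument.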
overwrite – Specifies whether Snowflake will overwrite an existing file with the same name during upload.
statement_params – Dictionary of statement level parameters to be set while executing this action.
Returns: A list of PutResult instances, each of which represents the result of an uploaded file.
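Because the return value is a list with one PutResult per matched file, a common follow-up is to verify that every file actually uploaded. A sketch using hypothetical stand-in records (the real PutResult exposes fields such as `source`, `target`, and `status`):

```python
from collections import namedtuple

# Stand-in for PutResult; field names mirror those on the real object.
FakePutResult = namedtuple("FakePutResult", ["source", "target", "status"])

# Hypothetical results, as if put() had matched two local files.
results = [
    FakePutResult("t1.csv", "t1.csv.gz", "UPLOADED"),
    FakePutResult("t2.csv", "t2.csv.gz", "SKIPPED"),
]

# Collect any files whose status is not UPLOADED (e.g. SKIPPED when
# overwrite=False and the file already exists on the stage).
failed = [r.source for r in results if r.status != "UPLOADED"]
print(failed)  # -> ['t2.csv']
```

An empty `failed` list means every matched file reported `UPLOADED`.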