Working with DataFrames in Snowpark Python¶
In Snowpark, the main way in which you query and process data is through a DataFrame. This topic explains how to work with DataFrames.
To retrieve and manipulate data, you use the DataFrame class. A
DataFrame represents a relational dataset that is evaluated lazily: it only executes when a specific action is triggered. In a
sense, a DataFrame is like a query that needs to be evaluated in order to retrieve data.
To retrieve data into a DataFrame:
Construct a DataFrame, specifying the source of the data for the dataset.
For example, you can create a DataFrame to hold data from a table, an external CSV file, local data, or the execution of a SQL statement.
Specify how the dataset in the DataFrame should be transformed.
For example, you can specify which columns should be selected, how the rows should be filtered, how the results should be sorted and grouped, etc.
Execute the statement to retrieve the data into the DataFrame.
In order to retrieve the data into the DataFrame, you must invoke a method that performs an action (for example, the collect() method).
The next sections explain these steps in more detail.
Setting up the Examples for this Section¶
Some of the examples in this section use a DataFrame to query a table named sample_product_data. If you want to run these
examples, you can create this table and fill the table with some data by executing the following SQL statements.
You can run the SQL statements using Snowpark Python:
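A minimal sketch, assuming an existing session object. Only the columns that later examples reference are included here; the table definition and rows are illustrative, not the full sample data:

```python
# Create the sample table. The "3rd" column name requires double quotes
# because it does not start with a letter or an underscore.
session.sql("""
    CREATE OR REPLACE TABLE sample_product_data
        (id INT, parent_id INT, name VARCHAR, serial_number VARCHAR,
         key INT, "3rd" INT)
""").collect()

# Fill the table with some illustrative data.
session.sql("""
    INSERT INTO sample_product_data VALUES
        (1, 0, 'Product 1', 'prod-1', 1, 10),
        (2, 1, 'Product 1A', 'prod-1-A', 1, 20),
        (3, 1, 'Product 1B', 'prod-1-B', 1, 30)
""").collect()
```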
To verify that the table was created, run:
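For example (a sketch, reusing the same session object):

```python
# Verify that the table exists and contains rows.
session.sql("SELECT count(*) FROM sample_product_data").collect()
```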
Setting up the Examples in a Python Worksheet¶
To set up and run these examples in a Python worksheet, create the sample table and set up your Python worksheet.
Create a SQL worksheet and run the following:
Create a Python worksheet, setting the same database and schema context as the SQL worksheet that you used to create the
sample_product_datatable.
If you want to use the examples in this topic in a Python worksheet, use the example within the handler function (e.g. main),
and use the Session object that is passed into the function to create DataFrames.
For example, call the table method of the session object to create a DataFrame for a table:
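For example, a minimal handler sketch (the handler name main matches the worksheet default; the table is the sample table created earlier):

```python
import snowflake.snowpark as snowpark

def main(session: snowpark.Session):
    # Use the session object passed into the handler to create a DataFrame.
    df_table = session.table("sample_product_data")
    # Returning the DataFrame displays it on the Results tab.
    return df_table
```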
To review the output produced by the function, such as by calling the show method of the DataFrame object, use the Output tab.
To examine the value returned by the function, choose the data type of the return value from Settings » Return type, and use the Results tab:
If your function returns a DataFrame, use the default return type of Table.
If your function returns the list of Row objects from the collect method of a DataFrame object, use Variant for the return type.
If your function returns any other value that can be cast to a string, or if your function does not return a value, use String as the return type.
Refer to Running Python Worksheets for more details.
Constructing a DataFrame¶
To construct a DataFrame, you can use the methods and properties of the Session class. Each of the following
methods constructs a DataFrame from a different type of data source.
You can run these examples in your local development environment
or call them within the main function defined in a Python worksheet.
To create a DataFrame from data in a table, view, or stream, call the table method:
To create a DataFrame from specified values, call the create_dataframe method:
Create a DataFrame with 4 columns, “a”, “b”, “c” and “d”:
Create another DataFrame with 4 columns, “a”, “b”, “c” and “d”:
Create a DataFrame and specify a schema:
To create a DataFrame containing a range of values, call the range method:
To create a DataFrame to hold the data from a file in a stage, use the read property to get a DataFrameReader object. In the DataFrameReader object, call the method corresponding to the format of the data in the file:
To create a DataFrame to hold the results of a SQL query, call the sql method:
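The following is a combined, minimal sketch of these constructors. It assumes the sample_product_data table from earlier; the stage name my_stage and the file name data.csv are hypothetical:

```python
from snowflake.snowpark.types import IntegerType, StringType, StructField, StructType

# From a table, view, or stream:
df_table = session.table("sample_product_data")

# From specified values (4 columns, "a", "b", "c" and "d"):
df1 = session.create_dataframe([[1, 2, 3, 4]], schema=["a", "b", "c", "d"])
df2 = session.create_dataframe([[1, 2, 3, 4], [5, 6, 7, 8]], schema=["a", "b", "c", "d"])

# From specified values with an explicit schema:
schema = StructType([StructField("a", IntegerType()), StructField("b", StringType())])
df3 = session.create_dataframe([[1, "snow"], [3, "flake"]], schema=schema)

# From a range of values (1, 3, 5, 7, 9):
df_range = session.range(1, 10, 2)

# From a staged CSV file, using a DataFrameReader:
df_csv = session.read.schema(schema).csv("@my_stage/data.csv")

# From the results of a SQL query:
df_sql = session.sql("SELECT id, name FROM sample_product_data")
```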
It is possible to use the sql method to execute SELECT statements that retrieve data from tables and staged files,
but using the table method and read property offers better syntax highlighting, error highlighting, and
intelligent code completion in development tools.
Specifying How the Dataset Should Be Transformed¶
To specify which columns should be selected and how the results should be filtered, sorted, grouped, etc., call the DataFrame methods that transform the dataset.
To identify columns in these methods, use the col function or an expression that
evaluates to a column. Refer to Specifying Columns and Expressions.
For example:
To specify which rows should be returned, call the filter method:
To specify the columns that should be selected, call the select method:
You can also reference columns like this:
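For example, a sketch using the sample table:

```python
from snowflake.snowpark.functions import col

df = session.table("sample_product_data")

# Specify which rows should be returned:
df_filtered = df.filter(col("id") == 1)

# Specify which columns should be selected:
df_selected = df.select(col("id"), col("name"), col("serial_number"))

# You can also reference columns through the DataFrame itself:
df_selected2 = df.select(df["id"], df.name)
```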
Each method returns a new DataFrame object that has been transformed. The method does not affect the original DataFrame object. If you want to apply multiple transformations, you can chain method calls, calling each subsequent transformation method on the new DataFrame object returned by the previous method call.
These transformation methods specify how to construct the SQL statement and do not retrieve data from the Snowflake database. The action methods described in Performing an Action to Evaluate a DataFrame perform the data retrieval.
Joining DataFrames¶
To join DataFrame objects, call the join method:
If both DataFrames have the same column to join on, you can use the following example syntax:
You can also use the & operator to connect join expressions:
If you want to perform a self-join, you must copy the DataFrame:
When there are overlapping columns in the DataFrames, Snowpark prepends a randomly generated prefix to the columns in the join result:
You can rename the overlapping columns using Column.alias:
To avoid random prefixes, you can also specify a suffix to append to the overlapping columns:
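A sketch of these join patterns. The DataFrames df_lhs and df_rhs and their value columns are hypothetical; each is assumed to contain a key column:

```python
# Join on a column that has the same name in both DataFrames:
df_joined = df_lhs.join(df_rhs, "key")

# Use & to connect join expressions:
df_joined2 = df_lhs.join(
    df_rhs,
    (df_lhs.col("key") == df_rhs.col("key")) & (df_lhs.col("value") < df_rhs.col("value")),
)

# Rename overlapping columns with Column.alias:
df_joined3 = df_lhs.join(df_rhs, "key").select(
    df_lhs.col("value").alias("left_value"),
    df_rhs.col("value").alias("right_value"),
)

# Or specify suffixes to append to the overlapping columns:
df_joined4 = df_lhs.join(df_rhs, "key", lsuffix="_left", rsuffix="_right")
```

For the self-join case mentioned above, see the copy() sketch later in this section.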
These examples use DataFrame.col to specify the columns to use in the join.
Refer to Specifying Columns and Expressions for more ways to specify columns.
If you need to join a table with itself on different columns, you cannot perform the self-join with a single DataFrame. The
following examples use a single DataFrame to perform a self-join, which fails because the column expressions for "id" are
present in the left and right sides of the join:
Instead, use the copy() function from Python’s built-in copy module to create a clone of the DataFrame object, and use the two DataFrame
objects to perform the join:
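A sketch of the failing and working approaches, using the sample table (its parent_id column comes from the setup sketch earlier in this topic):

```python
from copy import copy
from snowflake.snowpark.functions import col

df = session.table("sample_product_data")

# Fails: the column expressions for "id" appear on both sides of the join.
# df.join(df, col("id") == col("parent_id"))

# Works: clone the DataFrame and join the two objects.
df_clone = copy(df)
df_self_joined = df.join(df_clone, df.col("id") == df_clone.col("parent_id"))
```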
Specifying Columns and Expressions¶
When calling these transformation methods, you might need to specify columns or expressions that use columns. For example, when
calling the select method, you need to specify the columns to select.
To refer to a column, create a Column object by calling the col function in the
snowflake.snowpark.functions module.
Note
To create a Column object for a literal, refer to Using Literals as Column Objects.
When specifying a filter, projection, join condition, etc., you can use Column objects in an expression. For example:
You can use Column objects with the filter method to specify a filter condition:
You can use Column objects with the select method to define an alias:
You can use Column objects with the join method to define a join condition:
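For example, a sketch using the sample table (df_rhs is a hypothetical second DataFrame):

```python
from snowflake.snowpark.functions import col

df = session.table("sample_product_data")

# A filter condition built from Column objects:
df_filtered = df.filter((col("id") == 20) | (col("id") <= 10))

# An alias defined on a Column object in a select:
df_aliased = df.select((col("id") * 10).as_("id_times_ten"))

# A join condition built from Column objects:
# df_joined = df.join(df_rhs, df.col("id") == df_rhs.col("id"))
```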
When referring to columns in two different DataFrame objects that have the same name (for example, joining the DataFrames on that
column), you can use the DataFrame.col method in one DataFrame object to refer to a column in that object (for example,
df1.col("name") and df2.col("name")).
The following example demonstrates how to use the DataFrame.col method to refer to a column in a specific DataFrame. The
example joins two DataFrame objects that both have a column named key. The example uses the Column.as_ method to change
the names of the columns in the newly created DataFrame.
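A sketch of that example. The DataFrames df_lhs and df_rhs are hypothetical; each is assumed to have a key column and a value column:

```python
df_joined = df_lhs.join(df_rhs, df_lhs.col("key") == df_rhs.col("key")).select(
    df_lhs.col("key").as_("key"),
    df_lhs.col("value").as_("L"),
    df_rhs.col("value").as_("R"),
)
```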
Using Double Quotes Around Object Identifiers (Table Names, Column Names, etc.)¶
The names of databases, schemas, tables, and stages that you specify must conform to the Snowflake identifier requirements.
Create a table that has case-sensitive columns:
Then add values to the table:
Then create a DataFrame for the table and query the table:
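A sketch of those three steps; the table name "10tablename" and the column names are illustrative:

```python
# Create a table with case-sensitive (double-quoted) column names.
session.sql("""
    CREATE OR REPLACE TABLE "10tablename"
        (id123 VARCHAR, "3rdID" VARCHAR, "id with space" VARCHAR)
""").collect()

# Add values to the table.
session.sql("""
    INSERT INTO "10tablename" (id123, "3rdID", "id with space")
        VALUES ('a', 'b', 'c')
""").collect()

# Create a DataFrame for the table and query the table.
df = session.table('"10tablename"')
df.show()
```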
When you specify a name, Snowflake considers the name to be in upper case. For example, the following calls are equivalent:
If the name does not conform to the identifier requirements, you must use double quotes (") around the name. Use a backslash
(\) to escape the double quote character within a string literal. For example, the following table name does not start
with a letter or an underscore, so you must use double quotes around the name:
Alternatively, you can use single quotes instead of backslashes to escape the double quote character within a string literal.
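For example, both of the following calls quote the table name "10tablename":

```python
# Escape the double quotes with backslashes ...
df = session.table("\"10tablename\"")

# ... or use single quotes around the string literal instead.
df = session.table('"10tablename"')
```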
Note that when specifying the name of a Column, you don’t need to use double quotes around the name. The Snowpark library automatically encloses the column name in double quotes for you if the name does not comply with the identifier requirements:
As another example, the following calls are equivalent:
If you have already added double quotes around a column name, the library does not insert additional double quotes around the name.
In some cases, the column name might contain double quote characters:
As explained in Identifier requirements, for each double quote character within a double-quoted identifier, you
must use two double quote characters (e.g. "name_with_""air""_quotes" and """column_name_quoted"""):
When an identifier is enclosed in double quotes (whether you explicitly added the quotes or the library added the quotes for you), Snowflake treats the identifier as case-sensitive:
Compared with this example:
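A combined sketch of this quoting behavior, reusing the illustrative table from above (the names with embedded quote characters are illustrative as well):

```python
from snowflake.snowpark.functions import col

df = session.table('"10tablename"')

# Unquoted names are treated as uppercase, so these calls are equivalent:
df.select(col("id123"))
df.select(col("ID123"))

# These calls are also equivalent: the library adds the double quotes for you
# because "3rdID" does not comply with the identifier requirements.
df.select(col("3rdID"))
df.select(col('"3rdID"'))

# For a double quote inside a quoted identifier, use two double quote characters:
# col('"name_with_""air""_quotes"'), col('"""column_name_quoted"""')

# Quoted identifiers are case-sensitive:
df.select(col('"id with space"'))     # matches the column as defined
# df.select(col('"ID WITH SPACE"'))   # would NOT match
```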
Using Literals as Column Objects¶
To use a literal in a method that takes a Column object as an argument, create a Column object for the literal by passing
the literal to the lit function in the snowflake.snowpark.functions module. For example:
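For example, a sketch using the sample table:

```python
from snowflake.snowpark.functions import col, lit

df = session.table("sample_product_data")

# Pass the literal 1 as a Column object to a method that expects one.
df_filtered = df.filter(col("id") == lit(1))
```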
Casting a Column Object to a Specific Type¶
To cast a Column object to a specific type, call the cast method, and pass in a type object from the
snowflake.snowpark.types module. For example, to cast a literal
as a NUMBER with a precision of 5 and a scale of 2:
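A sketch of that cast:

```python
from snowflake.snowpark.functions import lit
from snowflake.snowpark.types import DecimalType

# Cast the literal 0.05 to a NUMBER with a precision of 5 and a scale of 2.
cast_value = lit(0.05).cast(DecimalType(5, 2))
```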
Chaining Method Calls¶
Because each method that transforms a DataFrame object returns a new DataFrame object that has the transformation applied, you can chain method calls to produce a new DataFrame that is transformed in additional ways.
The following example returns a DataFrame that is configured to:
Query the sample_product_data table.
Return the row with id = 1.
Select the name and serial_number columns.
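A sketch of that chain:

```python
from snowflake.snowpark.functions import col

df_product_info = (
    session.table("sample_product_data")
    .filter(col("id") == 1)
    .select(col("name"), col("serial_number"))
)
df_product_info.show()
```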
In this example:
session.table("sample_product_data")returns a DataFrame for thesample_product_datatable.Although the DataFrame does not yet contain the data from the table, the object does contain the definitions of the columns in the table.
filter(col("id") == 1)returns a DataFrame for thesample_product_datatable that is set up to return the row withid = 1.Note that the DataFrame does not yet contain the matching row from the table. The matching row is not retrieved until you call an action method.
select(col("name"), col("serial_number"))returns a DataFrame that contains thenameandserial_numbercolumns for the row in thesample_product_datatable that hasid = 1.
The order of calls is important when you chain method calls. Each method call returns a DataFrame that has been transformed. Make sure that subsequent calls work with the transformed DataFrame.
When using Snowpark Python, you might need to make the select and filter method calls in a different order than you would
use the equivalent keywords (SELECT and WHERE) in a SQL statement.
Retrieving Column Definitions¶
To retrieve the definition of the columns in the dataset for the DataFrame, call the schema property. This property returns
a StructType object that contains a list of StructField objects. Each StructField object
contains the definition of a column.
In the returned StructType object, the column names are always normalized. Unquoted identifiers are returned in uppercase,
and quoted identifiers are returned in the exact case in which they were defined.
The following example creates a DataFrame containing the columns named ID and 3rd. For the column name 3rd, the
Snowpark library automatically encloses the name in double quotes ("3rd") because
the name does not comply with the requirements for an identifier.
The example calls the schema property and then calls the names property on the returned StructType object to
get a list of column names. The names are normalized in the StructType returned by the schema property.
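A sketch of that example:

```python
# The column name "3rd" is automatically enclosed in double quotes.
df = session.create_dataframe([[1, 2], [3, 4]], schema=["id", "3rd"])

# No query is executed; the schema is available without calling an action.
print(df.schema.names)  # expected output: ['ID', '"3rd"']
```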
Performing an Action to Evaluate a DataFrame¶
As mentioned earlier, the DataFrame is lazily evaluated, which means the SQL statement isn’t sent to the server for execution until you perform an action. An action causes the DataFrame to be evaluated and sends the corresponding SQL statement to the server for execution.
The following methods perform an action:
| Class | Method | Description |
|---|---|---|
| DataFrame | collect | Evaluates the DataFrame and returns the resulting dataset as a list of Row objects. |
| DataFrame | count | Evaluates the DataFrame and returns the number of rows. |
| DataFrame | show | Evaluates the DataFrame and prints the rows to the console. This method limits the number of rows to 10 (by default). |
| DataFrameWriter | save_as_table | Saves the data in the DataFrame to the specified table. Refer to Saving Data to a Table. |
For example, to execute a query against a table and return the results, call the collect method:
To execute the query and return the number of results, call the count method:
To execute a query and print the results to the console, call the show method:
To limit the number of rows to 20:
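A combined sketch of these actions, using the sample table:

```python
df = session.table("sample_product_data")

# Execute the query and return the results as a list of Row objects.
rows = df.collect()

# Execute the query and return the number of rows.
num_rows = df.count()

# Execute the query and print the results (10 rows by default).
df.show()

# Limit the number of printed rows to 20.
df.show(20)
```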
Note
If you call the schema property to get the definitions of the columns in the DataFrame, you do not need to
call an action method.
Saving Data to a Table¶
To save the contents of a DataFrame to a table:
Call the write property to get a DataFrameWriter object.
Call the mode method in the DataFrameWriter object and specify the mode. For more information, see the API documentation. This method returns a new DataFrameWriter object that is configured with the specified mode.
Call the save_as_table method in the DataFrameWriter object to save the contents of the DataFrame to a specified table.
Note that you do not need to call a separate method (e.g. collect) to execute the SQL statement that saves the data to the
table.
For example:
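A sketch that saves a small DataFrame to a hypothetical table named table1:

```python
df = session.create_dataframe([[1, 2], [3, 4]], schema=["a", "b"])

# mode() returns a configured DataFrameWriter; save_as_table() executes
# the SQL statement immediately (no separate action call is needed).
df.write.mode("overwrite").save_as_table("table1")
```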
Creating a View From a DataFrame¶
To create a view from a DataFrame, call the create_or_replace_view method, which immediately creates the new view:
In a Python worksheet, because you run the worksheet in the context of a database and schema, you can run the following to create a view:
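For example, a sketch (the view name is illustrative):

```python
df = session.table("sample_product_data")

# create_or_replace_view executes immediately.
df.create_or_replace_view("sample_product_data_view")
```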
Views that you create by calling create_or_replace_view are persistent. If you no longer need that view, you can
drop the view manually.
Alternatively, use the create_or_replace_temp_view method, which creates a temporary view.
The temporary view is only available in the session in which it is created.
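For example, a sketch reusing the DataFrame from above (the view name is illustrative):

```python
# The temporary view exists only for the current session.
df.create_or_replace_temp_view("sample_product_data_temp_view")
```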
Working With Files in a Stage¶
This section explains how to query data in a file in a Snowflake stage. For other operations on files, use SQL statements.
To query data in files in a Snowflake stage, use the DataFrameReader class:
Call the read property in the Session class to access a DataFrameReader object.
If the files are in CSV format, describe the fields in the file. To do this:
Create a StructType object that consists of a list of StructField objects that describe the fields in the file.
For each StructField object, specify the following:
The name of the field.
The data type of the field (specified as an object in the snowflake.snowpark.types module).
Whether or not the field is nullable.
For example:
Call the schema method in the DataFrameReader object, passing in the StructType object. For example:
The schema method returns a DataFrameReader object that is configured to read files containing the specified fields.
Note that you do not need to do this for files in other formats (such as JSON). For those files, the DataFrameReader treats the data as a single field of the VARIANT type with the field name $1.
If you need to specify additional information about how the data should be read (for example, that the data is compressed or that a CSV file uses a semicolon instead of a comma to delimit fields), call the option or options methods of the DataFrameReader object.
The option method takes a name and a value of the option that you want to set and lets you combine multiple chained calls, whereas the options method takes a dictionary of the names of options and their corresponding values.
For the names and values of the file format options, see the documentation on CREATE FILE FORMAT.
You can also set the copy options described in the COPY INTO TABLE documentation. Note that setting copy options can result in a more expensive execution strategy when you retrieve the data into the DataFrame.
The following example sets up the DataFrameReader object to query data in a CSV file that is not compressed and that uses a semicolon for the field delimiter.
The option and options methods return a DataFrameReader object that is configured with the specified options.
Call the method corresponding to the format of the file (e.g. the csv method), passing in the location of the file.
The methods corresponding to the format of a file return a DataFrame object that is configured to hold the data in that file.
Use the DataFrame object methods to perform any transformations needed on the dataset (for example, selecting specific fields, filtering rows, etc.).
For example, to extract the color element from a JSON file in the stage named my_stage:
As explained earlier, for files in formats other than CSV (e.g. JSON), the DataFrameReader treats the data in the file as a single VARIANT column with the name $1.
This example uses the sql_expr function in the snowflake.snowpark.functions module to specify the path to the color element.
Note that the sql_expr function does not interpret or modify the input argument. The function just allows you to construct expressions and snippets in SQL that are not yet supported by the Snowpark API.
Call an action method to query the data in the file. A combined sketch of these steps appears after this list.
As is the case with DataFrames for tables, the data is not retrieved into the DataFrame until you call an action method.
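Putting the steps above together, a sketch that reads a CSV file and a JSON file from a hypothetical stage named my_stage (the file names and fields are illustrative):

```python
from snowflake.snowpark.functions import sql_expr
from snowflake.snowpark.types import IntegerType, StringType, StructField, StructType

# Describe the fields in the CSV file: name, type, and nullability.
user_schema = StructType([
    StructField("id", IntegerType(), nullable=True),
    StructField("name", StringType(), nullable=True),
])

# Configure the reader with the schema and options, then point it at the file.
df_csv = (
    session.read
    .schema(user_schema)
    .option("compression", "none")
    .option("field_delimiter", ";")
    .csv("@my_stage/data.csv")
)

# For JSON, the data is a single VARIANT column named $1; use sql_expr
# to extract the color element.
df_json = session.read.json("@my_stage/data.json").select(sql_expr("$1:color"))

# The data is not retrieved until an action method is called.
df_csv.collect()
df_json.collect()
```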
Working with Semi-Structured Data¶
Using a DataFrame, you can query and access semi-structured data (e.g. JSON data). The next sections explain how to work with semi-structured data in a DataFrame.
Note
The examples in these sections use the sample data in Sample Data Used in Examples.
Traversing Semi-Structured Data¶
To refer to a specific field or element in semi-structured data, use the following methods of the Column object:
Use col_object["<field_name>"] to return a Column object for a field in an OBJECT (or a VARIANT that contains an OBJECT).
Use col_object[<index>] to return a Column object for an element in an ARRAY (or a VARIANT that contains an ARRAY).
Note
If the field name or elements in the path are irregular and make it difficult to use the indexing described above, you can
use get, get_ignore_case, or get_path as an alternative.
For example, the following code selects the dealership field in objects in the src column of the
sample data:
The code prints the following output:
Note
The values in the DataFrame are surrounded by double quotes because these values are returned as string literals. To cast these values to a specific type, see Explicitly Casting Values in Semi-Structured Data.
You can also chain method calls to traverse a path to a specific field or element.
For example, the following code selects the name field in the salesperson object:
The code prints the following output:
As another example, the following code selects the first element of the vehicle field, which holds an array of vehicles. The
example also selects the price field from the first element.
The code prints the following output:
As an alternative to accessing fields in the manner described above, you can use the get, get_ignore_case, or
get_path functions if the field name or elements in the path are irregular.
For example, the following lines of code both print the value of a specified field in an object:
Similarly, the following lines of code both print the value of a field at a specified path in an object:
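A combined sketch of these traversals. It assumes the sample data is in a table named car_sales with a single VARIANT column named src (the table and column names are assumptions based on the linked sample data):

```python
from snowflake.snowpark.functions import col, get, get_path, lit

df = session.table("car_sales")

# Select a field of the OBJECT in the src column.
df.select(col("src")["dealership"]).show()

# Chain accesses to traverse a path to a nested field.
df.select(col("src")["salesperson"]["name"]).show()

# Index into the vehicle array, and access a field of the first element.
df.select(col("src")["vehicle"][0], col("src")["vehicle"][0]["price"]).show()

# Equivalent accesses using the get and get_path functions.
df.select(get(col("src"), lit("dealership"))).show()
df.select(get_path(col("src"), lit("vehicle[0].price"))).show()
```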
Explicitly Casting Values in Semi-Structured Data¶
By default, the values of fields and elements are returned as string literals (including the double quotes), as shown in the examples above.
To avoid unexpected results, call the cast method to cast the value to a specific type. For example, the following code prints out the values without and with casting:
The code prints the following output:
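A sketch of that comparison, reusing the assumed car_sales table:

```python
from snowflake.snowpark.functions import col
from snowflake.snowpark.types import StringType

df = session.table("car_sales")

# Without casting, the value is returned as a string literal, with quotes.
df.select(col("src")["dealership"]).show()

# With casting, the value is returned as the requested type.
df.select(col("src")["dealership"].cast(StringType())).show()
```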
Flattening an Array of Objects into Rows¶
If you need to “flatten” semi-structured data into a DataFrame (e.g. producing a row for every object in an array), call the
flatten table function by using the join_table_function method. This method is equivalent to the FLATTEN SQL function. If you pass in
a path to an object or array, the method returns a DataFrame that contains a row for each field or element in the object or array.
For example, in the sample data, src:customer is an array of objects that
contain information about a customer. Each object contains a name and address field.
If you pass this path to the flatten function:
the method returns a DataFrame:
From this DataFrame, you can select the name and address fields from each object in the VALUE field:
The following code adds to the previous example by casting the values to a specific type and changing the names of the columns:
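A sketch of the flattening, selection, casting, and renaming steps, reusing the assumed car_sales table:

```python
from snowflake.snowpark.functions import col
from snowflake.snowpark.types import StringType

df = session.table("car_sales")

# Produce a row for each object in the src:customer array.
df_flattened = df.join_table_function("flatten", input=col("src")["customer"])

# Select the name and address fields from each object in the VALUE column,
# cast them to strings, and rename the columns.
df_flattened.select(
    col("value")["name"].cast(StringType()).as_("Customer Name"),
    col("value")["address"].cast(StringType()).as_("Customer Address"),
).show()
```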
Executing SQL Statements¶
To execute a SQL statement that you specify, call the sql method in the Session class, and pass in the statement
to be executed. The method returns a DataFrame.
Note that the SQL statement won’t be executed until you call an action method.
If you want to call methods to transform the DataFrame
(e.g. filter, select, etc.),
note that these methods work only if the underlying SQL statement is a SELECT statement. The transformation methods are not
supported for other kinds of SQL statements.
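For example, a sketch using the sample table:

```python
from snowflake.snowpark.functions import col

# The statement is not executed until an action method is called.
df = session.sql("SELECT id, name FROM sample_product_data WHERE id > 1")

# Transformations work here because the underlying statement is a SELECT.
df.filter(col("id") < 4).select(col("name")).collect()
```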
Submit Snowpark queries concurrently¶
Note
This feature requires Snowpark Library for Python version 1.24 or greater and server version 8.46 or greater.
Thread-safe session objects allow different parts of your Snowpark Python code to run concurrently while using the same session. This enables multiple operations - such as transformations on multiple DataFrames - to be executed concurrently. This is particularly useful when you’re working with queries that can be processed independently on the Snowflake server and it aligns with a more traditional multithreading approach.
The Global Interpreter Lock (GIL) in Python is a mutex that protects access to Python objects, preventing multiple native threads from executing Python bytecode simultaneously. While I/O-bound operations can still benefit from Python’s threading model due to the GIL being released during I/O operations, CPU-bound threads will not achieve true parallelism because only one thread can execute at a time.
Moreover, when used inside Snowflake (e.g. in a stored procedure), the Snowpark Python server manages the Global Interpreter Lock (GIL) by releasing it before submitting queries to Snowflake. This ensures that true concurrency can be achieved when enqueuing multiple queries from separate threads. With this management, Snowpark allows multiple threads to submit queries concurrently, ensuring optimal parallel execution.
Benefits of Using Thread-Safe Session Objects in Snowpark¶
The ability to run multiple DataFrame operations concurrently can bring the following benefits to Snowpark users:
Improved Performance: Thread-safe session objects allow you to run multiple Snowpark Python queries concurrently, reducing overall runtime. For example, if you need to process several tables independently, this feature significantly cuts down the time it takes to complete the job, as you no longer need to wait for each table’s processing to finish before starting the next one.
Efficient Compute Utilization: Submitting queries concurrently ensures that Snowflake’s compute resources are used efficiently, reducing idle times.
Usability: Thread-safe session objects integrate seamlessly with Python’s native multithreading APIs, which allows developers to leverage Python’s built-in tools to control thread behavior and optimize parallel execution.
Thread-safe session objects and async jobs can complement each other depending on your use case. Async jobs are useful when you don’t need to wait for your jobs to finish, allowing for non-blocking execution without thread pool management. Thread-safe session objects, on the other hand, are useful when you want to submit multiple queries concurrently from the client side. In some cases, the code blocks can also contain async jobs, allowing both methods to be used together effectively.
Following are some examples where thread-safe session objects can enhance your data pipeline.
Example 1: Concurrent Loading of Multiple Tables¶
This example demonstrates loading data from three different CSV files into three separate tables using three threads to run the COPY INTO command concurrently.
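A sketch using Python's standard thread pool; the stage, file, and table names are hypothetical, and the target tables are assumed to exist:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical (file, table) pairs to load concurrently.
targets = [
    ("@my_stage/customers.csv", "customers"),
    ("@my_stage/orders.csv", "orders"),
    ("@my_stage/products.csv", "products"),
]

def load_csv(file_path: str, table_name: str) -> None:
    # Each thread submits its own COPY INTO command through the shared,
    # thread-safe session object.
    session.sql(
        f"COPY INTO {table_name} FROM {file_path} FILE_FORMAT = (TYPE = CSV)"
    ).collect()

with ThreadPoolExecutor(max_workers=3) as executor:
    for file_path, table_name in targets:
        executor.submit(load_csv, file_path, table_name)
```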
Example 2: Concurrent Processing of Multiple Tables¶
This example demonstrates how you can use multiple threads to concurrently filter, aggregate, and insert data into a result table from each customer transaction table (transaction_customer1, transaction_customer2, and transaction_customer3).
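A sketch of that pipeline; the customer_id and amount columns and the transaction_summary result table are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor
from snowflake.snowpark.functions import col, sum as sum_

tables = ["transaction_customer1", "transaction_customer2", "transaction_customer3"]

def process_table(table_name: str) -> None:
    # Filter, aggregate, and append the result to a shared result table.
    (
        session.table(table_name)
        .filter(col("amount") > 0)
        .group_by(col("customer_id"))
        .agg(sum_(col("amount")).as_("total_amount"))
        .write.mode("append")
        .save_as_table("transaction_summary")
    )

with ThreadPoolExecutor(max_workers=3) as executor:
    executor.map(process_table, tables)
```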
Limitations of Using Thread-Safe Session Objects¶
If you need to manage multiple transactions concurrently, it’s important to use multiple session objects because multiple threads of a single session do not support concurrent transactions.
Changing session runtime configurations (including Snowflake session variables like database, schema, warehouse, and client side configurations like cte_optimization_enabled, sql_simplifier_enabled) while other threads are active can lead to unexpected behavior. To avoid conflicts, it’s best to use separate session objects if different threads require distinct configurations. For example, if you need to perform operations on different databases in parallel, ensure each thread has its own session object rather than sharing the same session.
Return the Contents of a DataFrame as a Pandas DataFrame¶
To return the contents of a DataFrame as a Pandas DataFrame, use the to_pandas method.
For example:
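A sketch using the sample table:

```python
df = session.table("sample_product_data")

# Executes the query and returns the results as a pandas DataFrame.
pandas_df = df.to_pandas()
```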
Snowpark DataFrames vs Snowpark pandas DataFrame: Which should I choose?¶
By installing the Snowpark Python library, you have the option of using the DataFrames API or pandas on Snowflake.
Snowpark DataFrames are modeled after PySpark, while Snowpark pandas is intended to extend the Snowpark DataFrame functionality and provide a familiar interface to pandas users to facilitate easy migration and adoption. We recommend using the different APIs depending on your use case and preference:
| Use Snowpark pandas if you … | Use Snowpark DataFrames if you … |
|---|---|
| Prefer working with or have existing code written in pandas | Prefer working with or have existing code written in Spark |
| Have a workflow that involves interactive analysis and iterative exploration | Have a workflow that involves batch processing and limited iterative development |
| Are familiar with DataFrame operations that are executed immediately | Are familiar with DataFrame operations that are lazily evaluated |
| Prefer data being consistent and ordered during operations | Are OK with data not being ordered |
| Are OK with slightly slower performance compared to Snowpark DataFrames in favor of an easier-to-use API | Value performance over ease of use |
From an implementation perspective, Snowpark DataFrames and Snowpark pandas DataFrames are semantically different. Because Snowpark DataFrames are modeled after PySpark, they operate on the original data source, always reflect the most recently updated data, and do not maintain order across operations. Snowpark pandas DataFrames are modeled after pandas: they operate on a snapshot of the data, maintain order during operations, and allow for order-based positional indexing. Order maintenance is useful for visual inspection of data in interactive data analysis.
For more information, see Using pandas on Snowflake with Snowpark DataFrames.