Working with DataFrames in Snowpark

In Snowpark, the main way in which you query and process data is through a DataFrame. This topic explains how to work with DataFrames.

To retrieve and manipulate data, you use the DataFrame class. A DataFrame represents a relational dataset that is evaluated lazily: it only executes when a specific action is triggered. In a sense, a DataFrame is like a query that needs to be evaluated in order to retrieve data.

To retrieve data into a DataFrame:

  1. Construct a DataFrame, specifying the source of the data for the dataset.

    For example, you can create a DataFrame to hold data from a table, an external CSV file, or the execution of a SQL statement.

  2. Specify how the dataset in the DataFrame should be transformed.

    For example, you can specify which columns should be selected, how the rows should be filtered, how the results should be sorted and grouped, etc.

  3. Execute the statement to retrieve the data into the DataFrame.

    In order to retrieve the data into the DataFrame, you must invoke a method that performs an action (for example, the collect() method).

The next sections explain these steps in more detail.
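
Putting the three steps together, here is a minimal end-to-end sketch (a hypothetical example that assumes an existing session and a "products" table with "id" and "name" columns):

// Import the col function used in the transformation.
import com.snowflake.snowpark.functions._

// Step 1: Construct a DataFrame from a table.
val dfProducts = session.table("products")
// Step 2: Specify transformations. No query is executed yet.
val dfNames = dfProducts.filter(col("id") === 1).select(col("name"))
// Step 3: Perform an action to execute the query and retrieve the results.
val rows = dfNames.collect()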

Constructing a DataFrame

To construct a DataFrame, you can use methods in the Session class. Each of the following methods constructs a DataFrame from a different type of data source:

  • To create a DataFrame from data in a table, view, or stream, call the table method:

    // Create a DataFrame from the data in the "products" table.
    val dfTable = session.table("products")
    

    Note

    The session.table method returns an Updatable object. Updatable extends DataFrame and provides additional methods for working with data in the table (e.g. methods for updating and deleting data). See Updating, Deleting, and Merging Rows in a Table.

  • To create a DataFrame from a sequence of values, call the createDataFrame method:

    // Create a DataFrame containing a sequence of values.
    // In the DataFrame, name the columns "i" and "s".
    val dfSeq = session.createDataFrame(Seq((1, "one"), (2, "two"))).toDF("i", "s")
    
  • To create a DataFrame containing a range of values, call the range method:

    // Create a DataFrame from a range
    val dfRange = session.range(1, 10, 2)
    
  • To create a DataFrame to hold the data from a file in a stage, call read to get a DataFrameReader object. In the DataFrameReader object, call the method corresponding to the format of the data in the file:

    // Create a DataFrame from data in a stage.
    val dfJson = session.read.json("@mystage2/data1.json")
    
  • To create a DataFrame to hold the results of a SQL query, call the sql method:

    // Create a DataFrame from a SQL query
    val dfSql = session.sql("SELECT name from products")
    

    Note: Although you can use this method to execute SELECT statements that retrieve data from tables and staged files, you should use the table and read methods instead. Methods like table and read can provide better syntax highlighting, error highlighting, and intelligent code completion in development tools.

Specifying How the Dataset Should Be Transformed

To specify which columns should be selected and how the results should be filtered, sorted, grouped, etc., call the DataFrame methods that transform the dataset. To identify columns in these methods, use the col function or an expression that evaluates to a column. (See Specifying Columns and Expressions.)

For example:

  • To specify which rows should be returned, call the filter method:

    // Import the col function from the functions object.
    import com.snowflake.snowpark.functions._

    // Create a DataFrame object for the "products" table.
    val dfProductInfo = session.table("products")

    // Create a DataFrame for the rows with the ID 1
    // in the "products" table.
    //
    // This example uses the === operator of the Column object to perform an
    // equality check.
    val dfProductIdOne = dfProductInfo.filter(col("id") === 1)

  • To specify the columns that should be selected, call the select method:

    // Create a DataFrame that contains the id, name, and serial_number
    // columns.
    val dfProductSerialNo =
        dfProductInfo.select(col("id"), col("name"), col("serial_number"))

Each method returns a new DataFrame object that has been transformed. (The method does not affect the original DataFrame object.) This means that if you want to apply multiple transformations, you can chain method calls, calling each subsequent transformation method on the new DataFrame object returned by the previous method call.

Note that these transformation methods do not retrieve data from the Snowflake database. (The action methods described in Performing an Action to Evaluate a DataFrame perform the data retrieval.) The transformation methods simply specify how the SQL statement should be constructed.
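
For example, each transformation returns a new object and leaves the original unchanged (a minimal sketch, again assuming the "products" table):

import com.snowflake.snowpark.functions._

// Neither statement sends a query to the server.
val dfProducts = session.table("products")
// filter returns a new DataFrame; dfProducts itself is not modified.
val dfFiltered = dfProducts.filter(col("id") === 1)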

Joining DataFrames

To join DataFrame objects, call the join method:

// Create a DataFrame that joins two other DataFrames
val dfJoined = dfLhs.join(dfRhs, dfLhs.col("key") === dfRhs.col("key"))

Note that the example uses the DataFrame.col method to specify the columns to use in the join. See Specifying Columns and Expressions for more about this method.

If you need to join a table with itself on different columns, you cannot perform the self-join with a single DataFrame. The following examples that use a single DataFrame to perform a self-join fail because the column expressions for "id" are present in the left and right sides of the join:

val dfJoined = df.join(df, col("id") === col("parent_id"))

val dfJoined = df.join(df, df("id") === df("parent_id"))

Both of these examples fail with the following exception:

Exception in thread "main" com.snowflake.snowpark.SnowparkClientException:
  Joining a DataFrame to itself can lead to incorrect results due to ambiguity of column references.
  Instead, join this DataFrame to a clone() of itself.

Instead, use the DataFrame.clone() method to create a clone of the DataFrame object, and use the two DataFrame objects to perform the join:

// Create a DataFrame object for the "products" table for the left-hand side of the join.
val dfLhs = session.table("products")
// Clone the DataFrame object to use as the right-hand side of the join.
val dfRhs = dfLhs.clone()

// Create a DataFrame that joins the two DataFrames
// for the "products" table on the "key" column.
val dfJoined = dfLhs.join(dfRhs, dfLhs.col("id") === dfRhs.col("parent_id"))

If you want to perform a self-join on the same column, call the join method that accepts a Seq of column names to use for the USING clause:

// Create a DataFrame that performs a self-join on
// the DataFrame for the "products" table using the "key" column.
val dfJoined = df.join(df, Seq("key"))

Specifying Columns and Expressions

When calling these transformation methods, you might need to specify columns or expressions that use columns. For example, when calling the select method, you need to specify the columns that should be selected.

To refer to a column, create a Column object by calling the col function in the com.snowflake.snowpark.functions object.

// Import the col function from the functions object.
import com.snowflake.snowpark.functions._

val dfProductInfo = session.table("products").select(col("id"), col("name"))

Note

To create a Column object for a literal, see Using Literals as Column Objects.

When specifying a filter, projection, join condition, etc., you can use Column objects in an expression. The following example uses Column objects in expressions to:

  • Retrieve the rows where the value in the id column is 20 and where the sum of the values in the a and b columns is less than 10.

  • Return the value of b multiplied by 10 in the column named c. c is a column alias that is used in the next statement, which joins the DataFrame.

  • Join the DataFrame df with the computed DataFrame dfCompute.

val dfCompute = session.table("T").filter(col("id") === 20).filter((col("a") + col("b")) < 10).select((col("b") * 10) as "c")
val df2 = df.join(dfCompute, col("a") === col("c") && col("a") === col("d"))

When referring to columns in two different DataFrame objects that have the same name (for example, joining the DataFrames on that column), you can use the DataFrame.col method in one DataFrame object to refer to a column in that object (for example, df1.col("name") and df2.col("name")).

The following example demonstrates how to use the DataFrame.col method to refer to a column in a specific DataFrame. The example joins two DataFrame objects that both have a column named key. The example uses the Column.as method to change the names of the columns in the newly created DataFrame.

// Create a DataFrame that joins two other DataFrames (dfLhs and dfRhs).
// Use the DataFrame.col method to refer to the columns used in the join.
val dfJoined = dfLhs.join(dfRhs, dfLhs.col("key") === dfRhs.col("key")).select(dfLhs.col("value").as("L"), dfRhs.col("value").as("R"))

As an alternative to the DataFrame.col method, you can use the DataFrame.apply method to refer to a column in a specific DataFrame. Like the DataFrame.col method, the DataFrame.apply method accepts a column name as input and returns a Column object.

Note that when an object has an apply method in Scala, you can call the apply method by calling the object as if it were a function. For example, to call df.apply("column_name"), you can simply write df("column_name"). The following calls are equivalent:

  • df.col("<column_name>")

  • df.apply("<column_name>")

  • df("<column_name>")

The following example is the same as the previous example but uses the DataFrame.apply method to refer to the columns in a join operation:

// Create a DataFrame that joins two other DataFrames (dfLhs and dfRhs).
// Use the DataFrame.apply method to refer to the columns used in the join.
// Note that dfLhs("key") is shorthand for dfLhs.apply("key").
val dfJoined = dfLhs.join(dfRhs, dfLhs("key") === dfRhs("key")).select(dfLhs("value").as("L"), dfRhs("value").as("R"))

Using Shorthand For a Column Object

As an alternative to using the col function, you can refer to a column in one of these ways:

  • Use a dollar sign in front of the quoted column name ($"column_name").

  • Use an apostrophe (a single quote) in front of the unquoted column name ('column_name).

To do this, import the names from the implicits object after you create a Session object:

val session = Session.builder.configFile("/path/to/properties").create

// Import this after you create the session.
import session.implicits._

// Use the $ (dollar sign) shorthand.
val df = session.table("T").filter($"id" === 10).filter(($"a" + $"b") < 10)

// Use ' (apostrophe) shorthand.
val df = session.table("T").filter('id === 10).filter(('a + 'b) < 10).select('b * 10)

Using Double Quotes Around Object Identifiers (Table Names, Column Names, etc.)

The names of databases, schemas, tables, and stages that you specify must conform to the Snowflake identifier requirements. When you specify a name, Snowflake considers the name to be in upper case. For example, the following calls are equivalent:

// The following calls are equivalent:
df.select(col("id123"))
df.select(col("ID123"))

If the name does not conform to the identifier requirements, you must use double quotes (") around the name. Use a backslash (\) to escape the double quote character within a Scala string literal. For example, the following table name does not start with a letter or an underscore, so you must use double quotes around the name:

val df = session.table("\"10tablename\"")

Note that when specifying the name of a column, you don’t need to use double quotes around the name. The Snowpark library automatically encloses the column name in double quotes for you if the name does not comply with the identifier requirements:

// The following calls are equivalent:
df.select(col("3rdID"))
df.select(col("\"3rdID\""))

// The following calls are equivalent:
df.select(col("id with space"))
df.select(col("\"id with space\""))

If you have already added double quotes around a column name, the library does not insert additional double quotes around the name.

In some cases, the column name might contain double quote characters:

describe table quoted;
+------------------------+ ...
| name                   | ...
|------------------------+ ...
| name_with_"air"_quotes | ...
| "column_name_quoted"   | ...
+------------------------+ ...

As explained in Identifier Requirements, for each double quote character within a double-quoted identifier, you must use two double quote characters (e.g. "name_with_""air""_quotes" and """column_name_quoted"""):

val dfTable = session.table("quoted")
dfTable.select("\"name_with_\"\"air\"\"_quotes\"").show()
dfTable.select("\"\"\"column_name_quoted\"\"\"").show()

Keep in mind that when an identifier is enclosed in double quotes (whether you explicitly added the quotes or the library added the quotes for you), Snowflake treats the identifier as case-sensitive:

// The following calls are NOT equivalent!
// The Snowpark library adds double quotes around the column name,
// which makes Snowflake treat the column name as case-sensitive.
df.select(col("id with space"))
df.select(col("ID WITH SPACE"))

Using Literals as Column Objects

To use a literal in a method that passes in a Column object, create a Column object for the literal by passing the literal to the lit function in the com.snowflake.snowpark.functions object. For example:

// Import for the lit and col functions.
import com.snowflake.snowpark.functions._

// Show the first 10 rows in which num_items is greater than 5.
// Use `lit(5)` to create a Column object for the literal 5.
df.filter(col("num_items").gt(lit(5))).show()

If the literal is a floating point or double value in Scala (e.g. 0.05 is treated as a Double by default), the Snowpark library generates SQL that implicitly casts the value to the corresponding Snowpark data type (e.g. 0.05::DOUBLE). This can produce an approximate value that differs from the exact number specified.

For example, the following code displays no matching rows, even though the filter (which matches values less than or equal to 0.05) should match the row in the DataFrame:

// Create a DataFrame that contains the value 0.05.
val df = session.sql("select 0.05 :: Numeric(5, 2) as a")

// Applying this filter results in no matching rows in the DataFrame.
df.filter(col("a") <= lit(0.06) - lit(0.01)).show()

The problem is that lit(0.06) and lit(0.01) produce approximate values for 0.06 and 0.01, not the exact values.

To avoid this problem, you can use one of the following approaches:

  • Option 1: Cast the literal to the Snowpark type that you want to use. For example, to use a NUMBER with a precision of 5 and a scale of 2:

    // Requires: import com.snowflake.snowpark.types._ (for DecimalType).
    df.filter(col("a") <= lit(0.06).cast(new DecimalType(5, 2)) - lit(0.01).cast(new DecimalType(5, 2))).show()
    
  • Option 2: Cast the value to the type that you want to use before passing the value to the lit function. For example, if you want to use the BigDecimal type:

    df.filter(col("a") <= lit(BigDecimal(0.06)) - lit(BigDecimal(0.01))).show()
    

Casting a Column Object to a Specific Type

To cast a column object to a specific type, call the cast method, and pass in a type object from the com.snowflake.snowpark.types package. For example, to cast a literal as a NUMBER with a precision of 5 and a scale of 2:

// Import for the lit function.
import com.snowflake.snowpark.functions._
// Import for the DecimalType class.
import com.snowflake.snowpark.types._

val decimalValue = lit(0.05).cast(new DecimalType(5,2))
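
The cast method works the same way on table columns, not just literals. For example, a brief sketch (hypothetical; assumes a table "T" with a numeric column "a", and relies on the functions and types imports shown above):

// Cast the "a" column to a string before selecting it.
val dfCast = session.table("T").select(col("a").cast(StringType))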

Chaining Method Calls

Because each method that transforms a DataFrame object returns a new DataFrame object that has the transformation applied, you can chain method calls to produce a new DataFrame that is transformed in additional ways.

The following example returns a DataFrame that is configured to:

  • Query the products table.

  • Return the row with id = 1.

  • Select the name and serial_number columns.

val dfProductInfo = session.table("products").filter(col("id") === 1).select(col("name"), col("serial_number"))

In this example:

  • session.table("products") returns a DataFrame for the products table.

    Although the DataFrame does not yet contain the data from the table, the object does contain the definitions of the columns in the table.

  • filter(col("id") === 1) returns a DataFrame for the products table that is set up to return the row with id = 1.

    Note again that the DataFrame does not yet contain the matching row from the table. The matching row is not retrieved until you call an action method.

  • select(col("name"), col("serial_number")) returns a DataFrame that contains the name and serial_number columns for the row in the products table that has id = 1.

When you chain method calls, keep in mind that the order of calls is important. Each method call returns a DataFrame that has been transformed. Make sure that subsequent calls work with the transformed DataFrame.

For example, in the code below, the select method returns a DataFrame that just contains two columns: name and serial_number. The filter method call on this DataFrame fails because it uses the id column, which is not in the transformed DataFrame.

// This fails with the error "invalid identifier 'ID'."
val dfProductInfo = session.table("products").select(col("name"), col("serial_number")).filter(col("id") === 1)

In contrast, the following code executes successfully because the filter() method is called on a DataFrame that contains all of the columns in the products table (including the id column):

// This succeeds because the DataFrame returned by the table() method
// includes the id column.
val dfProductInfo = session.table("products").filter(col("id") === 1).select(col("name"), col("serial_number"))

Keep in mind that you might need to make the select and filter method calls in a different order than you would use the equivalent keywords (SELECT and WHERE) in a SQL statement.

Retrieving Column Definitions

To retrieve the definition of the columns in the dataset for the DataFrame, call the schema method. This method returns a StructType object that contains an Array of StructField objects. Each StructField object contains the definition of a column.

// Get the StructType object that describes the columns in the
// underlying rowset.
val dfDefinition = session.table("products").schema

In the returned StructType object, the column names are always normalized. Unquoted identifiers are returned in uppercase, and quoted identifiers are returned in the exact case in which they were defined.

The following example returns a DataFrame containing the columns named ID and 3rd. For the column name 3rd, the Snowpark library automatically encloses the name in double quotes ("3rd") because the name does not comply with the requirements for an identifier.

The example calls the schema method and then calls the names method on the returned StructType object to get a Seq of column names. The names are normalized in the StructType returned by the schema method.

// This returns Seq("ID", "\"3rd\"")
df.select(col("id"), col("3rd")).schema.names.toSeq
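
If you need more than the names, you can inspect each column's definition individually. The following is a hedged sketch, assuming that StructType exposes its Array of StructField objects as fields and that each StructField exposes name, dataType, and nullable:

// Print the definition of each column in the "products" table.
session.table("products").schema.fields.foreach { field =>
    println(s"${field.name}: ${field.dataType} (nullable: ${field.nullable})")
}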

Performing an Action to Evaluate a DataFrame

As mentioned earlier, the DataFrame is lazily evaluated, which means the SQL statement isn’t sent to the server for execution until you perform an action. An action causes the DataFrame to be evaluated and sends the corresponding SQL statement to the server for execution.

In this release, the following methods perform an action:

  • DataFrame.collect: Evaluates the DataFrame and returns the resulting dataset as an Array of Row objects.

  • DataFrame.count: Evaluates the DataFrame and returns the number of rows.

  • DataFrame.show: Evaluates the DataFrame and prints the rows to the console. Note that this method limits the number of rows to 10 (by default).

  • DataFrameWriter.saveAsTable: Saves the data in the DataFrame to the specified table. See Saving Data to a Table.

  • Updatable.delete: Deletes rows in the specified table. See Updating, Deleting, and Merging Rows in a Table.

  • Updatable.update: Updates rows in the specified table. See Updating, Deleting, and Merging Rows in a Table.

  • MergeBuilder.collect: Merges rows into the specified table. See Updating, Deleting, and Merging Rows in a Table.

For example, to execute a query against a table and return the results, call the collect method:

// Create a DataFrame for the row in the "products" table with the id 1.
// This does not execute the query.
val dfProductIdOne = session.table("products").filter(col("id") === 1)

// Send the query to the server for execution and
// return an Array of Rows containing the results.
val results = dfProductIdOne.collect()

To execute the query and return the number of results, call the count method:

// Create a DataFrame for the "products" table.
val dfProducts = session.table("products")

// Send the query to the server for execution and
// return the count of rows in the table.
val resultCount = dfProducts.count()

To execute a query and print the results to the console, call the show method:

// Create a DataFrame for the "products" table.
val dfProducts = session.table("products")

// Send the query to the server for execution and
// print the results to the console.
// The query limits the number of rows to 10 by default.
dfProducts.show()

// Limit the number of rows to 20, rather than 10.
dfProducts.show(20)

Note: If you are calling the schema method to get the definitions of the columns in the DataFrame, you do not need to call an action method.

Updating, Deleting, and Merging Rows in a Table

Note

This feature was introduced in Snowpark 0.7.0.

When you call Session.table to create a DataFrame object for a table, the method returns an Updatable object, which extends DataFrame with additional methods for updating and deleting data in the table. (See Updatable.)

If you need to update or delete rows in a table, use the update and delete methods of the Updatable class, as described in the following sections.

Updating Rows in a Table

For the update method, pass in a Map that associates the columns to update with the corresponding values to assign to those columns. update returns an UpdateResult object, which contains the number of rows that were updated. (See UpdateResult.)

Note

update is an action method, which means that calling the method sends SQL statements to the server for execution.

For example, to replace the values in the column named count with the value 1:

val updatableDf = session.table("products")
val updateResult = updatableDf.update(Map("count" -> lit(1)))
println(s"Number of rows updated: ${updateResult.rowsUpdated}")

The example above uses the name of the column to identify the column. You can also use a column expression:

val updateResult = updatableDf.update(Map(col("count") -> lit(1)))

If the update should be made only when a condition is met, you can specify that condition as an argument. For example, to replace the values in the column named count for rows in which the category_id column has the value 20:

val updateResult = updatableDf.update(Map(col("count") -> lit(1)), col("category_id") === 20)

If you need to base the condition on a join with a different DataFrame object, you can pass that DataFrame in as an argument and use that DataFrame in the condition. For example, to replace the values in the column named count for rows in which the category_id column matches the category_id in the DataFrame dfParts:

val updatableDf = session.table("products")
val dfParts = session.table("parts")
val updateResult = updatableDf.update(Map(col("count") -> lit(1)), updatableDf("category_id") === dfParts("category_id"), dfParts)

Deleting Rows in a Table

For the delete method, you can specify a condition that identifies the rows to delete, and you can base that condition on a join with another DataFrame. delete returns a DeleteResult object, which contains the number of rows that were deleted. (See DeleteResult.)

Note

delete is an action method, which means that calling the method sends SQL statements to the server for execution.

For example, to delete the rows in which the category_id column matches the category_id in the DataFrame dfParts:

val updatableDf = session.table("products")
val dfParts = session.table("parts")
val deleteResult = updatableDf.delete(updatableDf("category_id") === dfParts("category_id"), dfParts)
println(s"Number of rows deleted: ${deleteResult.rowsDeleted}")

Merging Rows into a Table

To insert, update, and delete rows in one table based on values in a second table or a subquery (the equivalent of the MERGE command in SQL), do the following:

  1. In the Updatable object for the table where you want the data merged in, call the merge method, passing in the DataFrame object for the other table and the column expression for the join condition.

    This returns a MergeBuilder object that you can use to specify the actions to take (e.g. insert, update, or delete) on the rows that match and the rows that don’t match. (See MergeBuilder.)

  2. Using the MergeBuilder object:

    • To specify the update or deletion that should be performed on matching rows, call the whenMatched method.

      If you need to specify an additional condition when rows should be updated or deleted, you can pass in a column expression for that condition.

      This method returns a MatchedClauseBuilder object that you can use to specify the action to perform. (See MatchedClauseBuilder.)

      Call the update or delete method in the MatchedClauseBuilder object to specify the update or delete action that should be performed on matching rows. These methods return a MergeBuilder object that you can use to specify additional clauses.

    • To specify the insert that should be performed when rows do not match, call the whenNotMatched method.

      If you need to specify an additional condition when rows should be inserted, you can pass in a column expression for that condition.

      This method returns a NotMatchedClauseBuilder object that you can use to specify the action to perform. (See NotMatchedClauseBuilder.)

      Call the insert method in the NotMatchedClauseBuilder object to specify the insert action that should be performed when rows do not match. This method returns a MergeBuilder object that you can use to specify additional clauses.

  3. When you are done specifying the inserts, updates, and deletions that should be performed, call the collect method of the MergeBuilder object to perform the specified inserts, updates, and deletions on the table.

    collect returns a MergeResult object, which contains the number of rows that were inserted, updated, and deleted. (See MergeResult.)

The following example inserts a row with the id and value columns from the source table into the target table if the target table does not contain a row with a matching ID:

val mergeResult = target.merge(source, target("id") === source("id"))
                      .whenNotMatched.insert(Seq(source("id"), source("value")))
                      .collect()

The following example updates a row in the target table with the value of the value column from the row in the source table that has the same ID:

val mergeResult = target.merge(source, target("id") === source("id"))
                      .whenMatched.update(Map("value" -> source("value")))
                      .collect()
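
You can also chain matched and not-matched clauses in a single merge to perform an upsert. The following is a sketch under the same assumptions as the examples above (target and source tables that each have id and value columns, and a MergeResult that exposes the row counts as rowsInserted and rowsUpdated):

// Update matching rows, insert the rest, and execute with collect().
val mergeResult = target.merge(source, target("id") === source("id"))
                      .whenMatched.update(Map("value" -> source("value")))
                      .whenNotMatched.insert(Seq(source("id"), source("value")))
                      .collect()
println(s"Rows inserted: ${mergeResult.rowsInserted}, rows updated: ${mergeResult.rowsUpdated}")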

Saving Data to a Table

To save the contents of a DataFrame to a table:

  1. Call the write method to get a DataFrameWriter object.

  2. Call the mode method in the DataFrameWriter object and specify whether you want to insert rows or update rows in the table. This method returns a new DataFrameWriter object that is configured with the specified mode.

  3. Call the saveAsTable method in the DataFrameWriter object to save the contents of the DataFrame to a specified table.

Note that you do not need to call a separate method (e.g. collect) to execute the SQL statement that saves the data to the table.

For example:

// Import the SaveMode enumeration used by the mode method.
import com.snowflake.snowpark.SaveMode

df.write.mode(SaveMode.Overwrite).saveAsTable(tableName)
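
SaveMode supports other behaviors as well. For example, to append rows rather than overwrite the table (a sketch; assumes the table named by tableName already exists):

// Append the DataFrame's rows to the existing table.
df.write.mode(SaveMode.Append).saveAsTable(tableName)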

Creating a View From a DataFrame

To create a view from a DataFrame, call the createOrReplaceView method:

df.createOrReplaceView("db.schema.viewName")

Note that calling createOrReplaceView immediately creates the new view. However, it does not cause the DataFrame to be evaluated. (The DataFrame itself is not evaluated until you perform an action.)

Views that you create by calling createOrReplaceView are persistent. If you no longer need that view, you can drop the view manually.
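
For example, you can drop the view with a SQL statement (a sketch; as described in Executing SQL Statements, the collect call executes the statement):

// Drop the persistent view when you no longer need it.
session.sql("drop view if exists db.schema.viewName").collect()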

Working With Files in a Stage

This section explains how to query data in a file in a Snowflake stage. For other operations on files, use SQL statements.

To query data in files in a Snowflake stage, use the DataFrameReader class:

  1. Call the read method in the Session class to access a DataFrameReader object.

  2. If the files are in CSV format, describe the fields in the file. To do this:

    1. Create a StructType object that consists of a sequence of StructField objects that describe the fields in the file.

    2. For each StructField object, specify the following:

      • The name of the field.

      • The data type of the field (specified as an object in the com.snowflake.snowpark.types package).

      • Whether or not the field is nullable.

      For example:

      import com.snowflake.snowpark.types._
      
      val schemaForDataFile = StructType(
          Seq(
              StructField("id", StringType, true),
              StructField("name", StringType, true)))
      
    3. Call the schema method in the DataFrameReader object, passing in the StructType object.

      For example:

      var dfReader = session.read.schema(schemaForDataFile)
      

      The schema method returns a DataFrameReader object that is configured to read files containing the specified fields.

      Note that you do not need to do this for files in other formats (such as JSON). For those files, the DataFrameReader treats the data as a single field of the VARIANT type with the field name $1.

  3. If you need to specify additional information about how the data should be read (for example, that the data is compressed or that a CSV file uses a semicolon instead of a comma to delimit fields), call the options method of the DataFrameReader object.

    Pass in the name and value of the option that you want to set. For the names and values of the file format options, see the documentation on CREATE FILE FORMAT.

    You can also set the copy options described in the COPY INTO TABLE documentation. Note that setting copy options can result in a more expensive execution strategy when you retrieve the data into the DataFrame.

    The following example sets up the DataFrameReader object to query data in a CSV file that is not compressed and that uses a semicolon for the field delimiter.

    dfReader = dfReader.option("field_delimiter", ";").option("COMPRESSION", "NONE")
    

    The options method returns a DataFrameReader object that is configured with the specified options.

  4. Call the method corresponding to the format of the file (e.g. the csv method), passing in the location of the file.

    val df = dfReader.csv("@s3_ts_stage/emails/data_0_0_0.csv")
    

    The methods corresponding to the format of a file return a DataFrame object that is configured to hold the data in that file.

  5. Use the DataFrame object methods to perform any transformations needed on the dataset (for example, selecting specific fields, filtering rows, etc.).

    For example, to extract the color element from a JSON file in the stage named mystage:

    // Import the sqlExpr function from the functions object.
    import com.snowflake.snowpark.functions._
    
    val df = session.read.json("@mystage").select(sqlExpr("$1:color"))
    

    As explained earlier, for files in formats other than CSV (e.g. JSON), the DataFrameReader treats the data in the file as a single VARIANT column with the name $1.

    This example uses the sqlExpr function in the com.snowflake.snowpark.functions object to specify the path to the color element.

    Note that the sqlExpr function does not interpret or modify the input argument. The function just allows you to construct expressions and snippets in SQL that are not yet supported by the Snowpark API.

  6. Call an action method to query the data in the file.

    As is the case with DataFrames for tables, the data is not retrieved into the DataFrame until you call an action method.
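
    For example, continuing the CSV example above, calling collect executes the query against the staged file (a sketch that reuses the hypothetical stage path from the earlier step):

    // Execute the query and return the rows in the staged CSV file.
    val rows = dfReader.csv("@s3_ts_stage/emails/data_0_0_0.csv").collect()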

Executing SQL Statements

To execute a SQL statement that you specify, call the sql method in the Session class, and pass in the statement to be executed. The method returns a DataFrame.

Note that the SQL statement won’t be executed until you call an action method.

// Get the list of the files in a stage.
// The collect() method causes this SQL statement to be executed.
val stageFilesDf = session.sql("ls @myStage").collect()

// Resume the operation of a warehouse.
// Note that you must call the collect method in order to execute
// the SQL statement.
session.sql("alter warehouse if exists myWarehouse resume if suspended").collect()

val tableDf = session.table("table").select(col("a"), col("b"))
// Get the count of rows from the table.
val numRows = tableDf.count()

// Copy data from a stage to a table.
// The collect() method causes the COPY statement to be executed.
val copyDf = session.sql("copy into myTable from @myStage file_format=(type = csv)").collect()

If you want to call methods to transform the DataFrame (e.g. filter, select, etc.), note that these methods work only if the underlying SQL statement is a SELECT statement. The transformation methods are not supported for other kinds of SQL statements.

val df = session.sql("select a, c from table where b < 1")
// Because the underlying SQL statement for the DataFrame is a SELECT statement,
// you can call the filter method to transform this DataFrame.
val results = df.filter(col("c") < 10).select(col("a")).collect()

// In this example, the underlying SQL statement is not a SELECT statement.
val df = session.sql("ls @myStage")
// Calling the filter method results in an error.
df.filter(...)