class Session extends Logging
Establishes a connection with a Snowflake database and provides methods for creating DataFrames and accessing objects for working with files in stages.
When you create a Session object, you provide configuration settings to establish a connection with a Snowflake database (e.g. the URL for the account, a user name, etc.). You can specify these settings in a configuration file or in a Map that associates configuration setting names with values.
To create a Session from a file:
```scala
val session = Session.builder.configFile("/path/to/file.properties").create
```
To create a Session from a map of configuration properties:
```scala
val configMap = Map(
  "URL" -> "demo.snowflakecomputing.com",
  "USER" -> "testUser",
  "PASSWORD" -> "******",
  "ROLE" -> "myrole",
  "WAREHOUSE" -> "warehouse1",
  "DB" -> "db1",
  "SCHEMA" -> "schema1"
)
Session.builder.configs(configMap).create
```
Session contains functions to construct DataFrames, such as Session.table, Session.sql, and Session.read.
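For instance, the following minimal sketch (assuming a table named T1 exists in the current schema) builds DataFrames in two of these ways:

```scala
val dfTable = session.table("T1")                          // from an existing table
val dfQuery = session.sql("SELECT * FROM T1 WHERE a > 1")  // from an arbitrary SQL query
dfQuery.show()
```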
- Since: 0.1.0
Value Members
- final def !=(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def ##(): Int
  - Definition Classes: AnyRef → Any
- final def ==(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- def addDependency(path: String): Unit
Registers a file in a stage or a local file as a dependency of a user-defined function (UDF).

The local file can be a JAR file, a directory, or any other file resource. If you pass the path to a local file to addDependency, the Snowpark library uploads the file to a temporary stage and imports the file when executing a UDF. If you pass the path to a file in a stage to addDependency, the file is included in the imports when executing a UDF.

Note that in most cases, you don't need to add the Snowpark JAR file and the JAR file (or directory) of the currently running application as dependencies. The Snowpark library automatically attempts to detect and upload these JAR files. However, if this automatic detection fails, the Snowpark library reports this in an error message, and you must add these JAR files explicitly by calling addDependency.

The following example demonstrates how to add dependencies on local files and files in a stage:
session.addDependency("@my_stage/http-commons.jar") session.addDependency("/home/username/lib/language-detector.jar") session.addDependency("./resource-dir/") session.addDependency("./resource.xml")
- path: Path to a local directory, local file, or file in a stage.
- Since: 0.1.0
- final def asInstanceOf[T0]: T0
  - Definition Classes: Any
- def cancelAll(): Unit
Cancels all action methods that are currently running. This does not affect any action methods called in the future.
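As an illustration, here is a minimal sketch (assuming a role that can call the built-in SYSTEM$WAIT procedure) that starts a long-running action on a worker thread and cancels it from the main thread:

```scala
// Kick off a slow query on a worker thread.
val worker = new Thread(() => {
  try {
    session.sql("CALL SYSTEM$WAIT(60)").collect() // blocks for up to 60 seconds
  } catch {
    case e: Exception => println(s"Action cancelled: ${e.getMessage}")
  }
})
worker.start()

Thread.sleep(1000)   // give the query time to start
session.cancelAll()  // cancels the collect() running above
worker.join()
```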
- Since: 0.5.0
- def clone(): AnyRef
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws(...) @native() @HotSpotIntrinsicCandidate()
- def close(): Unit
Closes this session.
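A common pattern is to close the session in a finally block so that the underlying connection is always released; a minimal sketch (the configuration file path is hypothetical):

```scala
val session = Session.builder.configFile("/path/to/file.properties").create
try {
  session.sql("SELECT CURRENT_TIMESTAMP()").show()
} finally {
  session.close() // releases the underlying connection
}
```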
- Since: 0.7.0
- def createAsyncJob(queryID: String): AsyncJob
Returns an AsyncJob object that you can use to track the status and get the results of the asynchronous query specified by the query ID.

For example, create an AsyncJob by specifying a valid <query_id>, check whether the query is still running, and get the result rows:

```scala
val asyncJob = session.createAsyncJob(<query_id>)
println(s"Is query ${asyncJob.getQueryId} running? ${asyncJob.isRunning()}")
val rows = asyncJob.getRows()
```
- queryID: A valid query ID.
- returns: An AsyncJob object.
- Since: 0.11.0
- def createDataFrame(data: Array[Row], schema: StructType): DataFrame
Creates a new DataFrame that uses the specified schema and contains the specified Row objects.

For example, the following code creates a DataFrame containing two columns of the types int and string with two rows of data:
```scala
import com.snowflake.snowpark.types._
...
// Create an array of Row objects containing data.
val data = Array(Row(1, "a"), Row(2, "b"))
// Define the schema for the columns in the DataFrame.
val schema = StructType(Seq(StructField("num", IntegerType), StructField("str", StringType)))
// Create the DataFrame.
val df = session.createDataFrame(data, schema)
```
- data: An array of Row objects representing rows of data.
- schema: StructType representing the schema for the DataFrame.
- returns: A DataFrame.
- Since: 0.7.0
- def createDataFrame(data: Seq[Row], schema: StructType): DataFrame
Creates a new DataFrame that uses the specified schema and contains the specified Row objects.

For example, the following code creates a DataFrame containing three columns of the types int, string, and variant with a single row of data:

```scala
import com.snowflake.snowpark.types._
...
// Create a sequence of a single Row object containing data.
val data = Seq(Row(1, "a", new Variant(1)))
// Define the schema for the columns in the DataFrame.
val schema = StructType(Seq(
  StructField("int", IntegerType),
  StructField("string", StringType),
  StructField("variant", VariantType)))
// Create the DataFrame.
val df = session.createDataFrame(data, schema)
```
- data: A sequence of Row objects representing rows of data.
- schema: StructType representing the schema for the DataFrame.
- returns: A DataFrame.
- Since: 0.2.0
- def createDataFrame[T](data: Seq[T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): DataFrame
Creates a new DataFrame containing the specified values. Currently, you can use values of the following types:

- Base types (Int, Short, String, etc.). The resulting DataFrame has the column name "VALUE".
- Tuples consisting of base types. The resulting DataFrame has the column names "_1", "_2", etc.
- Case classes consisting of base types. The resulting DataFrame has column names that correspond to the case class fields.

If you want to create a DataFrame by calling the toDF method of a Seq object, import session.implicits._, where session is an object of the Session class that you created to connect to the Snowflake database. For example:

```scala
val session = Session.builder.configFile(..).create

// Importing this allows you to call the toDF method on a Seq object.
import session.implicits._

// Create a DataFrame from a Seq object.
val df = Seq((1, "x"), (2, "y"), (3, "z")).toDF("numCol", "varcharCol")
df.show()
```
- T: DataType
- data: A sequence in which each element represents a row of values in the DataFrame.
- returns: A DataFrame.
- Since: 0.1.0
- final def eq(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- def equals(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- lazy val file: FileOperation

Returns a FileOperation object that you can use to perform file operations on stages. For example:

```scala
session.file.put("file:///tmp/file1.csv", "@myStage/prefix1")
session.file.get("@myStage/prefix1", "file:///tmp")
```
- Since: 0.4.0
- def flatten(input: Column, path: String, outer: Boolean, recursive: Boolean, mode: String): DataFrame
Creates a new DataFrame by flattening compound values into multiple rows.

For example:

```scala
import com.snowflake.snowpark.functions._
val df = session.flatten(parse_json(lit("""{"a":[1,2]}""")), "a", false, false, "BOTH")
```
- input: The expression that will be unnested into rows. The expression must be of data type VARIANT, OBJECT, or ARRAY.
- path: The path to the element within a VARIANT data structure that needs to be flattened. Can be a zero-length string (i.e. an empty path) if the outermost element is to be flattened.
- outer: If false, any input rows that cannot be expanded, either because they cannot be accessed in the path or because they have zero fields or entries, are completely omitted from the output. Otherwise, exactly one row is generated for zero-row expansions (with NULL in the KEY, INDEX, and VALUE columns).
- recursive: If false, only the element referenced by PATH is expanded. Otherwise, the expansion is performed for all sub-elements recursively.
- mode: Specifies which types should be flattened ("OBJECT", "ARRAY", or "BOTH").
- Since: 0.2.0
- def flatten(input: Column): DataFrame
Creates a new DataFrame by flattening compound values into multiple rows.

For example:

```scala
import com.snowflake.snowpark.functions._
val df = session.flatten(parse_json(lit("""{"a":[1,2]}""")))
```
- input: The expression that will be unnested into rows. The expression must be of data type VARIANT, OBJECT, or ARRAY.
- returns: A DataFrame.
- Since: 0.2.0
- def generator(rowCount: Long, col: Column, cols: Column*): DataFrame
Creates a new DataFrame via the GENERATOR table function.

For example:

```scala
import com.snowflake.snowpark.functions._
session.generator(10, seq4(), uniform(lit(1), lit(5), random())).show()
```
- rowCount: The row count of the result DataFrame.
- col: The first column of the result DataFrame.
- cols: The remaining columns of the result DataFrame.
- returns: A DataFrame.
- Since: 0.11.0
- def generator(rowCount: Long, columns: Seq[Column]): DataFrame
Creates a new DataFrame via the GENERATOR table function.

For example:

```scala
import com.snowflake.snowpark.functions._
session.generator(10, Seq(seq4(), uniform(lit(1), lit(5), random()))).show()
```
- rowCount: The row count of the result DataFrame.
- columns: The columns of the result DataFrame.
- returns: A DataFrame.
- Since: 0.11.0
- final def getClass(): Class[_]
  - Definition Classes: AnyRef → Any
  - Annotations: @native() @HotSpotIntrinsicCandidate()
- def getCurrentDatabase: Option[String]
Returns the name of the current database for the JDBC session attached to this session.

For example, if you change the current database by executing the following code:

```scala
session.sql("use database newDB").collect()
```

the method returns newDB.

- returns: The name of the current database for this session.
- Since: 0.1.0
- def getCurrentSchema: Option[String]
Returns the name of the current schema for the JDBC session attached to this session.

For example, if you change the current schema by executing the following code:

```scala
session.sql("use schema newSchema").collect()
```

the method returns newSchema.

- returns: The name of the current schema for this session.
- Since: 0.1.0
- def getDefaultDatabase: Option[String]
Returns the name of the default database configured for this session in Session.builder.

- returns: The name of the default database.
- Since: 0.1.0
- def getDefaultSchema: Option[String]
Returns the name of the default schema configured for this session in Session.builder.

- returns: The name of the default schema.
- Since: 0.1.0
- def getDependencies: Set[URI]
Returns the set of URIs for all the dependencies that were added for user-defined functions (UDFs). This set includes any JAR files that were added automatically by the library.
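A minimal sketch (the JAR path is hypothetical):

```scala
session.addDependency("/home/username/lib/language-detector.jar")
session.getDependencies.foreach(uri => println(uri))
```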
- returns: Set[URI]
- Since: 0.1.0
- def getDependenciesAsJavaSet: Set[URI]
Returns a Java Set of URIs for all the dependencies that were added for user-defined functions (UDFs). This set includes any JAR files that were added automatically by the library.

- Since: 0.2.0
- def getFullyQualifiedCurrentSchema: String
Returns the fully qualified name of the current schema for the session.

- returns: The fully qualified name of the schema.
- Since: 0.2.0
- def getQueryTag(): Option[String]
Returns the query tag that you set by calling setQueryTag.

- Since: 0.1.0
- def getSessionInfo(): String
Returns the session information.

- Since: 0.11.0
- def getSessionStage: String
Returns the name of the temporary stage created by the Snowpark library for uploading and storing temporary artifacts for this session. These artifacts include classes for UDFs that you define in this session and dependencies that you add when calling addDependency.
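A minimal sketch that stages a local file in the session stage (the local file path is hypothetical, and this assumes the returned name can be used directly as a stage location):

```scala
val stage = session.getSessionStage
session.file.put("file:///tmp/file1.csv", stage)
```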
- returns: The name of the stage.
- Since: 0.1.0
- def hashCode(): Int
  - Definition Classes: AnyRef → Any
  - Annotations: @native() @HotSpotIntrinsicCandidate()
- final def isInstanceOf[T0]: Boolean
  - Definition Classes: Any
- def jdbcConnection: Connection
Returns the JDBC Connection object used for the connection to the Snowflake database.
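A minimal sketch that drops down to raw JDBC for functionality not exposed by the Snowpark API (the session owns the connection, so don't close it yourself):

```scala
val conn = session.jdbcConnection
val stmt = conn.createStatement()
val rs = stmt.executeQuery("SELECT CURRENT_VERSION()")
while (rs.next()) {
  println(rs.getString(1))
}
rs.close()
stmt.close()
```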
- returns: The JDBC Connection object.
- def log(): Logger
  - Attributes: protected[internal]
  - Definition Classes: Logging
- def logDebug(msg: String, throwable: Throwable): Unit
  - Attributes: protected[internal]
  - Definition Classes: Logging
- def logDebug(msg: String): Unit
  - Attributes: protected[internal]
  - Definition Classes: Logging
- def logError(msg: String, throwable: Throwable): Unit
  - Attributes: protected[internal]
  - Definition Classes: Logging
- def logError(msg: String): Unit
  - Attributes: protected[internal]
  - Definition Classes: Logging
- def logInfo(msg: String, throwable: Throwable): Unit
  - Attributes: protected[internal]
  - Definition Classes: Logging
- def logInfo(msg: String): Unit
  - Attributes: protected[internal]
  - Definition Classes: Logging
- def logTrace(msg: String, throwable: Throwable): Unit
  - Attributes: protected[internal]
  - Definition Classes: Logging
- def logTrace(msg: String): Unit
  - Attributes: protected[internal]
  - Definition Classes: Logging
- def logWarning(msg: String, throwable: Throwable): Unit
  - Attributes: protected[internal]
  - Definition Classes: Logging
- def logWarning(msg: String): Unit
  - Attributes: protected[internal]
  - Definition Classes: Logging
- final def ne(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- final def notify(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native() @HotSpotIntrinsicCandidate()
- final def notifyAll(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native() @HotSpotIntrinsicCandidate()
- def range(start: Long, end: Long): DataFrame
Creates a new DataFrame from a range of numbers. The resulting DataFrame has the column name "ID" and a row for each number in the sequence.
- start: Start of the range.
- end: End of the range.
- returns: A DataFrame.
- Since: 0.1.0
- def range(end: Long): DataFrame
Creates a new DataFrame from a range of numbers starting from 0. The resulting DataFrame has the column name "ID" and a row for each number in the sequence.
- end: End of the range.
- returns: A DataFrame.
- Since: 0.1.0
- def range(start: Long, end: Long, step: Long): DataFrame
Creates a new DataFrame from a range of numbers. The resulting DataFrame has the column name "ID" and a row for each number in the sequence.
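A minimal sketch (assuming the end of the range is exclusive, as in the analogous Spark API):

```scala
// Produces rows with ID = 1, 3, 5, 7, 9.
session.range(1, 10, 2).show()
```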
- start: Start of the range.
- end: End of the range.
- step: The step (increment) used to produce the numbers in the range.
- returns: A DataFrame.
- Since: 0.1.0
- def read: DataFrameReader
Returns a DataFrameReader that you can use to read data from various supported sources (e.g. a file in a stage) as a DataFrame.
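A minimal sketch that reads CSV files from a stage (the stage path and schema are hypothetical):

```scala
import com.snowflake.snowpark.types._
val schema = StructType(Seq(StructField("a", IntegerType), StructField("b", StringType)))
val df = session.read.schema(schema).csv("@myStage/prefix1")
df.show()
```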
- returns: A DataFrameReader.
- Since: 0.1.0
- def removeDependency(path: String): Unit
Removes a path from the set of dependencies.
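A minimal sketch pairing it with addDependency (the directory path is hypothetical):

```scala
session.addDependency("./resource-dir/")
// ... later, once your UDFs no longer need the resource:
session.removeDependency("./resource-dir/")
```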
- path: Path to a local directory, local file, or file in a stage.
- Since: 0.1.0
- def setQueryTag(queryTag: String): Unit
Sets a query tag for this session. You can use the query tag to find all queries run for this session (see the sketch after this entry).

If not set, the default value of the query tag is the Snowpark library call and the class and method in your code that invoked the query (e.g. com.snowflake.snowpark.DataFrame.collect Main$.main(Main.scala:18)).

- queryTag: String to use as the query tag for this session.
- Since: 0.1.0
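A minimal sketch of tagging a batch of queries (the tag value and table name are arbitrary):

```scala
session.setQueryTag("nightly-etl-run")
session.sql("SELECT COUNT(*) FROM T1").collect() // runs with query tag "nightly-etl-run"
println(session.getQueryTag())                   // Some("nightly-etl-run")
session.unsetQueryTag()                          // revert to the default tag
```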
- lazy val sproc: SProcRegistration

Returns an SProcRegistration object that you can use to register stored procedures. For example:

```scala
val sp = session.sproc.registerTemporary((session: Session, num: Int) => num * 2)
session.storedProcedure(sp, 100).show()
```
- Annotations: @PublicPreview()
- Since: 1.8.0
- def sql(query: String): DataFrame
Returns a new DataFrame representing the results of a SQL query. You can use this method to execute an arbitrary SQL statement.
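A minimal sketch (the table name is hypothetical; queries are not executed until an action such as show or collect is called):

```scala
// Query data.
session.sql("SELECT CURRENT_WAREHOUSE(), CURRENT_DATABASE()").show()

// DDL and DML statements work too; collect() triggers execution.
session.sql("CREATE OR REPLACE TEMPORARY TABLE T1 (a INT)").collect()
```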
- query: The SQL statement to execute.
- returns: A DataFrame.
- Since: 0.1.0
- def storedProcedure(sp: StoredProcedure, args: Any*): DataFrame
Creates a new DataFrame from the given stored procedure and arguments. For example:

```scala
val sp = session.sproc.register(...)
session.storedProcedure(sp, "arg1", "arg2").show()
```
- sp: The stored procedure object, which can be created by Session.sproc.register methods.
- args: The arguments for the given stored procedure.
- Since: 1.8.0
- def storedProcedure(spName: String, args: Any*): DataFrame
Creates a new DataFrame from the given stored procedure and arguments. For example:

```scala
session.storedProcedure("sp_name", "arg1", "arg2").show()
```
- spName: The name of the stored procedure.
- args: The arguments for the given stored procedure.
- Since: 1.8.0
- final def synchronized[T0](arg0: ⇒ T0): T0
  - Definition Classes: AnyRef
- def table(multipartIdentifier: Array[String]): Updatable
Returns an Updatable that points to the specified table.
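A minimal sketch (the database objects are hypothetical); the returned Updatable supports DML operations such as update and delete:

```scala
import com.snowflake.snowpark.functions._
val t1 = session.table("T1")                          // table in the current schema
val t2 = session.table(Array("db1", "schema1", "T1")) // fully qualified, part by part
t2.update(Map("a" -> lit(0)), t2("b") === 1)          // set a = 0 where b = 1
```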
- multipartIdentifier: An array of strings that specify the database name, schema name, and table name.
- Since: 0.7.0
- def table(multipartIdentifier: List[String]): Updatable
Returns an Updatable that points to the specified table.
- multipartIdentifier: A list of strings that specify the database name, schema name, and table name.
- returns: An Updatable.
- Since: 0.2.0
- def table(multipartIdentifier: Seq[String]): Updatable
Returns an Updatable that points to the specified table.
- multipartIdentifier: A sequence of strings that specify the database name, schema name, and table name (e.g. Seq("database_name", "schema_name", "table_name")).
- returns: An Updatable.
- Since: 0.1.0
- def table(name: String): Updatable
Returns an Updatable that points to the specified table. name can be a fully qualified identifier and must conform to the rules for a Snowflake identifier.

- name: Table name that is either a fully qualified name or a name in the current database/schema.
- returns: An Updatable.
- Since: 0.1.0
- def tableFunction(func: Column): DataFrame
Creates a new DataFrame from the given table function.

For example:

```scala
import com.snowflake.snowpark.functions._
import com.snowflake.snowpark.tableFunctions._

session.tableFunction(
  flatten(parse_json(lit("[1,2]")))
)
```
- func: Table function object, which can be created from the TableFunction class or referenced from the built-in list in tableFunctions.
- Since: 1.10.0
- def tableFunction(func: TableFunction, args: Map[String, Column]): DataFrame
Creates a new DataFrame from the given table function and arguments.

For example:

```scala
import com.snowflake.snowpark.functions._
import com.snowflake.snowpark.tableFunctions._

session.tableFunction(
  flatten,
  Map("input" -> parse_json(lit("[1,2]")))
)

// Since 1.8.0, DataFrame columns are accepted as table function arguments:
val df = Seq("[1,2]").toDF("a")
session.tableFunction(
  flatten,
  Map("input" -> parse_json(df("a")))
)
```
- func: Table function object, which can be created from the TableFunction class or referenced from the built-in list in tableFunctions.
- args: Function-argument map for the given table function. Some functions, like flatten, have named parameters; use this map to assign values to the corresponding parameters.
- Since: 0.4.0
- def tableFunction(func: TableFunction, args: Seq[Column]): DataFrame
Creates a new DataFrame from the given table function and arguments.

For example:

```scala
import com.snowflake.snowpark.functions._
import com.snowflake.snowpark.tableFunctions._

session.tableFunction(
  split_to_table,
  Seq(lit("split by space"), lit(" "))
)

// Since 1.8.0, DataFrame columns are accepted as table function arguments:
val df = Seq(("split by space", " ")).toDF(Seq("a", "b"))
session.tableFunction(
  split_to_table,
  Seq(df("a"), df("b"))
)
```
- func: Table function object, which can be created from the TableFunction class or referenced from the built-in list in tableFunctions.
- args: Function arguments for the given table function.
- Since: 0.4.0
- def tableFunction(func: TableFunction, firstArg: Column, remaining: Column*): DataFrame
Creates a new DataFrame from the given table function and arguments.

For example:

```scala
import com.snowflake.snowpark.functions._
import com.snowflake.snowpark.tableFunctions._

session.tableFunction(
  split_to_table,
  lit("split by space"),
  lit(" ")
)
```
- func: Table function object, which can be created from the TableFunction class or referenced from the built-in list in tableFunctions.
- firstArg: The first function argument for the given table function.
- remaining: All remaining function arguments.
- Since: 0.4.0
- def toString(): String
  - Definition Classes: AnyRef → Any
- lazy val udf: UDFRegistration

Returns a UDFRegistration object that you can use to register UDFs. For example:

```scala
session.udf.registerTemporary("mydoubleudf", (x: Int) => 2 * x)
session.sql(s"SELECT mydoubleudf(c) FROM table")
```
- Since: 0.1.0
- lazy val udtf: UDTFRegistration

Returns a UDTFRegistration object that you can use to register UDTFs. For example:

```scala
class MyWordSplitter extends UDTF1[String] {
  override def process(input: String): Iterable[Row] = input.split(" ").map(Row(_))
  override def endPartition(): Iterable[Row] = Array.empty[Row]
  override def outputSchema(): StructType = StructType(StructField("word", StringType))
}
val tableFunction = session.udtf.registerTemporary(new MyWordSplitter)
session.tableFunction(tableFunction, lit("My name is Snow Park")).show()
```
- Since: 1.2.0
- def unsetQueryTag(): Unit
Unsets the query_tag parameter for this session.

If not set, the default value of the query tag is the Snowpark library call and the class and method in your code that invoked the query (e.g. com.snowflake.snowpark.DataFrame.collect Main$.main(Main.scala:18)).

- Since: 0.10.0
- final def wait(arg0: Long, arg1: Int): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws(...)
- final def wait(arg0: Long): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws(...) @native()
- final def wait(): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws(...)
- object implicits extends Implicits with Serializable
Provides implicit methods for converting Scala objects to Snowpark DataFrame and Column objects.

To use this, import session.implicits._:

```scala
val session = Session.builder.configFile(..).create
import session.implicits._
```

After you import this, you can call the toDF method of a Seq to convert a sequence to a DataFrame:

```scala
// Create a DataFrame from a local sequence of integers.
val df = (1 to 10).toDF("a")
val df = Seq((1, "one"), (2, "two")).toDF("a", "b")
```

You can also refer to columns in DataFrames by using $"colName" and 'colName:

```scala
// Refer to a column in a DataFrame by using $"colName".
val df = session.table("T").filter($"a" > 1)

// Refer to columns by using 'colName.
val df = session.table("T").filter('a > 1)
```
- Since: 0.1.0
Deprecated Value Members
- def finalize(): Unit
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws(classOf[java.lang.Throwable]) @Deprecated
  - Deprecated