Snowpark Migration Accelerator: SQL Embedded code¶
Note
Currently, SMA only supports SQL that is embedded in calls to the *spark.sql* function.
SMA can transform SQL code that is embedded within Python or Scala files. It processes embedded SQL code in the following file extensions:
Python source code files (with .py extension)
Scala source code files (with .scala extension)
Jupyter Notebook files (with .ipynb extension)
Databricks source files (with .python or .scala extensions)
Databricks Notebook archive files (with .dbc extension)
Embedded SQL Code Transformation Samples¶
Supported Case¶
Using the *spark.sql* function in Python to execute SQL queries:
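A minimal sketch of the supported pattern (the table name `employees` is a hypothetical example): the SQL statement appears as a plain string literal at the call site, so SMA can detect and convert it.

```python
# The SQL text is a string literal passed directly to spark.sql,
# which SMA can parse and convert.
query = "SELECT id, name FROM employees"

# Executed through the Spark SQL API; 'spark' is an active SparkSession:
# df = spark.sql(query)
```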
Unsupported Cases¶
When SMA encounters code that it cannot convert, it generates an Error, Warning, and Issue (EWI) message in the output code. For more details about these messages, see EWI.
The following scenarios are not currently supported:
When working with SQL code, string variables can be incorporated in the following ways:
Combining strings to build SQL code using simple concatenation:
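A sketch of the concatenation case (table and column names are hypothetical): because the statement is assembled from string fragments, SMA cannot determine the final SQL text at the call site.

```python
# The query is built by concatenating string fragments, so the
# complete SQL statement only exists at run time.
table_name = "employees"
query = "SELECT id, name FROM " + table_name + " WHERE active = true"

# spark.sql(query)  # SMA cannot resolve this embedded SQL
```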
Using string interpolation to dynamically generate SQL statements:
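A sketch of the interpolation case (the `min_salary` variable and table name are hypothetical): an f-string splices a runtime value into the SQL text, so the statement SMA sees is incomplete.

```python
# The threshold is interpolated into the SQL text at run time,
# so the final statement is not a plain string literal.
min_salary = 1000
query = f"SELECT name FROM employees WHERE salary > {min_salary}"

# spark.sql(query)  # SMA cannot resolve this embedded SQL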
Using functions that generate SQL queries dynamically:
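A sketch of the dynamic-generation case (the helper `build_query` and its arguments are hypothetical): the SQL text is only produced when the function runs, so SMA cannot convert it statically.

```python
# A helper that assembles a query from its arguments; the SQL text
# does not exist until the function is called at run time.
def build_query(table: str, column: str) -> str:
    return f"SELECT {column} FROM {table}"

query = build_query("employees", "name")
# spark.sql(query)  # SMA cannot resolve this embedded SQL
```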
Unsupported Cases and EWI messages¶
When Scala code contains unsupported embedded SQL statements, SMA emits the error code SPRKSCL1173 in the output code.
When Python code contains unsupported embedded SQL statements, SMA emits the error code SPRKPY1077 in the output code.