Snowpark Migration Accelerator: Supported Platforms¶
Supported Platforms¶
The Snowpark Migration Accelerator (SMA) currently supports the following programming languages as source code:
Python
Scala
SQL
The SMA analyzes both code files and notebook files to identify any usage of the Spark API and other third-party APIs. For a complete list of file types that the SMA can analyze, please refer to Supported Filetypes.
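For illustration, here is a minimal sketch of the kind of Python source file the SMA scans; the Spark API references in it (SparkSession.builder, spark.read.csv, DataFrame.filter, DataFrameWriter.parquet) are the sort of usages the tool inventories. The file paths and column name are invented for this example.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Spark API usages like these are what the SMA identifies and inventories
spark = SparkSession.builder.appName("example").getOrCreate()
df = spark.read.csv("data/input.csv", header=True)
filtered = df.filter(F.col("col1") == 1)
filtered.write.parquet("data/output")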
SQL Dialects¶
The Snowpark Migration Accelerator (SMA) can analyze code files to identify SQL elements. Currently, the SMA can recognize SQL code written in the following formats:
Spark SQL
HiveQL
Databricks SQL
SQL Assessment and Conversion Guidelines¶
Although Spark SQL and Snowflake SQL are highly compatible, some SQL code may not convert perfectly.
SQL analysis is only possible when the SQL is received in one of the following ways:
A SQL cell within a supported notebook file
A .sql or .hql file
A complete string passed to a spark.sql statement

Some variable substitutions are not supported. Here are a few examples:
Parsed:
spark.sql("select * from TableA")
New SMA scenarios supported include the following:
# explicit concatenation
spark.sql("select * from TableA" + ' where col1 = 1')

# implicit concatenation (juxtaposition)
spark.sql("select * from TableA" ' where col1 = 1')

# variable initialized with SQL on previous lines, executed in the same scope
sql = "select * from TableA"
spark.sql(sql)

# f-string interpolation
spark.sql(f"select * from {varTableA}")

# str.format-style interpolation
spark.sql("select * from {}".format(varTableA))

# mixing a variable with concatenation and f-string interpolation
sql = f"select * from {varTableA} " + f'where {varCol1} = 1'
spark.sql(sql)
Not Parsed:
some_variable = "TableA"
spark.sql("select * from " + some_variable)
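Code like this can often be restructured into one of the parsed scenarios listed above. As a minimal sketch, assuming the table name held in some_variable is available at the call site, the string concatenation can be rewritten as an f-string interpolation, which the SMA does parse:

some_variable = "TableA"
# f-string interpolation is one of the parsed scenarios listed above
spark.sql(f"select * from {some_variable}")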
SQL elements are accounted for in the object inventories, and a readiness score is generated specifically for SQL.