Snowpark Migration Accelerator: Supported Platforms¶
Supported Platforms¶
The Snowpark Migration Accelerator (SMA) currently supports source code written in the following programming languages:
Python
Scala
SQL
The SMA analyzes both code files and notebook files to identify any usage of the Spark API and other third-party APIs. For a complete list of file types that the SMA can analyze, see Supported Filetypes.
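For illustration, here is a minimal sketch of the kind of Spark API usage the SMA identifies in a Python source file. All names, paths, and columns here are hypothetical, and a standard PySpark installation is assumed.

# Minimal sketch (hypothetical names): references such as
# SparkSession.builder, spark.read, and DataFrame.filter are the kinds
# of Spark API usages the SMA records in its inventories.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sma-example").getOrCreate()

df = spark.read.parquet("data/events.parquet")  # hypothetical path
df.filter(df.event_type == "click").show()      # hypothetical column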
SQL Dialects¶
The Snowpark Migration Accelerator (SMA) can analyze code files to identify SQL elements. Currently, the SMA can detect SQL code written in the following dialects:
Spark SQL
Hive QL
Databricks SQL
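As a rough illustration, the statements below show the kind of SQL the SMA recognizes in each dialect when embedded in code. All table and column names are hypothetical, and a SparkSession named spark is assumed.

# Spark SQL
spark.sql("SELECT id, event_type FROM events WHERE event_date > '2023-01-01'")

# Hive QL (also commonly found in .hql files)
spark.sql("CREATE TABLE events_archive (id INT, payload STRING) STORED AS PARQUET")

# Databricks SQL (Delta-specific maintenance command)
spark.sql("OPTIMIZE events ZORDER BY (event_date)")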
SQL Assessment and Conversion Guidelines¶
Spark SQL and Snowflake SQL are highly compatible, but some SQL code may not be fully convertible.
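For instance, the sketch below shows a hypothetical Spark SQL statement that does not map one-to-one to Snowflake: LATERAL VIEW explode(...) has no direct Snowflake counterpart, and the equivalent logic is typically written with LATERAL FLATTEN instead. A SparkSession named spark is assumed.

# Hypothetical example of SQL that cannot be converted one-to-one:
# Spark SQL's LATERAL VIEW explode(...) is typically rewritten in
# Snowflake using LATERAL FLATTEN.
spark.sql("""
    SELECT id, tag
    FROM events
    LATERAL VIEW explode(tags) t AS tag
""")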
SQL analysis is possible only when the SQL is received in one of the following ways:
A SQL cell within a supported notebook file
A .sql or .hql file
A complete string passed to a spark.sql statement
Some variable substitutions are not supported. Here are a few examples:
Parsed:
spark.sql("select * from TableA")
Newly supported SMA scenarios include the following:
# explicit concatenation
spark.sql("select * from TableA" + ' where col1 = 1')

# implicit concatenation (juxtaposition)
spark.sql("select * from TableA" ' where col1 = 1')

# variable initialized with SQL on a previous line in the same scope
sql = "select * from TableA"
spark.sql(sql)

# f-string interpolation
spark.sql(f"select * from {varTableA}")

# str.format-style interpolation
spark.sql("select * from {}".format(varTableA))

# mixing a variable with concatenation and f-string interpolation
sql = f"select * from {varTableA} " + f'where {varCol1} = 1'
spark.sql(sql)
Not Parsed:
some_variable = "TableA" spark.sql("select * from" + some_variable)
SQL elements are accounted for in the object inventories, and a readiness score is generated specifically for SQL.