Snowpark Migration Accelerator: Supported Platforms

Supported Platforms

The Snowpark Migration Accelerator (SMA) currently supports the following programming languages as source code:

  • Python

  • Scala

  • SQL

The SMA analyzes both code files and notebook files to identify any usage of the Spark API and other third-party APIs. For a complete list of file types that the SMA can analyze, see Supported Filetypes.
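
For illustration, the snippet below is a minimal PySpark sketch of the kind of Spark API usage the SMA detects in a code file; the file path, table, and column names are hypothetical.

  # A minimal PySpark sketch; the path and column names are illustrative only.
  from pyspark.sql import SparkSession
  from pyspark.sql.functions import col

  spark = SparkSession.builder.appName("sma-example").getOrCreate()

  # Spark API references such as read.parquet, filter, and groupBy are
  # the kind of usages the SMA identifies and inventories.
  df = spark.read.parquet("/data/sales.parquet")
  df.filter(col("amount") > 100).groupBy("region").count().show()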

SQL Dialects

The Snowpark Migration Accelerator (SMA) can analyze code files to identify SQL elements. Currently, the SMA can detect SQL code written in the following formats:

  • Spark SQL

  • Hive QL

  • Databricks SQL
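
As a rough illustration (the table and column names are hypothetical), the statements below show SQL in these dialects as the SMA might encounter it:

  # Spark SQL / Databricks SQL: standard query syntax
  spark.sql("SELECT region, SUM(amount) AS total FROM sales GROUP BY region")

  # Hive QL (e.g., from a .hql file): Hive constructs such as LATERAL VIEW explode
  spark.sql("SELECT id, item FROM orders LATERAL VIEW explode(items) t AS item")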

SQL Assessment and Conversion Guidelines

Although Spark SQL and Snowflake SQL are highly compatible, some SQL code may not convert perfectly.
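
As one illustrative example (the table name is hypothetical), Spark SQL's DISTRIBUTE BY clause has no direct Snowflake SQL equivalent, so statements that use it may require manual rework after conversion:

  # DISTRIBUTE BY controls how rows are partitioned across Spark tasks.
  # Snowflake SQL has no direct equivalent clause, so this statement
  # may not convert perfectly.
  spark.sql("SELECT * FROM sales DISTRIBUTE BY region")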

SQL analysis is only possible when the SQL is received in one of the following ways:

  • A SQL cell within a supported notebook file

  • A .sql or .hql file

  • A complete string passed to a spark.sql call

    Some forms of variable substitution are not supported. Here are a few examples of what is and is not parsed:

    • Parsed:

      spark.sql("select * from TableA")
      
      The SMA also supports the following scenarios:

      # explicit concatenation
      spark.sql("select * from TableA" + ' where col1 = 1')
      
      # implicit concatenation (juxtaposition)
      spark.sql("select * from TableA" ' where col1 = 1')
      
      # variable initialized with the SQL string earlier in the same scope
      sql = "select * from TableA"
      spark.sql(sql)
      
      # f-string interpolation
      spark.sql(f"select * from {varTableA}")
      
      # str.format-style interpolation
      spark.sql("select * from {}".format(varTableA))
      
      # variable mixing concatenation and f-string interpolation
      sql = f"select * from {varTableA} " + f'where {varCol1} = 1'
      spark.sql(sql)
      
    • Not Parsed (a refactoring sketch follows below):

      some_variable = "TableA"
      spark.sql("select * from" + some_variable)
      
    SQL elements are accounted for in the object inventories, and a readiness score is generated specifically for SQL.
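
    When a statement falls into the not-parsed category, one practical option, sketched below using the variable from the example above, is to restructure the code into one of the supported patterns, such as f-string interpolation:

      # Not parsed: the table name is concatenated from a bare variable
      some_variable = "TableA"
      spark.sql("select * from " + some_variable)

      # Parsed: the same query rewritten with f-string interpolation,
      # one of the supported patterns listed above
      spark.sql(f"select * from {some_variable}")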