Snowpark Migration Accelerator: Supported Platforms

Supported Platforms

The Snowpark Migration Accelerator (SMA) currently supports the following programming languages as source code:

  • Python

  • Scala

  • SQL

The SMA analyzes both code files and notebook files to identify any usage of the Spark API and other third-party APIs. For a complete list of file types that the SMA can analyze, please refer to Supported Filetypes.

SQL Language

The Snowpark Migration Accelerator (SMA) can analyze code files to identify SQL elements. Currently, the SMA can detect SQL code written in the following formats:

  • Spark SQL

  • Hive QL

  • Databricks SQL

SQL Assessment and Conversion Guidelines

Spark SQL and Snowflake SQL are highly compatible, but some SQL code may not convert perfectly.
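For example, one small but common dialect difference (shown here purely for illustration, not as a statement of how the SMA converts code) is identifier quoting: Spark SQL and HiveQL quote identifiers with backticks, while Snowflake uses double quotes.

```python
# Illustrative only: Spark SQL / HiveQL quote identifiers with
# backticks, Snowflake with double quotes. A naive character-level
# conversion of the quoting style:
spark_sql = "SELECT `order id` FROM `sales data`"
snowflake_sql = spark_sql.replace("`", '"')
print(snowflake_sql)  # SELECT "order id" FROM "sales data"
```

Real conversions are rarely this mechanical; many dialect differences (functions, DDL options, semi-structured data handling) have no one-to-one mapping, which is why some SQL does not convert perfectly.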

SQL analysis is only possible when the SQL is received in one of the following ways:

  • A SQL cell within a supported notebook file

  • A .sql or .hql file

  • A complete string passed to a spark.sql statement

    Some variable substitutions are not supported. Here are a few examples:

    • Parsed:

      spark.sql("select * from TableA")
      
      The SMA also supports these newer scenarios:

      # explicit concatenation
      spark.sql("select * from TableA" + ' where col1 = 1')
      
      # implicit concatenation (juxtaposition)
      spark.sql("select * from TableA" ' where col1 = 1')
      
      # variable initialized with SQL earlier in the same scope
      sql = "select * from TableA"
      spark.sql(sql)
      
      # f-string interpolation:
      spark.sql(f"select * from {varTableA}")
      
      # str.format-style interpolation
      spark.sql("select * from {}".format(varTableA))
      
      # variable mixing concatenation and f-string interpolation
      sql = f"select * from {varTableA} " + f'where {varCol1} = 1'
      spark.sql(sql)
      
    • Not Parsed:

      some_variable = "TableA"
      spark.sql("select * from" + some_variable)
      
    SQL elements are accounted for in the object inventories, and a readiness score is generated specifically for SQL.
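In practice, the parsed/not-parsed distinction comes down to whether the full SQL string can be recovered by static analysis. As a rough sketch (illustrative only, not the SMA's actual implementation, and deliberately ignoring variable tracking), a checker built on Python's ast module could classify spark.sql arguments like this:

```python
import ast

def is_static_sql(expr: ast.expr) -> bool:
    """True if the expression is a string literal, an f-string, or a
    concatenation of such pieces -- forms a static analyzer can
    resolve without running the code."""
    if isinstance(expr, ast.Constant) and isinstance(expr.value, str):
        return True  # covers implicit concatenation too: "a" "b" is one Constant
    if isinstance(expr, ast.JoinedStr):
        return True  # f-string
    if isinstance(expr, ast.BinOp) and isinstance(expr.op, ast.Add):
        return is_static_sql(expr.left) and is_static_sql(expr.right)
    return False  # e.g. a bare variable: not resolvable in this toy model

def classify_spark_sql_calls(source: str) -> list[bool]:
    """For each *.sql(...) call in the source, report whether its first
    argument looks statically resolvable."""
    results = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "sql"
                and node.args):
            results.append(is_static_sql(node.args[0]))
    return results
```

For instance, classify_spark_sql_calls('spark.sql("select * from " + some_variable)') reports the call as not resolvable, matching the "Not Parsed" case above, while literal and f-string arguments are reported as resolvable.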