Snowpark Migration Accelerator: Release Notes

The release notes below are organized by release date. The version numbers of the application and the conversion core are listed for each release.

Version 3.2.0 (Mar 13, 2026)

Application & CLI Version: 3.2.0

Engine Release Notes

Changed

  • Updated .NET version to v10.0.0.

  • Bumped Python AST and Parser version to v149.1.23.

  • The SMA now correctly identifies and reports usages of org.apache.spark.sql.functions.to_number and org.apache.spark.sql.functions.try_to_number as unsupported elements within the Snowpark API.
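The detection described above amounts to recognizing fully qualified function names in source code. The snippet below is a hypothetical illustration of that idea (not the SMA's actual implementation), flagging the two newly reported functions in a line of Scala source:

```python
import re

# Fully qualified Spark functions newly reported as unsupported (from the note above).
UNSUPPORTED = {
    "org.apache.spark.sql.functions.to_number",
    "org.apache.spark.sql.functions.try_to_number",
}

def find_unsupported(source: str) -> list[str]:
    """Return the unsupported function names referenced in the given source text."""
    hits = []
    for name in sorted(UNSUPPORTED):
        # Match the fully qualified name as a whole token (escaped dots, word boundary).
        if re.search(re.escape(name) + r"\b", source):
            hits.append(name)
    return hits

scala_src = 'val df2 = df.select(org.apache.spark.sql.functions.try_to_number($"col", lit("999")))'
print(find_unsupported(scala_src))
```

Note that the word boundary prevents `to_number` from matching inside `try_to_number`, since the full qualified prefix must appear literally.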

Desktop Release Notes

Added

  • Output folder validation: The output folder selection now validates for existing project files (.snowct). If the selected folder already contains a project, the selection is blocked and an error is displayed, preventing accidental overwrites.

  • Dual-assessment workflow: Automatically evaluates both SCOS and Snowpark API conversion paths, recommending the best fit for the user’s workload.

  • Two conversion modes: Snowpark Connect (SCOS) and Snowpark API are now available as distinct conversion targets.

  • Readiness score section: Displays the recommended conversion target with a color-coded compatibility score badge.

  • File compatibility breakdown: KPI cards showing fully compatible files, files requiring changes, and files with unsupported APIs.

  • Data distribution charts: Stacked bar charts showing input data sources and output data targets, grouped by platform.

  • Code dependencies: Interactive donut chart categorizing dependencies as Supported, Internal, or Unknown.

  • Issues by category: AI-enriched table grouping conversion issues into human-readable categories with file counts and key issue summaries. Displayed when a Snowflake session is active.

  • Execution summary: Project metadata, engine version information, and input/output folder references.

  • Performance: Assessment report data is cached for the duration of the session, enabling instant page loads when navigating back to results.

Changed

  • Card layout improvements: Assessment and conversion cards were refactored to improve the user experience.

  • Assessment workflow shortcut: If an assessment has already been completed, clicking “Analyze code” navigates directly to the results page instead of re-running the assessment.

  • Renamed connection action: “Activate Assistant” is now “Connect to Snowflake” across the application header and connection dialog for clearer terminology.

Version 3.1.0 (Feb 27, 2026)

Application & CLI Version: 3.1.0

Included SMA Core Version

  • Snowpark Conversion Core: 8.1.60

Included SnowConvert AI Version

Engine Release Notes

Added

  • Added support for processing files located in a hidden folder (such as .databricks when exported from the source). These files are now correctly processed by the SMA.

  • Added 245 new PySpark elements to the SMA mapping table with a NotSupported status. These entries correspond to functions and methods introduced in PySpark 3.3.0 through 4.1.x:

    • 219 functions (pyspark.sql.functions)

    • 4 DataFrame methods

    • 3 Column methods

    • 5 Session methods

    • 2 ReadWriter methods

    • 12 Types classes
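The category counts above account for all 245 new entries, which a quick sum confirms:

```python
# Breakdown of the 245 new NotSupported PySpark entries by category (from the list above).
counts = {
    "pyspark.sql.functions": 219,
    "DataFrame methods": 4,
    "Column methods": 3,
    "Session methods": 5,
    "ReadWriter methods": 2,
    "Types classes": 12,
}
total = sum(counts.values())
print(total)  # 245
```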

  • Added new EWIs for the following Pandas elements:

    • PNDSPY1019: pandas.core.arrays.datetimelike.DatelikeOps.strftime partial support

    • PNDSPY1020: pandas.core.arrays.datetimelike.TimelikeOps.ceil partial support

    • PNDSPY1021: pandas.core.arrays.datetimelike.TimelikeOps.floor partial support

    • PNDSPY1022: pandas.core.arrays.datetimelike.TimelikeOps.round partial support

    • PNDSPY1023: pandas.core.arrays.datetimes.DatetimeArray.day_name partial support

    • PNDSPY1024: pandas.core.arrays.datetimes.DatetimeArray.month_name partial support

    • PNDSPY1025: pandas.core.arrays.datetimes.DatetimeArray.tz_convert partial support

    • PNDSPY1026: pandas.core.arrays.datetimes.DatetimeArray.tz_localize partial support

    • PNDSPY1027: pandas.core.base.IndexOpsMixin.argmax partial support

    • PNDSPY1028: pandas.core.base.IndexOpsMixin.argmin partial support

    • PNDSPY1029: pandas.core.base.IndexOpsMixin.value_counts partial support

    • PNDSPY1030: pandas.core.frame.DataFrame.T partial support

    • PNDSPY1031: pandas.core.frame.DataFrame.__dataframe__ partial support

    • PNDSPY1032: pandas.core.frame.DataFrame.add partial support

    • PNDSPY1033: pandas.core.frame.DataFrame.align partial support

    • PNDSPY1034: pandas.core.frame.DataFrame.all partial support

    • PNDSPY1035: pandas.core.frame.DataFrame.any partial support

    • PNDSPY1036: pandas.core.frame.DataFrame.applymap partial support

    • PNDSPY1037: pandas.core.frame.DataFrame.asfreq partial support

    • PNDSPY1038: pandas.core.frame.DataFrame.astype partial support

    • PNDSPY1039: pandas.core.frame.DataFrame.at partial support

    • PNDSPY1040: pandas.core.frame.DataFrame.backfill partial support

    • PNDSPY1041: pandas.core.frame.DataFrame.bfill partial support

    • PNDSPY1042: pandas.core.frame.DataFrame.compare partial support

    • PNDSPY1043: pandas.core.frame.DataFrame.corr partial support

    • PNDSPY1044: pandas.core.frame.DataFrame.cumsum partial support

    • PNDSPY1045: pandas.core.frame.DataFrame.div partial support

    • PNDSPY1046: pandas.core.frame.DataFrame.divide partial support

    • PNDSPY1047: pandas.core.frame.DataFrame.dropna partial support

    • PNDSPY1048: pandas.core.frame.DataFrame.eq partial support

    • PNDSPY1049: pandas.core.frame.DataFrame.eval partial support

    • PNDSPY1050: pandas.core.frame.DataFrame.expanding partial support

    • PNDSPY1051: pandas.core.frame.DataFrame.ffill partial support

    • PNDSPY1052: pandas.core.frame.DataFrame.fillna partial support

    • PNDSPY1053: pandas.core.frame.DataFrame.floordiv partial support

    • PNDSPY1054: pandas.core.frame.DataFrame.from_records partial support

    • PNDSPY1055: pandas.core.frame.DataFrame.ge partial support

    • PNDSPY1056: pandas.core.frame.DataFrame.groupby partial support

    • PNDSPY1057: pandas.core.frame.DataFrame.gt partial support

    • PNDSPY1058: pandas.core.frame.DataFrame.idxmax partial support

    • PNDSPY1059: pandas.core.frame.DataFrame.idxmin partial support

    • PNDSPY1060: pandas.core.frame.DataFrame.info partial support

    • PNDSPY1061: pandas.core.frame.DataFrame.join partial support

    • PNDSPY1062: pandas.core.frame.DataFrame.le partial support

    • PNDSPY1063: pandas.core.frame.DataFrame.loc partial support

    • PNDSPY1064: pandas.core.frame.DataFrame.lt partial support

    • PNDSPY1065: pandas.core.frame.DataFrame.map partial support

    • PNDSPY1066: pandas.core.frame.DataFrame.mask partial support

    • PNDSPY1067: pandas.core.frame.DataFrame.melt partial support

    • PNDSPY1068: pandas.core.frame.DataFrame.merge partial support

    • PNDSPY1069: pandas.core.frame.DataFrame.mod partial support

    • PNDSPY1070: pandas.core.frame.DataFrame.mul partial support

    • PNDSPY1071: pandas.core.frame.DataFrame.multiply partial support

    • PNDSPY1072: pandas.core.frame.DataFrame.ne partial support

    • PNDSPY1073: pandas.core.frame.DataFrame.nlargest partial support

    • PNDSPY1074: pandas.core.frame.DataFrame.nsmallest partial support

    • PNDSPY1075: pandas.core.frame.DataFrame.nunique partial support

    • PNDSPY1076: pandas.core.frame.DataFrame.pad partial support

    • PNDSPY1077: pandas.core.frame.DataFrame.pct_change partial support

    • PNDSPY1078: pandas.core.frame.DataFrame.pivot partial support

    • PNDSPY1079: pandas.core.frame.DataFrame.pivot_table partial support

    • PNDSPY1080: pandas.core.frame.DataFrame.pow partial support

    • PNDSPY1081: pandas.core.frame.DataFrame.quantile partial support

    • PNDSPY1082: pandas.core.frame.DataFrame.radd partial support

    • PNDSPY1083: pandas.core.frame.DataFrame.rank partial support

    • PNDSPY1084: pandas.core.frame.DataFrame.rdiv partial support

    • PNDSPY1085: pandas.core.frame.DataFrame.reindex partial support

    • PNDSPY1086: pandas.core.frame.DataFrame.rename partial support

    • PNDSPY1087: pandas.core.frame.DataFrame.replace partial support

    • PNDSPY1088: pandas.core.frame.DataFrame.resample partial support

    • PNDSPY1089: pandas.core.frame.DataFrame.rfloordiv partial support

    • PNDSPY1090: pandas.core.frame.DataFrame.rmod partial support

    • PNDSPY1091: pandas.core.frame.DataFrame.rmul partial support

    • PNDSPY1092: pandas.core.frame.DataFrame.rolling partial support

    • PNDSPY1093: pandas.core.frame.DataFrame.round partial support

    • PNDSPY1094: pandas.core.frame.DataFrame.rpow partial support

    • PNDSPY1095: pandas.core.frame.DataFrame.rsub partial support

    • PNDSPY1096: pandas.core.frame.DataFrame.rtruediv partial support

    • PNDSPY1097: pandas.core.frame.DataFrame.sample partial support

    • PNDSPY1098: pandas.core.frame.DataFrame.shift partial support

    • PNDSPY1099: pandas.core.frame.DataFrame.skew partial support

    • PNDSPY1100: pandas.core.frame.DataFrame.sort_index partial support

    • PNDSPY1101: pandas.core.frame.DataFrame.sort_values partial support

    • PNDSPY1102: pandas.core.frame.DataFrame.stack partial support

    • PNDSPY1103: pandas.core.frame.DataFrame.std partial support

    • PNDSPY1104: pandas.core.frame.DataFrame.sub partial support

    • PNDSPY1105: pandas.core.frame.DataFrame.subtract partial support

    • PNDSPY1106: pandas.core.frame.DataFrame.to_csv partial support

    • PNDSPY1107: pandas.core.frame.DataFrame.transform partial support

    • PNDSPY1108: pandas.core.frame.DataFrame.transpose partial support

    • PNDSPY1109: pandas.core.frame.DataFrame.truediv partial support

    • PNDSPY1110: pandas.core.frame.DataFrame.tz_convert partial support

    • PNDSPY1111: pandas.core.frame.DataFrame.tz_localize partial support

    • PNDSPY1112: pandas.core.frame.DataFrame.unstack partial support

    • PNDSPY1113: pandas.core.frame.DataFrame.var partial support

    • PNDSPY1114: pandas.core.frame.DataFrame.where partial support

    • PNDSPY1115: pandas.core.generic.NDFrame.shift partial support

    • PNDSPY1116: pandas.core.groupby.generic.DataFrameGroupBy.agg partial support

    • PNDSPY1117: pandas.core.groupby.generic.DataFrameGroupBy.aggregate partial support

    • PNDSPY1118: pandas.core.groupby.generic.DataFrameGroupBy.fillna partial support

    • PNDSPY1119: pandas.core.groupby.generic.DataFrameGroupBy.idxmax partial support

    • PNDSPY1120: pandas.core.groupby.generic.DataFrameGroupBy.idxmin partial support

    • PNDSPY1121: pandas.core.groupby.generic.DataFrameGroupBy.transform partial support

    • PNDSPY1122: pandas.core.groupby.generic.DataFrameGroupBy.value_counts partial support

    • PNDSPY1123: pandas.core.groupby.groupby.BaseGroupBy.get_group partial support

    • PNDSPY1124: pandas.core.groupby.groupby.GroupBy.all partial support

    • PNDSPY1125: pandas.core.groupby.groupby.GroupBy.any partial support

    • PNDSPY1126: pandas.core.groupby.groupby.GroupBy.apply partial support

    • PNDSPY1127: pandas.core.groupby.groupby.GroupBy.bfill partial support

    • PNDSPY1128: pandas.core.groupby.groupby.GroupBy.ffill partial support

    • PNDSPY1129: pandas.core.groupby.groupby.GroupBy.first partial support

    • PNDSPY1130: pandas.core.groupby.groupby.GroupBy.last partial support

    • PNDSPY1131: pandas.core.groupby.groupby.GroupBy.pct_change partial support

    • PNDSPY1132: pandas.core.groupby.groupby.GroupBy.quantile partial support

    • PNDSPY1133: pandas.core.groupby.groupby.GroupBy.resample partial support

    • PNDSPY1134: pandas.core.groupby.groupby.GroupBy.rolling partial support

    • PNDSPY1135: pandas.core.groupby.groupby.GroupBy.shift partial support

    • PNDSPY1136: pandas.core.groupby.groupby.GroupBy.std partial support

    • PNDSPY1137: pandas.core.groupby.groupby.GroupBy.var partial support

    • PNDSPY1138: pandas.core.indexes.base.Index.all partial support

    • PNDSPY1139: pandas.core.indexes.base.Index.any partial support

    • PNDSPY1140: pandas.core.indexes.base.Index.nlevels partial support

    • PNDSPY1141: pandas.core.indexes.base.Index.reindex partial support

    • PNDSPY1142: pandas.core.indexes.base.Index.sort_values partial support

    • PNDSPY1143: pandas.core.indexes.datetimes.DatetimeIndex.ceil partial support

    • PNDSPY1144: pandas.core.indexes.datetimes.DatetimeIndex.day_name partial support

    • PNDSPY1145: pandas.core.indexes.datetimes.DatetimeIndex.floor partial support

    • PNDSPY1146: pandas.core.indexes.datetimes.DatetimeIndex.month_name partial support

    • PNDSPY1147: pandas.core.indexes.datetimes.DatetimeIndex.round partial support

    • PNDSPY1148: pandas.core.indexes.datetimes.DatetimeIndex.std partial support

    • PNDSPY1149: pandas.core.indexes.datetimes.DatetimeIndex.tz_convert partial support

    • PNDSPY1150: pandas.core.indexes.datetimes.DatetimeIndex.tz_localize partial support

    • PNDSPY1151: pandas.core.indexes.datetimes.bdate_range partial support

    • PNDSPY1152: pandas.core.indexes.datetimes.date_range partial support

    • PNDSPY1153: pandas.core.resample.Resampler.asfreq partial support

    • PNDSPY1154: pandas.core.resample.Resampler.bfill partial support

    • PNDSPY1155: pandas.core.resample.Resampler.ffill partial support

    • PNDSPY1156: pandas.core.resample.Resampler.fillna partial support

    • PNDSPY1157: pandas.core.resample.Resampler.first partial support

    • PNDSPY1158: pandas.core.resample.Resampler.last partial support

    • PNDSPY1159: pandas.core.resample.Resampler.quantile partial support

    • PNDSPY1160: pandas.core.resample.Resampler.std partial support

    • PNDSPY1161: pandas.core.resample.Resampler.var partial support

    • PNDSPY1162: pandas.core.reshape.concat.concat partial support

    • PNDSPY1163: pandas.core.reshape.melt.melt partial support

    • PNDSPY1164: pandas.core.reshape.merge.merge partial support

    • PNDSPY1165: pandas.core.reshape.merge.merge_asof partial support

    • PNDSPY1166: pandas.core.reshape.pivot.crosstab partial support

    • PNDSPY1167: pandas.core.reshape.pivot.pivot partial support

    • PNDSPY1168: pandas.core.reshape.pivot.pivot_table partial support

    • PNDSPY1169: pandas.core.reshape.tile.cut partial support

    • PNDSPY1170: pandas.core.reshape.tile.qcut partial support

    • PNDSPY1171: pandas.core.series.Series.add partial support

    • PNDSPY1172: pandas.core.series.Series.all partial support

    • PNDSPY1173: pandas.core.series.Series.any partial support

    • PNDSPY1174: pandas.core.series.Series.case_when partial support

    • PNDSPY1175: pandas.core.series.Series.compare partial support

    • PNDSPY1176: pandas.core.series.Series.cumsum partial support

    • PNDSPY1177: pandas.core.series.Series.div partial support

    • PNDSPY1178: pandas.core.series.Series.divide partial support

    • PNDSPY1179: pandas.core.series.Series.dropna partial support

    • PNDSPY1180: pandas.core.series.Series.eq partial support

    • PNDSPY1181: pandas.core.series.Series.flags partial support

    • PNDSPY1182: pandas.core.series.Series.floordiv partial support

    • PNDSPY1183: pandas.core.series.Series.ge partial support

    • PNDSPY1184: pandas.core.series.Series.groupby partial support

    • PNDSPY1185: pandas.core.series.Series.gt partial support

    • PNDSPY1186: pandas.core.series.Series.le partial support

    • PNDSPY1187: pandas.core.series.Series.lt partial support

    • PNDSPY1188: pandas.core.series.Series.map partial support

    • PNDSPY1189: pandas.core.series.Series.mod partial support

    • PNDSPY1190: pandas.core.series.Series.mul partial support

    • PNDSPY1191: pandas.core.series.Series.multiply partial support

    • PNDSPY1192: pandas.core.series.Series.ne partial support

    • PNDSPY1193: pandas.core.series.Series.nlargest partial support

    • PNDSPY1194: pandas.core.series.Series.nsmallest partial support

    • PNDSPY1195: pandas.core.series.Series.pow partial support

    • PNDSPY1196: pandas.core.series.Series.quantile partial support

    • PNDSPY1197: pandas.core.series.Series.radd partial support

    • PNDSPY1198: pandas.core.series.Series.rdiv partial support

    • PNDSPY1199: pandas.core.series.Series.reindex partial support

    • PNDSPY1200: pandas.core.series.Series.rename partial support

    • PNDSPY1201: pandas.core.series.Series.rfloordiv partial support

    • PNDSPY1202: pandas.core.series.Series.rmod partial support

    • PNDSPY1203: pandas.core.series.Series.rmul partial support

    • PNDSPY1204: pandas.core.series.Series.rpow partial support

    • PNDSPY1205: pandas.core.series.Series.rsub partial support

    • PNDSPY1206: pandas.core.series.Series.rtruediv partial support

    • PNDSPY1207: pandas.core.series.Series.skew partial support

    • PNDSPY1208: pandas.core.series.Series.sort_index partial support

    • PNDSPY1209: pandas.core.series.Series.sort_values partial support

    • PNDSPY1210: pandas.core.series.Series.std partial support

    • PNDSPY1211: pandas.core.series.Series.sub partial support

    • PNDSPY1212: pandas.core.series.Series.subtract partial support

    • PNDSPY1213: pandas.core.series.Series.truediv partial support

    • PNDSPY1214: pandas.core.series.Series.unstack partial support

    • PNDSPY1215: pandas.core.series.Series.var partial support

    • PNDSPY1216: pandas.core.strings.accessor.StringMethods.__getitem__ partial support

    • PNDSPY1217: pandas.core.strings.accessor.StringMethods.contains partial support

    • PNDSPY1218: pandas.core.strings.accessor.StringMethods.endswith partial support

    • PNDSPY1219: pandas.core.strings.accessor.StringMethods.get partial support

    • PNDSPY1220: pandas.core.strings.accessor.StringMethods.isdigit partial support

    • PNDSPY1221: pandas.core.strings.accessor.StringMethods.len partial support

    • PNDSPY1222: pandas.core.strings.accessor.StringMethods.lstrip partial support

    • PNDSPY1223: pandas.core.strings.accessor.StringMethods.replace partial support

    • PNDSPY1224: pandas.core.strings.accessor.StringMethods.rstrip partial support

    • PNDSPY1225: pandas.core.strings.accessor.StringMethods.slice partial support

    • PNDSPY1226: pandas.core.strings.accessor.StringMethods.split partial support

    • PNDSPY1227: pandas.core.strings.accessor.StringMethods.startswith partial support

    • PNDSPY1228: pandas.core.strings.accessor.StringMethods.strip partial support

    • PNDSPY1229: pandas.core.strings.accessor.StringMethods.translate partial support

    • PNDSPY1230: pandas.core.tools.datetimes.to_datetime partial support

    • PNDSPY1231: pandas.core.tools.numeric.to_numeric partial support

    • PNDSPY1232: pandas.core.tools.timedeltas.to_timedelta partial support

    • PNDSPY1233: pandas.core.window.ewm.ExponentialMovingWindow.corr partial support

    • PNDSPY1234: pandas.core.window.ewm.ExponentialMovingWindow.mean partial support

    • PNDSPY1235: pandas.core.window.ewm.ExponentialMovingWindow.std partial support

    • PNDSPY1236: pandas.core.window.ewm.ExponentialMovingWindow.sum partial support

    • PNDSPY1237: pandas.core.window.ewm.ExponentialMovingWindow.var partial support

    • PNDSPY1238: pandas.core.window.expanding.Expanding.corr partial support

    • PNDSPY1239: pandas.core.window.expanding.Expanding.count partial support

    • PNDSPY1240: pandas.core.window.expanding.Expanding.max partial support

    • PNDSPY1241: pandas.core.window.expanding.Expanding.mean partial support

    • PNDSPY1242: pandas.core.window.expanding.Expanding.min partial support

    • PNDSPY1243: pandas.core.window.expanding.Expanding.sem partial support

    • PNDSPY1244: pandas.core.window.expanding.Expanding.std partial support

    • PNDSPY1245: pandas.core.window.expanding.Expanding.sum partial support

    • PNDSPY1246: pandas.core.window.expanding.Expanding.var partial support

    • PNDSPY1247: pandas.core.window.rolling.Rolling.corr partial support

    • PNDSPY1248: pandas.core.window.rolling.Rolling.count partial support

    • PNDSPY1249: pandas.core.window.rolling.Rolling.max partial support

    • PNDSPY1250: pandas.core.window.rolling.Rolling.mean partial support

    • PNDSPY1251: pandas.core.window.rolling.Rolling.min partial support

    • PNDSPY1252: pandas.core.window.rolling.Rolling.sem partial support

    • PNDSPY1253: pandas.core.window.rolling.Rolling.std partial support

    • PNDSPY1254: pandas.core.window.rolling.Rolling.sum partial support

    • PNDSPY1255: pandas.core.window.rolling.Rolling.var partial support

    • PNDSPY1256: pandas.core.window.rolling.Window.mean partial support

    • PNDSPY1257: pandas.core.window.rolling.Window.std partial support

    • PNDSPY1258: pandas.core.window.rolling.Window.sum partial support

    • PNDSPY1259: pandas.core.window.rolling.Window.var partial support

    • PNDSPY1260: pandas.io.json._json.read_json partial support

    • PNDSPY1261: pandas.io.parquet.read_parquet partial support

    • PNDSPY1262: pandas.io.parsers.readers.read_csv partial support
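Each EWI code above identifies one partially supported Pandas element. As an illustrative sketch (not an SMA artifact), a report generator could keep the table as a simple code-to-element map:

```python
# A small subset of the new Pandas EWI codes listed above, keyed by code.
# The full range introduced in this release is PNDSPY1019 through PNDSPY1262.
PANDAS_EWIS = {
    "PNDSPY1019": "pandas.core.arrays.datetimelike.DatelikeOps.strftime",
    "PNDSPY1056": "pandas.core.frame.DataFrame.groupby",
    "PNDSPY1262": "pandas.io.parsers.readers.read_csv",
}

def describe_ewi(code: str) -> str:
    """Render a human-readable line for an EWI code, as a report might show it."""
    element = PANDAS_EWIS.get(code)
    if element is None:
        return f"{code}: unknown code"
    return f"{code}: {element} partial support"

print(describe_ewi("PNDSPY1056"))
```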

Changed

  • Updated the sfutils library implementation to support multiple levels of notebook calls.

  • Upgraded supported Snowpark Python version from v1.41.0 to v1.43.0. This upgrade includes the following mapping status changes:

    NotSupported → Direct (8 functions):

    • pyspark.sql.functions.bool_and → snowflake.snowpark.functions.booland_agg

    • pyspark.sql.functions.bucket → snowflake.snowpark.functions.bucket

    • pyspark.sql.functions.cot → snowflake.snowpark.functions.cot

    • pyspark.sql.functions.day → snowflake.snowpark.functions.day

    • pyspark.sql.functions.every → snowflake.snowpark.functions.booland_agg

    • pyspark.sql.functions.pi → snowflake.snowpark.functions.pi

    • pyspark.sql.functions.width_bucket → snowflake.snowpark.functions.width_bucket

    • pyspark.sql.functions.zeroifnull → snowflake.snowpark.functions.zeroifnull

NotSupported → Rename (1 function):

  • pyspark.sql.functions.uuid → snowflake.snowpark.functions.uuid_string

  • Upgraded supported Snowpark Pandas version from v1.41.0 to v1.43.0.

  • The mapping status of the following Pandas elements was updated:

    NotSupported → Direct (56 functions):

    • pandas.core.arrays.datetimes.DatetimeArray.date

    • pandas.core.arrays.datetimes.DatetimeArray.normalize

    • pandas.core.arrays.datetimes.DatetimeArray.time

    • pandas.core.base.IndexOpsMixin.T

    • pandas.core.base.IndexOpsMixin.empty

    • pandas.core.base.IndexOpsMixin.is_monotonic_decreasing

    • pandas.core.base.IndexOpsMixin.is_monotonic_increasing

    • pandas.core.base.IndexOpsMixin.is_unique

    • pandas.core.base.IndexOpsMixin.item

    • pandas.core.base.IndexOpsMixin.ndim

    • pandas.core.base.IndexOpsMixin.nunique

    • pandas.core.base.IndexOpsMixin.shape

    • pandas.core.base.IndexOpsMixin.size

    • pandas.core.base.IndexOpsMixin.to_list

    • pandas.core.base.IndexOpsMixin.to_numpy

    • pandas.core.base.IndexOpsMixin.tolist

    • pandas.core.base.IndexOpsMixin.transpose

    • pandas.core.generic.NDFrame.abs

    • pandas.core.generic.NDFrame.add_prefix

    • pandas.core.generic.NDFrame.add_suffix

    • pandas.core.generic.NDFrame.attrs

    • pandas.core.generic.NDFrame.copy

    • pandas.core.generic.NDFrame.describe

    • pandas.core.generic.NDFrame.dtypes

    • pandas.core.generic.NDFrame.equals

    • pandas.core.generic.NDFrame.first

    • pandas.core.generic.NDFrame.first_valid_index

    • pandas.core.generic.NDFrame.get

    • pandas.core.generic.NDFrame.head

    • pandas.core.generic.NDFrame.keys

    • pandas.core.generic.NDFrame.last

    • pandas.core.generic.NDFrame.last_valid_index

    • pandas.core.generic.NDFrame.ndim

    • pandas.core.generic.NDFrame.size

    • pandas.core.generic.NDFrame.squeeze

    • pandas.core.generic.NDFrame.tail

    • pandas.core.generic.NDFrame.take

    • pandas.core.generic.NDFrame.to_excel

    • pandas.core.groupby.groupby.BaseGroupBy.groups

    • pandas.core.groupby.groupby.GroupBy.count

    • pandas.core.groupby.groupby.GroupBy.cumcount

    • pandas.core.groupby.groupby.GroupBy.cummax

    • pandas.core.groupby.groupby.GroupBy.cummin

    • pandas.core.groupby.groupby.GroupBy.cumsum

    • pandas.core.groupby.groupby.GroupBy.head

    • pandas.core.groupby.groupby.GroupBy.max

    • pandas.core.groupby.groupby.GroupBy.mean

    • pandas.core.groupby.groupby.GroupBy.median

    • pandas.core.groupby.groupby.GroupBy.min

    • pandas.core.groupby.groupby.GroupBy.rank

    • pandas.core.groupby.groupby.GroupBy.size

    • pandas.core.groupby.groupby.GroupBy.tail

    • pandas.core.indexes.datetimes.DatetimeIndex.year

    • pandas.core.indexing.IndexingMixin.iat

    • pandas.core.indexing.IndexingMixin.iloc

    • pandas.core.series.Series.first

NotSupported → Partial (70 functions):

  • pandas.core.arrays.datetimelike.DatelikeOps.strftime (PNDSPY1019)

  • pandas.core.arrays.datetimelike.TimelikeOps.ceil (PNDSPY1020)

  • pandas.core.arrays.datetimelike.TimelikeOps.floor (PNDSPY1021)

  • pandas.core.arrays.datetimelike.TimelikeOps.round (PNDSPY1022)

  • pandas.core.arrays.datetimes.DatetimeArray.day_name (PNDSPY1023)

  • pandas.core.arrays.datetimes.DatetimeArray.month_name (PNDSPY1024)

  • pandas.core.arrays.datetimes.DatetimeArray.tz_convert (PNDSPY1025)

  • pandas.core.arrays.datetimes.DatetimeArray.tz_localize (PNDSPY1026)

  • pandas.core.base.IndexOpsMixin.argmax (PNDSPY1027)

  • pandas.core.base.IndexOpsMixin.argmin (PNDSPY1028)

  • pandas.core.base.IndexOpsMixin.value_counts (PNDSPY1029)

  • pandas.core.frame.DataFrame.eval (PNDSPY1049)

  • pandas.core.frame.DataFrame.expanding (PNDSPY1050)

  • pandas.core.frame.DataFrame.melt (PNDSPY1067)

  • pandas.core.frame.DataFrame.pct_change (PNDSPY1077)

  • pandas.core.frame.DataFrame.quantile (PNDSPY1081)

  • pandas.core.frame.DataFrame.std (PNDSPY1103)

  • pandas.core.generic.NDFrame.asfreq (PNDSPY1037)

  • pandas.core.generic.NDFrame.fillna (PNDSPY1052)

  • pandas.core.generic.NDFrame.mask (PNDSPY1066)

  • pandas.core.generic.NDFrame.pct_change (PNDSPY1077)

  • pandas.core.generic.NDFrame.rank (PNDSPY1083)

  • pandas.core.generic.NDFrame.replace (PNDSPY1087)

  • pandas.core.generic.NDFrame.shift (PNDSPY1115)

  • pandas.core.generic.NDFrame.to_csv (PNDSPY1106)

  • pandas.core.generic.NDFrame.tz_convert (PNDSPY1110)

  • pandas.core.generic.NDFrame.tz_localize (PNDSPY1111)

  • pandas.core.generic.NDFrame.where (PNDSPY1114)

  • pandas.core.groupby.generic.DataFrameGroupBy.transform (PNDSPY1121)

  • pandas.core.groupby.generic.DataFrameGroupBy.value_counts (PNDSPY1122)

  • pandas.core.groupby.groupby.BaseGroupBy.get_group (PNDSPY1123)

  • pandas.core.groupby.groupby.GroupBy.bfill (PNDSPY1127)

  • pandas.core.groupby.groupby.GroupBy.first (PNDSPY1129)

  • pandas.core.groupby.groupby.GroupBy.last (PNDSPY1130)

  • pandas.core.groupby.groupby.GroupBy.quantile (PNDSPY1132)

  • pandas.core.groupby.groupby.GroupBy.resample (PNDSPY1133)

  • pandas.core.groupby.groupby.GroupBy.rolling (PNDSPY1134)

  • pandas.core.groupby.groupby.GroupBy.shift (PNDSPY1135)

  • pandas.core.groupby.groupby.GroupBy.std (PNDSPY1136)

  • pandas.core.groupby.groupby.GroupBy.var (PNDSPY1137)

  • pandas.core.indexes.base.Index.nlevels (PNDSPY1140)

  • pandas.core.indexes.base.Index.sort_values (PNDSPY1142)

  • pandas.core.indexing.IndexingMixin.at (PNDSPY1039)

  • pandas.core.indexing.IndexingMixin.loc (PNDSPY1063)

  • pandas.core.resample.Resampler.ffill (PNDSPY1155)

  • pandas.core.resample.Resampler.first (PNDSPY1157)

  • pandas.core.resample.Resampler.last (PNDSPY1158)

  • pandas.core.resample.Resampler.std (PNDSPY1160)

  • pandas.core.resample.Resampler.var (PNDSPY1161)

  • pandas.core.reshape.merge.merge_asof (PNDSPY1165)

  • pandas.core.reshape.pivot.pivot (PNDSPY1167)

  • pandas.core.series.Series.expanding (PNDSPY1050)

  • pandas.core.series.Series.pct_change (PNDSPY1077)

  • pandas.core.window.ewm.ExponentialMovingWindow.corr (PNDSPY1233)

  • pandas.core.window.ewm.ExponentialMovingWindow.mean (PNDSPY1234)

  • pandas.core.window.ewm.ExponentialMovingWindow.std (PNDSPY1235)

  • pandas.core.window.ewm.ExponentialMovingWindow.sum (PNDSPY1236)

  • pandas.core.window.ewm.ExponentialMovingWindow.var (PNDSPY1237)

  • pandas.core.window.expanding.Expanding.corr (PNDSPY1238)

  • pandas.core.window.expanding.Expanding.max (PNDSPY1240)

  • pandas.core.window.expanding.Expanding.mean (PNDSPY1241)

  • pandas.core.window.expanding.Expanding.min (PNDSPY1242)

  • pandas.core.window.expanding.Expanding.sem (PNDSPY1243)

  • pandas.core.window.expanding.Expanding.std (PNDSPY1244)

  • pandas.core.window.expanding.Expanding.sum (PNDSPY1245)

  • pandas.core.window.expanding.Expanding.var (PNDSPY1246)

  • pandas.core.window.rolling.Window.mean (PNDSPY1256)

  • pandas.core.window.rolling.Window.std (PNDSPY1257)

  • pandas.core.window.rolling.Window.sum (PNDSPY1258)

  • pandas.core.window.rolling.Window.var (PNDSPY1259)

(new) → Direct (74 functions):

  • pandas.core.arrays.datetimes.DatetimeArray.day

  • pandas.core.arrays.datetimes.DatetimeArray.day_of_week

  • pandas.core.arrays.datetimes.DatetimeArray.day_of_year

  • pandas.core.arrays.datetimes.DatetimeArray.dayofweek

  • pandas.core.arrays.datetimes.DatetimeArray.dayofyear

  • pandas.core.arrays.datetimes.DatetimeArray.days_in_month

  • pandas.core.arrays.datetimes.DatetimeArray.daysinmonth

  • pandas.core.arrays.datetimes.DatetimeArray.hour

  • pandas.core.arrays.datetimes.DatetimeArray.is_leap_year

  • pandas.core.arrays.datetimes.DatetimeArray.is_month_end

  • pandas.core.arrays.datetimes.DatetimeArray.is_month_start

  • pandas.core.arrays.datetimes.DatetimeArray.is_quarter_end

  • pandas.core.arrays.datetimes.DatetimeArray.is_quarter_start

  • pandas.core.arrays.datetimes.DatetimeArray.is_year_end

  • pandas.core.arrays.datetimes.DatetimeArray.is_year_start

  • pandas.core.arrays.datetimes.DatetimeArray.isocalendar

  • pandas.core.arrays.datetimes.DatetimeArray.microsecond

  • pandas.core.arrays.datetimes.DatetimeArray.minute

  • pandas.core.arrays.datetimes.DatetimeArray.month

  • pandas.core.arrays.datetimes.DatetimeArray.nanosecond

  • pandas.core.arrays.datetimes.DatetimeArray.quarter

  • pandas.core.arrays.datetimes.DatetimeArray.second

  • pandas.core.arrays.datetimes.DatetimeArray.weekday

  • pandas.core.arrays.datetimes.DatetimeArray.year

  • pandas.core.arrays.timedeltas.TimedeltaArray.days

  • pandas.core.arrays.timedeltas.TimedeltaArray.microseconds

  • pandas.core.arrays.timedeltas.TimedeltaArray.nanoseconds

  • pandas.core.arrays.timedeltas.TimedeltaArray.seconds

  • pandas.core.frame.DataFrame.flags

  • pandas.core.generic.NDFrame.flags

  • pandas.core.generic.NDFrame.rename_axis

  • pandas.core.groupby.groupby.BaseGroupBy.__iter__

  • pandas.core.groupby.groupby.BaseGroupBy.__len__

  • pandas.core.groupby.groupby.GroupBy.sum

  • pandas.core.indexes.base.Index.T

  • pandas.core.indexes.datetimes.DatetimeIndex.date

  • pandas.core.indexes.datetimes.DatetimeIndex.day

  • pandas.core.indexes.datetimes.DatetimeIndex.day_of_week

  • pandas.core.indexes.datetimes.DatetimeIndex.day_of_year

  • pandas.core.indexes.datetimes.DatetimeIndex.dayofweek

  • pandas.core.indexes.datetimes.DatetimeIndex.dayofyear

  • pandas.core.indexes.datetimes.DatetimeIndex.hour

  • pandas.core.indexes.datetimes.DatetimeIndex.is_month_end

  • pandas.core.indexes.datetimes.DatetimeIndex.is_month_start

  • pandas.core.indexes.datetimes.DatetimeIndex.mean

  • pandas.core.indexes.datetimes.DatetimeIndex.microsecond

  • pandas.core.indexes.datetimes.DatetimeIndex.minute

  • pandas.core.indexes.datetimes.DatetimeIndex.month

  • pandas.core.indexes.datetimes.DatetimeIndex.nanosecond

  • pandas.core.indexes.datetimes.DatetimeIndex.normalize

  • pandas.core.indexes.datetimes.DatetimeIndex.quarter

  • pandas.core.indexes.datetimes.DatetimeIndex.second

  • pandas.core.indexes.timedeltas.TimedeltaIndex.total_seconds

  • pandas.core.series.Series.info (PNDSPY1018)

  • pandas.core.series.Series.tolist

  • pandas.core.strings.accessor.StringMethods.capitalize

  • pandas.core.strings.accessor.StringMethods.center

  • pandas.core.strings.accessor.StringMethods.count

  • pandas.core.strings.accessor.StringMethods.islower

  • pandas.core.strings.accessor.StringMethods.istitle

  • pandas.core.strings.accessor.StringMethods.isupper

  • pandas.core.strings.accessor.StringMethods.ljust

  • pandas.core.strings.accessor.StringMethods.lower

  • pandas.core.strings.accessor.StringMethods.match

  • pandas.core.strings.accessor.StringMethods.pad

  • pandas.core.strings.accessor.StringMethods.rjust

  • pandas.core.strings.accessor.StringMethods.title

  • pandas.core.strings.accessor.StringMethods.upper

  • snowpark_pandas.read_snowflake

  • snowpark_pandas.to_dynamic_table

  • snowpark_pandas.to_iceberg

  • snowpark_pandas.to_pandas

  • snowpark_pandas.to_snowflake

  • snowpark_pandas.to_view

(new) → Partial (47 functions):

  • pandas.core.frame.DataFrame.__dataframe__ (PNDSPY1031)

  • pandas.core.frame.DataFrame.pad (PNDSPY1076)

  • pandas.core.generic.NDFrame.align (PNDSPY1033)

  • pandas.core.generic.NDFrame.astype (PNDSPY1038)

  • pandas.core.generic.NDFrame.expanding (PNDSPY1050)

  • pandas.core.generic.NDFrame.ffill (PNDSPY1051)

  • pandas.core.generic.NDFrame.interpolate (PNDSPY1015)

  • pandas.core.generic.NDFrame.pad (PNDSPY1076)

  • pandas.core.generic.NDFrame.resample (PNDSPY1088)

  • pandas.core.generic.NDFrame.rolling (PNDSPY1092)

  • pandas.core.generic.NDFrame.sample (PNDSPY1097)

  • pandas.core.groupby.groupby.GroupBy.all (PNDSPY1124)

  • pandas.core.groupby.groupby.GroupBy.any (PNDSPY1125)

  • pandas.core.groupby.groupby.GroupBy.apply (PNDSPY1126)

  • pandas.core.indexes.base.Index.all (PNDSPY1138)

  • pandas.core.indexes.base.Index.any (PNDSPY1139)

  • pandas.core.indexes.base.Index.reindex (PNDSPY1141)

  • pandas.core.indexes.base.Index.value_counts (PNDSPY1029)

  • pandas.core.indexes.datetimes.DatetimeIndex.tz_convert (PNDSPY1149)

  • pandas.core.indexes.datetimes.DatetimeIndex.tz_localize (PNDSPY1150)

  • pandas.core.series.Series.backfill (PNDSPY1040)

  • pandas.core.series.Series.bfill (PNDSPY1041)

  • pandas.core.series.Series.flags (PNDSPY1181)

  • pandas.core.series.Series.pad (PNDSPY1076)

  • pandas.core.strings.accessor.StringMethods.__getitem__ (PNDSPY1216)

  • pandas.core.strings.accessor.StringMethods.contains (PNDSPY1217)

  • pandas.core.strings.accessor.StringMethods.endswith (PNDSPY1218)

  • pandas.core.strings.accessor.StringMethods.get (PNDSPY1219)

  • pandas.core.strings.accessor.StringMethods.isdigit (PNDSPY1220)

  • pandas.core.strings.accessor.StringMethods.len (PNDSPY1221)

  • pandas.core.strings.accessor.StringMethods.lstrip (PNDSPY1222)

  • pandas.core.strings.accessor.StringMethods.replace (PNDSPY1223)

  • pandas.core.strings.accessor.StringMethods.rstrip (PNDSPY1224)

  • pandas.core.strings.accessor.StringMethods.slice (PNDSPY1225)

  • pandas.core.strings.accessor.StringMethods.split (PNDSPY1226)

  • pandas.core.strings.accessor.StringMethods.startswith (PNDSPY1227)

  • pandas.core.strings.accessor.StringMethods.strip (PNDSPY1228)

  • pandas.core.strings.accessor.StringMethods.translate (PNDSPY1229)

  • pandas.core.window.rolling.Rolling.corr (PNDSPY1247)

  • pandas.core.window.rolling.Rolling.max (PNDSPY1249)

  • pandas.core.window.rolling.Rolling.mean (PNDSPY1250)

  • pandas.core.window.rolling.Rolling.min (PNDSPY1251)

  • pandas.core.window.rolling.Rolling.sem (PNDSPY1252)

  • pandas.core.window.rolling.Rolling.std (PNDSPY1253)

  • pandas.core.window.rolling.Rolling.sum (PNDSPY1254)

  • pandas.core.window.rolling.Rolling.var (PNDSPY1255)

  • pandas.io.json._json.read_json (PNDSPY1260)

Direct → Partial (12 functions):

  • pandas.core.frame.DataFrame.T (PNDSPY1030)

  • pandas.core.frame.DataFrame.any (PNDSPY1035)

  • pandas.core.frame.DataFrame.where (PNDSPY1114)

  • pandas.core.groupby.generic.DataFrameGroupBy.agg (PNDSPY1116)

  • pandas.core.indexes.datetimes.DatetimeIndex.round (PNDSPY1147)

  • pandas.core.reshape.tile.qcut (PNDSPY1170)

  • pandas.core.series.Series.astype (PNDSPY1038)

  • pandas.core.series.Series.groupby (PNDSPY1184)

  • pandas.core.series.Series.le (PNDSPY1186)

  • pandas.core.series.Series.loc (PNDSPY1063)

  • pandas.io.parquet.read_parquet (PNDSPY1261)

  • pandas.io.parsers.readers.read_csv (PNDSPY1262)

Partial → Direct (5 functions):

  • pandas.core.indexes.datetimes.DatetimeIndex.is_leap_year

  • pandas.core.indexes.datetimes.DatetimeIndex.is_quarter_end

  • pandas.core.indexes.datetimes.DatetimeIndex.is_quarter_start

  • pandas.core.indexes.datetimes.DatetimeIndex.is_year_end

  • pandas.core.indexes.datetimes.DatetimeIndex.is_year_start

Rename → Partial (4 functions):

  • pandas.core.frame.DataFrame.divide (PNDSPY1046)

  • pandas.core.frame.DataFrame.multiply (PNDSPY1071)

  • pandas.core.frame.DataFrame.subtract (PNDSPY1105)

  • pandas.core.series.Series.divide (PNDSPY1178)

Fixed

  • Fixed the “How to read through the scores” link on the assessment and conversion results page to ensure it correctly opens the readiness score documentation.

Version 3.0.0 (Feb 12, 2026)

Application & CLI Version: 3.0.0

Included SMA Core Version

  • Snowpark Conversion Core: 8.1.55

Engine Release Notes

Improvements

  • License-Free Conversion Mode: A license or access code is no longer required to run SMA in Conversion mode.

  • Project Options Page: A new Project Options page has been introduced to present the available workflows in the application, including “Code Analysis and Conversion”.

  • Technical Discovery Relocation: The Technical Discovery section has been moved to the Project Creation page for a more streamlined project setup experience.

  • Simplified Conversion Setup: The Conversion Setup page has been updated and no longer requires a license or access code.

  • Project File Extension: The project file extension has changed from .snowma to .snowct.

  • Updated User Interface: The user interface has been refreshed to align with the SnowConvert AI look and feel.

Version 2.11.1 (Jan 30, 2026)

Application & CLI Version: 2.11.1

Included SMA Core Version

  • Snowpark Conversion Core: 8.1.55

Engine Release Notes

Added

  • Added SQL Language to the DetailedReport doc file.

  • Added SQL configuration cell at the beginning of a converted Databricks-to-Jupyter transformation to be compatible with Snowflake notebooks.

Changed

  • Updated the %run magic command transformation to append .ipynb extension to notebook paths.

    • For unquoted paths: %run ./myNotebook transforms to %run ./myNotebook.ipynb

    • For quoted paths: %run "./myNotebook" transforms to %run "./myNotebook.ipynb"

  • Scala code in notebook cells is now commented out inside a Python cell during a notebook migration.

  • Updated the conversion of dbutils.run to the sfutils.notebook.run function to handle notebook execution calls.

  • Bumped the supported versions of Snowpark Python API and Snowpark Pandas API from 1.40.0 to 1.41.0.

  • Updated the mapping status for the following Pandas functions from NotSupported to Partial:

    • pandas.core.frame.DataFrame.agg → modin.pandas.DataFrame.agg

    • pandas.core.frame.DataFrame.interpolate → modin.pandas.DataFrame.interpolate

    • pandas.core.reshape.encoding.get_dummies → modin.pandas.general.get_dummies

    • pandas.core.series.Series.agg → modin.pandas.Series.agg

    • pandas.core.series.Series.interpolate → modin.pandas.Series.interpolate
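The %run path rewrite described above can be sketched as follows. This is an illustrative sketch only, not the SMA engine's actual implementation; it simply appends the .ipynb extension to quoted or unquoted notebook paths.

```python
# Illustrative sketch (not the engine's code) of the %run magic rewrite:
# append .ipynb to the notebook path, preserving quoting style.
def rewrite_run_magic(line: str) -> str:
    magic, path = line.split(maxsplit=1)
    if path.startswith('"') and path.endswith('"'):
        # Quoted path: re-quote with the extension inside the quotes.
        return f'{magic} "{path[1:-1]}.ipynb"'
    # Unquoted path: append the extension directly.
    return f"{magic} {path}.ipynb"

print(rewrite_run_magic("%run ./myNotebook"))    # %run ./myNotebook.ipynb
print(rewrite_run_magic('%run "./myNotebook"'))  # %run "./myNotebook.ipynb"
```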

Fixed

  • The SMA now renames .hql (Hive SQL) files to .sql after conversion.

  • When converting a DBX Scala notebook to Snowflake, the implicit cell is now a Python cell with an EWI; the Scala code is commented out.

  • Python cells from DBX SQL Notebooks will preserve the language metadata.
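The .hql-to-.sql rename described above amounts to a file-extension swap; a minimal illustration (the path is hypothetical):

```python
from pathlib import PurePosixPath

# Illustrative only: switch a converted Hive SQL file's extension
# from .hql to .sql, as the SMA now does for output files.
converted = PurePosixPath("output/queries/report.hql")
renamed = converted.with_suffix(".sql")
print(renamed)  # output/queries/report.sql
```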

Removed

  • Removed the previous %run transformation in DBX notebooks that generated spark.sql("EXECUTE NOTEBOOK ...") SQL statements.

  • The SnowConvert MissingObjects report was absorbed by the MissingObjectReference report. The MissingObjects report will no longer be generated.

Version 2.11.0 (Jan 9, 2026)

Application & CLI Version: 2.11.0

Included SMA Core Version

  • Snowpark Conversion Core: 8.1.43

Included SnowConvert AI Version

Engine Release Notes

Added

  • Enhanced Notebook Setup for Assessment: When running an assessment on Databricks notebooks, a Snowpark Connect session is now automatically added to the first cell to simplify your setup.

  • Automatic Snowpark Connect Conversion: The tool now automatically converts both SparkSession and SparkContext initializations in Python code to their equivalent Snowpark Connect sessions.

  • Improved Error Identification:

    • Added a new warning code, SPRKCNTPY4000, to clearly flag any SparkContext elements that are not yet supported by Snowpark Connect.

    • The tool now automatically detects and flags unsupported Databricks utility calls (dbutils API) with the new warning code SPRKDBX1004 during conversion.

  • More Detailed Reporting:

    • The SparkUsagesInventory.csv report now includes a new column called IS_SNOWPARK_CONNECT_TOOL_SUPPORTED.

    • This new column clearly indicates whether a Spark element is supported directly by Snowpark Connect or supported through an SMA transformation.

    • The Snowpark Connect readiness score calculation has been updated to use the new IS_SNOWPARK_CONNECT_TOOL_SUPPORTED column in the SparkUsagesInventory.csv report.

  • Next-Generation Notebook Support: Enhanced support for the VNext Snowflake Notebooks format when converting Databricks or Jupyter notebooks.

    • Full VNext Compatibility: The SMA can now generate output files that fully adhere to the VNext Snowflake Notebooks standard, regardless of whether the source was a Databricks or a previous-generation Jupyter notebook.

    • Smarter Language Handling: The conversion engine has been updated with enhanced logic to accurately detect and manage the specific language (such as Python or Scala) within each individual notebook cell. This allows for more precise and reliable cell-by-cell conversion.

    • Enhanced Metadata for Cells: The process now correctly incorporates necessary language and type metadata at the cell level during generation, which is essential for VNext Notebooks to function as expected.
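The new IS_SNOWPARK_CONNECT_TOOL_SUPPORTED column described above can be consumed as in this hypothetical sketch; the ELEMENT column name and value format are illustrative assumptions, not the SMA's documented schema.

```python
import csv
import io

# Hypothetical sketch: split SparkUsagesInventory.csv rows by the new
# IS_SNOWPARK_CONNECT_TOOL_SUPPORTED column. Column name ELEMENT and the
# True/False value format are assumptions for illustration.
inventory_csv = (
    "ELEMENT,IS_SNOWPARK_CONNECT_TOOL_SUPPORTED\n"
    "pyspark.sql.functions.col,True\n"
    "pyspark.sql.streaming.DataStreamWriter.start,False\n"
)
rows = list(csv.DictReader(io.StringIO(inventory_csv)))
supported = [r["ELEMENT"] for r in rows
             if r["IS_SNOWPARK_CONNECT_TOOL_SUPPORTED"] == "True"]
print(supported)  # ['pyspark.sql.functions.col']
```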

Changed

  • Simplified Python Code: For Snowpark Connect, unnecessary .sparkContext references in Python method calls are now removed to streamline your code.

  • Clearer Warning Codes: Snowpark Connect warning codes are now renamed to include language-specific prefixes (e.g., SPRKCNTPY for Python, SPRKCNTSCL for Scala) for easier error identification.

  • More Accurate Notebook Conversions: The conversion process for notebooks has been improved to correctly distinguish between Databricks and Jupyter formats, preventing incorrect modifications.

Fixed

  • Fixed a bug in the artifact dependency inventory that incorrectly reported .options() configuration as a data source.

Desktop Release Notes

Added

  • Technical Discovery View: A new Technical Discovery View is now available in the desktop application.

  • SMA Assessment AI: The SMA desktop application is now directly integrated with an optional LLM interface.

    • Ask questions about your assessment results

    • Get help with how to approach the migration

    • Connect and deploy your assessment results directly into your Snowflake account.

Changed

  • The Command Line Interface (CLI) parameter for controlling Jupyter conversion has been updated from --enableJupyter to --disableJupyterConversion for clearer functionality.

Version 2.10.5 (Dec 3, 2025)

Application & CLI Version: 2.10.5

Included SMA Core Version

  • Snowpark Conversion Core: 8.1.26

Included SnowConvert AI Version

Engine Release Notes

Added

  • The Execution Summary section of the DetailedReport.docx now indicates whether the SMA was run in Assessment or Conversion mode.

Changed

  • Bumped the supported versions of Snowpark Python API and Snowpark Pandas API from 1.39.0 to 1.40.0.

PySpark Function Mapping Updates:

NotSupported to Rename:

  • pyspark.sql.functions.unhex → snowflake.snowpark.functions.hex_decode_binary

Direct to Rename:

  • pyspark.sql.functions.greatest → snowflake.snowpark.functions.greatest_ignore_nulls

  • pyspark.sql.functions.least → snowflake.snowpark.functions.least_ignore_nulls

NotDefined to Rename:

  • pyspark.sql.functions.bool_or → snowflake.snowpark.functions.boolor_agg

  • pyspark.sql.functions.char → snowflake.snowpark.functions.chr

NotDefined to Direct:

  • pyspark.sql.functions.nullif → snowflake.snowpark.functions.nullif

  • pyspark.sql.functions.nvl2 → snowflake.snowpark.functions.nvl2

Snowpark Pandas Function Mapping Updates:

NotSupported to Partial:

  • modin.pandas.DataFrame.query → snowflake.snowpark.pandas.core.frame.DataFrame.query

  • Added a new EWI, PNDSPY1012, to indicate that modin.pandas.DataFrame.query does not support MultiIndex. An example scenario illustrating this limitation is also included in the EWI documentation.

    from snowflake.snowpark.modin import plugin
    import modin.pandas as pd # Snowpark pandas
    
    # Create a DataFrame with single-level index
    data = {
        'name': ['Alice', 'Bob', 'Charlie', 'David', 'Eve', 'Frank'],
        'age': [25, 30, 35, 28, 32, 45],
        'salary': [50000, 60000, 75000, 55000, 80000, 90000],
        'department': ['Sales', 'IT', 'HR', 'Sales', 'IT', 'HR']
    }
    df = pd.DataFrame(data)
    
    # Set a single-level index
    df = df.set_index('name')
    print("DataFrame with single-level index:")
    print(df)
    
    # Use query() - This works fine!
    #EWI: PNDSPY1012 => pandas.core.frame.DataFrame.query does not support DataFrames that have a row MultiIndex. Check Snowpark Pandas documentation for more details.
    result = df.query("age > 30 and salary < 85000")
    
    # Create a DataFrame with MultiIndex on rows
    data = {
        'A': [1, 2, 3, 4, 5, 6],
        'B': [10, 20, 30, 40, 50, 60],
        'C': ['x', 'y', 'x', 'y', 'x', 'y']
    }
    df = pd.DataFrame(data)
    
    # Create MultiIndex
    df = df.set_index([
        pd.Index(['group1', 'group1', 'group2', 'group2', 'group3', 'group3']),
        pd.Index(['a', 'b', 'a', 'b', 'a', 'b'])
    ])
    df.index.names = ['group', 'subgroup']
    
    # This will ERROR in Snowpark pandas!
    #EWI: PNDSPY1012 => pandas.core.frame.DataFrame.query does not support DataFrames that have
    

    Recommended fix: If the DataFrame contains a MultiIndex, it is necessary to validate the behavior of the query() method in Snowpark pandas. Ensure that the DataFrame structure is compatible with Snowpark pandas’ limitations, as MultiIndex rows are not supported. Consider restructuring the DataFrame to use a single-level index or alternative filtering methods.

  • Updated all documentation links in the DetailedReport.docx to point to the official Snowflake documentation, replacing the legacy Snowpark Migration Accelerator site.

  • Updated the Snowpark Connect readiness score descriptions in the DetailedReport.docx to match the SMA UI.

  • Usages of pyspark.sql.window.WindowSpec.orderBy are now reported as supported by Snowpark Connect.

Fixed

  • Fixed broken internal links in the DetailedReport.docx to ensure proper navigation between document sections.

  • Added a CellId column to the issues inventory to easily identify the location of EWIs within notebook files.

Version 2.10.4 (Nov 18, 2025)

Application & CLI Version: 2.10.4

Included SMA Core Version

  • Snowpark Conversion Core: 8.1.8

Engine Release Notes

Fixed

  • Fixed an issue where the SMA generated corrupted Databricks notebook files in the output directory during Assessment mode execution.

  • Fixed an issue where the SMA would crash if the input directory contained folders named “SMA_ConvertedNotebooks”.

Version 2.10.3 (Oct 30, 2025)

Application & CLI Version: 2.10.3

Included SMA Core Version

  • Snowpark Conversion Core: 8.1.7

Engine Release Notes

Added

  • Added the Snowpark Connect readiness score. This new score measures the percentage of Spark API references in your codebase that are supported by Snowpark Connect for Spark.

  • Added support for embedded SQL migration of literal string concatenations assigned to a local variable in the same execution scope.

    • Covered scenarios include:

      sqlStat = "SELECT colName " + "FROM myTable"
      session.sql(sqlStat)
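As a rough illustration of how a readiness score of this kind is defined (the percentage of Spark API references the target supports), consider the sketch below; the element names and the resulting value are illustrative, and the SMA's exact formula may weight references differently.

```python
# Illustrative sketch of a readiness score: supported references divided by
# total references, expressed as a percentage. Element names are examples.
references = [
    ("pyspark.sql.functions.col", True),
    ("pyspark.sql.functions.lit", True),
    ("pyspark.sql.functions.xpath", True),
    ("pyspark.sql.streaming.DataStreamWriter.start", False),
]
supported = sum(1 for _, ok in references if ok)
score = 100 * supported / len(references)
print(f"{score:.1f}%")  # 75.0%
```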

Changed

Fixed

  • Fixed a code issue that caused inner project configuration files (e.g., pom.xml, build.sbt, build.gradle) to be incorrectly placed in the root of the output directory instead of the correct inner directories after migration.

Desktop Release Notes

Added

  • Added the Snowpark Connect readiness score and updated the assessment execution flow.

    • When running the application in assessment mode, only the Snowpark Connect readiness score is now displayed.

    • When running the application in conversion mode, the Snowpark API readiness score is displayed (the Snowpark Connect Readiness will not be shown).

Changed

  • Updated all in-application documentation links to point to the official Snowflake documentation, replacing the legacy SnowConvert site.

Version 2.10.2 (Oct 27, 2025)

Application & CLI Version 2.10.2

Included SMA Core Version

  • Snowpark Conversion Core 8.0.73

Fixed

  • Fixed an issue where the Snowpark Migration Accelerator failed to convert DBC files into Jupyter notebooks properly.

Version 2.10.1 (Oct 23, 2025)

Application & CLI Version 2.10.1

Included SMA Core Version

  • Snowpark Conversion Core 8.0.72

Added

  • Added support for Snowpark Scala v1.17.0.

NotSupported to Direct:

Dataset:

  • org.apache.spark.sql.Dataset.isEmpty → com.snowflake.snowpark.DataFrame.isEmpty

Row:

  • org.apache.spark.sql.Row.mkString → com.snowflake.snowpark.Row.mkString

StructType:

  • org.apache.spark.sql.types.StructType.fieldNames → com.snowflake.snowpark.types.StructType.fieldNames

NotSupported to Rename:

Functions:

  • org.apache.spark.functions.flatten → com.snowflake.snowpark.functions.array_flatten

Direct to Rename:

Functions:

  • org.apache.spark.functions.to_date → com.snowflake.snowpark.functions.try_to_date

  • org.apache.spark.functions.to_timestamp → com.snowflake.snowpark.functions.try_to_timestamp

DirectHelper to Rename:

Functions:

  • org.apache.spark.sql.functions.concat_ws → com.snowflake.snowpark.functions.concat_ws_ignore_nulls

NotDefined to Direct:

Functions:

  • org.apache.spark.functions.try_to_timestamp → com.snowflake.snowpark.functions.try_to_timestamp

  • Embedded SQL is now migrated when a SQL statement literal is assigned to a local variable.

    Example: sqlStat = "SELECT colName FROM myTable" followed by session.sql(sqlStat)

  • Embedded SQL in literal string concatenations is now supported.

    Example: session.sql("SELECT colName " + "FROM myTable")

Changed

  • Bumped the supported versions of the Snowpark Python API and Snowpark Pandas API from 1.36.0 to 1.39.0.

  • Updated the mapping status of the following PySpark xpath functions from NotSupported to Direct, with EWI SPRKPY1103:

    • pyspark.sql.functions.xpath

    • pyspark.sql.functions.xpath_boolean

    • pyspark.sql.functions.xpath_double

    • pyspark.sql.functions.xpath_float

    • pyspark.sql.functions.xpath_int

    • pyspark.sql.functions.xpath_long

    • pyspark.sql.functions.xpath_number

    • pyspark.sql.functions.xpath_short

    • pyspark.sql.functions.xpath_string

  • Updated the mapping status of the following PySpark elements from NotDefined to Direct:

    • pyspark.sql.functions.bit_and → snowflake.snowpark.functions.bitand_agg

    • pyspark.sql.functions.bit_or → snowflake.snowpark.functions.bitor_agg

    • pyspark.sql.functions.bit_xor → snowflake.snowpark.functions.bitxor_agg

    • pyspark.sql.functions.getbit → snowflake.snowpark.functions.getbit

  • Updated the mapping status of the following Pandas elements from NotSupported to Direct:

    • pandas.core.indexes.base.Index → modin.pandas.Index

    • pandas.core.indexes.base.Index.get_level_values → modin.pandas.Index.get_level_values

  • Updated the mapping status of the following PySpark function from NotSupported to Rename:

    • pyspark.sql.functions.now → snowflake.snowpark.functions.current_timestamp

Fixed

  • Fixed an issue where Scala imports were not migrated when a rename was involved.

    Example:

    Source code:

    package com.example.functions
    import org.apache.spark.sql.functions.{to_timestamp, lit}
    object ToTimeStampTest extends App {
       to_timestamp(lit("sample"))
       to_timestamp(lit("sample"), "yyyy-MM-dd")
     }
    

    Output code:

    package com.example.functions
    import com.snowflake.snowpark.functions.{try_to_timestamp, lit}
    import com.snowflake.snowpark_extensions.Extensions._
    import com.snowflake.snowpark_extensions.Extensions.functions._
    object ToTimeStampTest extends App {
       try_to_timestamp(lit("sample"))
       try_to_timestamp(lit("sample"), "yyyy-MM-dd")
     }
    

Version 2.10.0 (Sep 24, 2025)

Application & CLI Version 2.10.0

Included SMA Core Version

  • Snowpark Conversion Core 8.0.62

Added

  • Added the ability to migrate embedded SQL that uses Python format interpolation.

  • Added support for DataFrame.select and DataFrame.sort transformations for greater data processing flexibility.
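The format-interpolation scenario mentioned above looks like the sketch below; the table and column names are placeholders, and session.sql is commented out because no Snowpark session exists in this snippet.

```python
# Illustrative only: Python format-interpolated embedded SQL of the kind the
# SMA can now migrate. Names are placeholders.
table_name = "my_table"
column_name = "col_name"
sql_stat = f"SELECT {column_name} FROM {table_name}"
# session.sql(sql_stat)  # the SMA resolves the interpolated SQL text
print(sql_stat)  # SELECT col_name FROM my_table
```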

Changed

  • Bumped the supported versions of the Snowpark Python API and Snowpark Pandas API to 1.36.0.

  • Updated the mapping status of pandas.core.frame.DataFrame.boxplot from Not Supported to Direct.

  • Updated the mapping status of DataFrame.select, Dataset.select, DataFrame.sort and Dataset.sort from Direct to Transformation.

  • Snowpark Scala allows a sequence of columns to be passed directly to the select and sort functions, so this transformation changes all the usages such as df.select(cols: _*) to df.select(cols) and df.sort(cols: _*) to df.sort(cols).

  • Bumped the Python AST and Parser version to 149.1.9.

  • Updated the status of the following Pandas functions to Direct:

    • pandas.core.frame.DataFrame.to_excel

    • pandas.core.series.Series.to_excel

    • pandas.io.feather_format.read_feather

    • pandas.io.orc.read_orc

    • pandas.io.stata.read_stata

  • Updated the status of pyspark.sql.pandas.map_ops.PandasMapOpsMixin.mapInPandas to Workaround, using the EWI SPRKPY1102.
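The select/sort varargs rewrite described above (df.select(cols: _*) to df.select(cols)) can be sketched as a simple text transformation; this is illustrative only, not the engine's implementation.

```python
import re

# Illustrative sketch of the Scala varargs rewrite: drop the ": _*" splat
# inside select(...) and sort(...) calls on a single identifier argument.
def rewrite_varargs(code: str) -> str:
    return re.sub(r"\.(select|sort)\((\w+)\s*:\s*_\*\)", r".\1(\2)", code)

print(rewrite_varargs("df.select(cols: _*)"))  # df.select(cols)
print(rewrite_varargs("df.sort(cols: _*)"))    # df.sort(cols)
```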

Fixed

  • Fixed an issue affecting the SqlEmbedded transformation when chained method calls were used.

  • Fixed the related PySqlExpr transformations by using the new PyLiteralSql, preventing loss of trailing content.

  • Resolved internal stability issues to improve the tool's robustness and reliability.

Version 2.7.7 (Aug 28, 2025)

Application & CLI Version 2.7.7

Included SMA Core Version

  • Snowpark Conversion Core 8.0.46

Added

  • Added new Pandas EWI documentation: PNDSPY1011.

  • Added support for the following Pandas functions:

    • pandas.core.algorithms.unique

    • pandas.core.dtypes.missing.isna

    • pandas.core.dtypes.missing.isnull

    • pandas.core.dtypes.missing.notna

    • pandas.core.dtypes.missing.notnull

    • pandas.core.resample.Resampler.count

    • pandas.core.resample.Resampler.max

    • pandas.core.resample.Resampler.mean

    • pandas.core.resample.Resampler.median

    • pandas.core.resample.Resampler.min

    • pandas.core.resample.Resampler.size

    • pandas.core.resample.Resampler.sum

    • pandas.core.arrays.timedeltas.TimedeltaArray.total_seconds

    • pandas.core.series.Series.get

    • pandas.core.series.Series.to_frame

    • pandas.core.frame.DataFrame.assign

    • pandas.core.frame.DataFrame.get

    • pandas.core.frame.DataFrame.to_numpy

    • pandas.core.indexes.base.Index.is_unique

    • pandas.core.indexes.base.Index.has_duplicates

    • pandas.core.indexes.base.Index.shape

    • pandas.core.indexes.base.Index.array

    • pandas.core.indexes.base.Index.str

    • pandas.core.indexes.base.Index.equals

    • pandas.core.indexes.base.Index.identical

    • pandas.core.indexes.base.Index.unique

Added support for the following Spark Scala functions:

  • org.apache.spark.sql.functions.format_number

  • org.apache.spark.sql.functions.from_unixtime

  • org.apache.spark.sql.functions.instr

  • org.apache.spark.sql.functions.months_between

  • org.apache.spark.sql.functions.pow

  • org.apache.spark.sql.functions.to_unix_timestamp

  • org.apache.spark.sql.Row.getAs

Changed

  • Bumped the version of the Snowpark Pandas API supported by the SMA to 1.33.0.

  • Bumped the version of the Snowpark Scala API supported by the SMA to 1.16.0.

  • Updated the mapping status of pyspark.sql.group.GroupedData.pivot from Transformation to Direct.

  • Updated the mapping status of org.apache.spark.sql.Builder.master from NotSupported to Transformation. This transformation removes all usages of this element identified during code conversion.

  • Updated the mapping status of org.apache.spark.sql.types.StructType.fieldIndex from NotSupported to Direct.

  • Updated the mapping status of org.apache.spark.sql.Row.fieldIndex from NotSupported to Direct.

  • Updated the mapping status of org.apache.spark.sql.SparkSession.stop from NotSupported to Rename. All usages of this element identified during code conversion are renamed to com.snowflake.snowpark.Session.close.

  • Updated the mapping status of org.apache.spark.sql.DataFrame.unpersist and org.apache.spark.sql.Dataset.unpersist from NotSupported to Transformation. This transformation removes all usages of these elements identified during code conversion.

Fixed

  • Fixed consecutive backslashes left over from removed tail functions.

  • Fixed the LIBRARY_PREFIX column in the ConversionStatusLibraries.csv file to use the right identifier for the scikit-learn library family (scikit-*).

  • Fixed a bug that prevented parsing of operations grouped across multiple lines.
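The corrected scikit-* prefix above behaves like a standard glob match; a quick illustration (library names are examples):

```python
import fnmatch

# Illustrative check: scikit-family libraries match the scikit-* prefix used
# by the LIBRARY_PREFIX column, while unrelated packages such as scipy do not.
libraries = ["scikit-learn", "scikit-image", "scipy"]
matches = [lib for lib in libraries if fnmatch.fnmatch(lib, "scikit-*")]
print(matches)  # ['scikit-learn', 'scikit-image']
```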

Version 2.9.0 (Sep 9, 2025)

Included SMA Core Version

  • Snowpark Conversion Core 8.0.53

Added

  • The following mappings are now performed for org.apache.spark.sql.Dataset[T]:

    • org.apache.spark.sql.Dataset.union is now com.snowflake.snowpark.DataFrame.unionAll

    • org.apache.spark.sql.Dataset.unionByName is now com.snowflake.snowpark.DataFrame.unionAllByName

  • Added support for org.apache.spark.sql.functions.broadcast as a transformation.

Changed

  • Increased the supported Snowpark Python API version for SMA from 1.27.0 to 1.33.0.

  • The status for the pyspark.sql.function.randn function has been updated to Direct.

Fixed

  • Resolved an issue where org.apache.spark.SparkContext.parallelize was not resolving and now supports it as a transformation.

  • Fixed the Dataset.persist transformation to work with any type of Dataset, not just Dataset[Row].

Version 2.7.6 (Jul 17, 2025)

Included SMA Core Version

  • Snowpark Conversion Core 8.0.30

Added

  • Adjusted mappings for Spark DataReader methods:

  • DataFrame.union is now DataFrame.unionAll.

  • DataFrame.unionByName is now DataFrame.unionAllByName.

  • Added a multi-level artifact dependency column to the artifacts inventory.

  • Added new Pandas EWIs documentation, from PNDSPY1005 to PNDSPY1010.

  • Added a specific EWI for pandas.core.series.Series.apply.

Changed

  • Bumped the version of Snowpark Pandas API supported by the SMA from 1.27.0 to 1.30.0.

Fixed

  • Fixed missing values in the formula used to obtain the SQL readiness score.

  • Fixed a bug where some Pandas elements displayed PySpark's default EWI message.

Version 2.7.5 (Jul 2, 2025)

Application & CLI Version 2.7.5

Included SMA Core Version

  • Snowpark Conversion Core 8.0.19

Changed

  • Refactored Pandas Imports: Pandas imports now use modin.pandas instead of snowflake.snowpark.modin.pandas.

  • Improved dbutils and Magic Commands Transformation:

    • A new sfutils.py file is now generated, and all dbutils prefixes are replaced with sfutils.

    • For Databricks (DBX) notebooks, an implicit import for sfutils is automatically added.

    • The sfutils module simulates various dbutils methods, including file system operations (dbutils.fs) via a defined Snowflake FileSystem (SFFS) stage, and handles notebook execution (dbutils.notebook.run) by transforming it to EXECUTE NOTEBOOK SQL functions.

    • dbutils.notebook.exit is removed as it is not required in Snowflake.
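At its simplest, the prefix replacement described above behaves like the sketch below; this is deliberately simplified, since the real transformation also generates sfutils.py and rewrites specific calls such as dbutils.notebook.run.

```python
# Deliberately simplified sketch of the dbutils-to-sfutils prefix replacement;
# the SMA's actual transformation also emits sfutils.py and handles special
# cases like dbutils.notebook.run.
def replace_dbutils_prefix(code: str) -> str:
    return code.replace("dbutils.", "sfutils.")

print(replace_dbutils_prefix('files = dbutils.fs.ls("/mnt/data")'))
```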

Fixed

  • Updates in SnowConvert Reports: SnowConvert reports now include the CellId column when instances originate from SMA, and the FileName column displays the full path.

  • Updated Artifacts Dependency for SnowConvert Reports: The SMA’s artifact inventory report, which was previously impacted by the integration of SnowConvert, has been restored. This update enables the SMA tool to accurately capture and analyze Object References and Missing Object References directly from SnowConvert reports, thereby ensuring the correct retrieval of SQL dependencies for the inventory.

Version 2.7.4 (Jun 26, 2025)

Application & CLI Version 2.7.4

Desktop App

Added

  • Added telemetry improvements.

Fixed

  • Fixed documentation links in the conversion settings pop-up and in Pandas EWIs.

Included SMA Core Version

  • Snowpark Conversion Core 8.0.16

Added

  • Spark XML to Snowpark conversion.

  • Databricks SQL option in the SQL source language.

  • JDBC read connection conversion.

Changed

  • All SnowConvert reports are copied into the backup Zip file.

  • The folder is renamed from SqlReports to SnowConvertReports.

  • SqlFunctionsInventory is moved to the Reports folder.

  • All SnowConvert reports are sent via telemetry.

Fixed

  • Fixed a non-deterministic issue in the SQL readiness score.

  • Fixed a fatal false positive that caused the desktop app to crash.

  • Fixed an issue where SQL objects were not displayed in the artifact dependency report.

Version 2.7.2 (Jun 10, 2025)

Application & CLI Version 2.7.2

Included SMA Core Version

  • Snowpark Conversion Core 8.0.2

Fixed

  • Resolved the previously reported issue with running the SMA on the latest Windows OS. This fix addresses the problem introduced in version 2.7.1.

Version 2.7.1 (Jun 9, 2025)

Application & CLI Version 2.7.1

Included SMA Core Version

  • Snowpark Conversion Core 8.0.1

Added

The Snowpark Migration Accelerator (SMA) now orchestrates `SnowConvert <https://docs.snowconvert.com/sc/general/about>`_ to process SQL found in user workloads, including embedded SQL in Python / Scala code, Notebook SQL cells, .sql files, and .hql files.

SnowConvert now enhances the previous SMA capabilities.

A new SQL reports folder contains the reports generated by SnowConvert.

Known Issues

The SQL reports from previous SMA versions will appear empty:

  • Reports/SqlElementsInventory.csv is partially covered by Reports/SqlReports/Elements.yyyymmdd.hhmmss.csv.

  • For Reports/SqlFunctionsInventory.csv, refer to the new location with the same name at Reports/SqlReports/SqlFunctionsInventory.csv.

Artifact dependency inventory:

  • In the ArtifactDependencyInventory, the SQL Object column will appear empty.

Version 2.6.10 (May 5, 2025)

Application & CLI Version 2.6.10

Included SMA Core Version

  • Snowpark Conversion Core 7.4.0

Fixed

  • Fixed wrong values in the ‘checkpoints.json’ file.

    • The ‘sample’ value was missing decimals (for integer values) and quotes.

    • The ‘entryPoint’ value contained dots instead of slashes and was missing the file extension.

  • Updated the default value of the ‘Convert DBX notebooks to Snowflake notebooks’ setting to TRUE.

Version 2.6.8 (Apr 28, 2025)

Application & CLI Version 2.6.8

Desktop App

  • Added checkpoints execution settings mechanism recognition.

  • Added a mechanism to collect DBX magic commands into DbxElementsInventory.csv.

  • Added ‘checkpoints.json’ generation into the input directory.

  • Added a new EWI for all unsupported magic commands.

  • Added the collection of dbutils into DbxElementsInventory.csv from Scala source notebooks.

Included SMA Core Version

  • Snowpark Conversion Core 7.2.53

Changed

  • Updated the handling of transformations from DBX Scala elements to Jupyter Python elements, commenting out the entire code of the cell.

  • Updated the handling of transformations for dbutils.notebook.run and "r" commands; for the latter, the entire code of the cell is also commented out.

  • Updated the name and the letter of the key used to convert the notebook files.

Fixed

  • Fixed the bug that was causing the transformation of DBX notebooks into .ipynb files to have the wrong format.

  • Fixed the bug that was causing .py DBX notebooks to not be transformable into .ipynb files.

  • Fixed a bug that was causing comments to be missing in the output code of DBX notebooks.

  • Fixed a bug that was causing raw Scala files to be converted into ipynb files.

Version 2.6.7 (Apr 21, 2025)

Application & CLI Version 2.6.7

Included SMA Core Version

  • Snowpark Conversion Core 7.2.42

Changed

Updated the DataFramesInventory to fill the EntryPoints column.

Version 2.6.6 (Apr 7, 2025)

Application & CLI Version 2.6.6

Desktop App

Added

  • Updated the DBx EWI links on the UI results page.

Included SMA Core Version

  • Snowpark Conversion Core 7.2.39

Added

  • Added Execution Flow inventory generation.

  • Added an implicit session setup in every DBx notebook transformation.

Changed

  • Renamed DbUtilsUsagesInventory.csv to DbxElementsInventory.csv.

Fixed

  • Fixed a bug that caused a parsing error when a backslash followed a type hint.

  • Fixed relative imports that start with an asterisk instead of a dot.

Version 2.6.5 (Mar 27, 2025)

Application & CLI Version 2.6.5

Desktop App

Added

  • Added a new conversion setting toggle to enable or disable the Sma-Checkpoints feature.

  • Fixed a report issue so that the application does not crash when the POST API returns a 500 error.

Included SMA Core Versions

  • Snowpark Conversion Core 7.2.26

Added

  • Added generation of the checkpoints.json file into the output folder based on the DataFramesInventory.csv.

  • Added “disableCheckpoints” flag into the CLI commands and additional parameters of the code processor.

  • Added a new replacer for Python to transform the dbutils.notebook.run node.

  • Added new replacers to transform the magic %run command.

  • Added new replacers (Python and Scala) to remove the dbutils.notebook.exit node.

  • Added Location column to artifacts inventory.

Changed

  • Refactored the normalized directory separators used across parts of the solution.

  • Centralized the handling of the DBC extraction working folder name in one place.

  • Updated the Snowpark and Pandas version to v1.27.0.

  • Updated the artifacts inventory columns as follows:

    • Name -> Dependency

    • File -> FileId

    • Status -> Status_detail

  • Added a new column to the artifacts inventory:

    • Success
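The renamed columns and the new Success column give each artifacts inventory row the shape sketched below. This is a minimal illustration using Python's csv module; only the column names come from the notes above, and the row values are hypothetical:

```python
import csv
import io

# Column names after this release: the three renamed columns plus Success.
# Only the header names come from the release notes; the row values are made up.
FIELDS = ["Dependency", "FileId", "Status_detail", "Success"]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()
writer.writerow({
    "Dependency": "utils/helpers.py",  # hypothetical dependency path
    "FileId": "src/job.py",            # hypothetical source file id
    "Status_detail": "Parsed",         # hypothetical status detail
    "Success": True,
})
print(buffer.getvalue())
```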

Fixed

  • Fixed an issue where the DataFrames inventory was not being uploaded to the stage correctly.

Version 2.6.4 (Mar 12, 2025)

Application & CLI Version 2.6.4

Included SMA Core Versions

  • Snowpark Conversion Core 7.2.0

Added

  • An Artifact Dependency Inventory

  • A replacer and an EWI for converting the pyspark.sql.types.StructType.fieldNames method to the snowflake.snowpark.types.StructType.fieldNames attribute.

  • The following PySpark functions were added with the statuses below:

Direct

  • pyspark.sql.functions.bitmap_bit_position

  • pyspark.sql.functions.bitmap_bucket_number

  • pyspark.sql.functions.bitmap_construct_agg

  • pyspark.sql.functions.equal_null

  • pyspark.sql.functions.ifnull

  • pyspark.sql.functions.localtimestamp

  • pyspark.sql.functions.max_by

  • pyspark.sql.functions.min_by

  • pyspark.sql.functions.nvl

  • pyspark.sql.functions.regr_avgx

  • pyspark.sql.functions.regr_avgy

  • pyspark.sql.functions.regr_count

  • pyspark.sql.functions.regr_intercept

  • pyspark.sql.functions.regr_slope

  • pyspark.sql.functions.regr_sxx

  • pyspark.sql.functions.regr_sxy

  • pyspark.sql.functions.regr

NotSupported

  • pyspark.sql.functions.map_contains_key

  • pyspark.sql.functions.position

  • pyspark.sql.functions.regr_r2

  • pyspark.sql.functions.try_to_binary

The following Pandas functions have the following statuses:

  • pandas.core.series.Series.str.ljust

  • pandas.core.series.Series.str.center

  • pandas.core.series.Series.str.pad

  • pandas.core.series.Series.str.rjust

The following PySpark functions were updated to the statuses below:

From WorkAround to Direct

  • pyspark.sql.functions.acosh

  • pyspark.sql.functions.asinh

  • pyspark.sql.functions.atanh

  • pyspark.sql.functions.instr

  • pyspark.sql.functions.log10

  • pyspark.sql.functions.log1p

  • pyspark.sql.functions.log2

From NotSupported to Direct

  • pyspark.sql.functions.bit_length

  • pyspark.sql.functions.cbrt

  • pyspark.sql.functions.nth_value

  • pyspark.sql.functions.octet_length

  • pyspark.sql.functions.base64

  • pyspark.sql.functions.unbase64

The following Pandas functions were updated to the statuses below:

From NotSupported to Direct

  • pandas.core.frame.DataFrame.pop

  • pandas.core.series.Series.between

  • pandas.core.series.Series.pop
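Status changes like the ones above end up in the SMA's conversion-status inventories. As a toy illustration only (the dict below is hand-copied from a few entries in this release and is not the SMA's actual data structure), a status lookup might read:

```python
# A few of the v2.6.4 statuses listed above; illustrative only.
STATUS = {
    "pyspark.sql.functions.ifnull": "Direct",
    "pyspark.sql.functions.max_by": "Direct",
    "pyspark.sql.functions.log10": "Direct",      # moved from WorkAround
    "pandas.core.frame.DataFrame.pop": "Direct",  # moved from NotSupported
    "pyspark.sql.functions.regr_r2": "NotSupported",
}

def status_of(element: str) -> str:
    """Look up an element's mapping status; unknown elements fall back to NotSupported."""
    return STATUS.get(element, "NotSupported")

print(status_of("pyspark.sql.functions.log10"))     # Direct
print(status_of("pyspark.sql.functions.position"))  # NotSupported
```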

Version 2.6.3 (Mar 6, 2025)

Application & CLI Version 2.6.3

Included SMA Core Versions

  • Snowpark Conversion Core 7.1.13

Added

  • Added a CSV generator class for creating the new inventories.

  • Added “full_name” column to import usages inventory.

  • Added transformation from pyspark.sql.functions.concat_ws to snowflake.snowpark.functions._concat_ws_ignore_nulls.

  • Added logic for generation of checkpoints.json.

  • Added the following inventories:

    • DataFramesInventory.csv

    • CheckpointsInventory.csv
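The concat_ws mapping above targets a dedicated helper, presumably because Spark's concat_ws skips NULL inputs while Snowflake's CONCAT_WS propagates them. The SMA performs this rewrite on the parsed code; the regex sketch below only illustrates the before/after shape of the mapping and is not the actual replacer:

```python
import re

def rewrite_concat_ws(source: str) -> str:
    """Toy source-level sketch of the replacer described above.

    The real SMA rewrites the parsed code, not raw text; this regex
    version only illustrates the before/after shape of the mapping.
    """
    out = source.replace(
        "from pyspark.sql.functions import concat_ws",
        "from snowflake.snowpark.functions import _concat_ws_ignore_nulls",
    )
    return re.sub(r"\bconcat_ws\(", "_concat_ws_ignore_nulls(", out)

before = "df.select(concat_ws('-', df.a, df.b))"
after = rewrite_concat_ws(before)
print(after)  # df.select(_concat_ws_ignore_nulls('-', df.a, df.b))
```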

Version 2.6.0 (Feb 21, 2025)

Application & CLI Version 2.6.0

Desktop App

  • The license agreement was updated and requires acceptance.

Included SMA Core Versions

  • Snowpark Conversion Core 7.1.2

Added

Updated the mapping status for the following PySpark elements, from NotSupported to Direct:

  • pyspark.sql.types.ArrayType.json

  • pyspark.sql.types.ArrayType.jsonValue

  • pyspark.sql.types.ArrayType.simpleString

  • pyspark.sql.types.ArrayType.typeName

  • pyspark.sql.types.AtomicType.json

  • pyspark.sql.types.AtomicType.jsonValue

  • pyspark.sql.types.AtomicType.simpleString

  • pyspark.sql.types.AtomicType.typeName

  • pyspark.sql.types.BinaryType.json

  • pyspark.sql.types.BinaryType.jsonValue

  • pyspark.sql.types.BinaryType.simpleString

  • pyspark.sql.types.BinaryType.typeName

  • pyspark.sql.types.BooleanType.json

  • pyspark.sql.types.BooleanType.jsonValue

  • pyspark.sql.types.BooleanType.simpleString

  • pyspark.sql.types.BooleanType.typeName

  • pyspark.sql.types.ByteType.json

  • pyspark.sql.types.ByteType.jsonValue

  • pyspark.sql.types.ByteType.simpleString

  • pyspark.sql.types.ByteType.typeName

  • pyspark.sql.types.DecimalType.json

  • pyspark.sql.types.DecimalType.jsonValue

  • pyspark.sql.types.DecimalType.simpleString

  • pyspark.sql.types.DecimalType.typeName

  • pyspark.sql.types.DoubleType.json

  • pyspark.sql.types.DoubleType.jsonValue

  • pyspark.sql.types.DoubleType.simpleString

  • pyspark.sql.types.DoubleType.typeName

  • pyspark.sql.types.FloatType.json

  • pyspark.sql.types.FloatType.jsonValue

  • pyspark.sql.types.FloatType.simpleString

  • pyspark.sql.types.FloatType.typeName

  • pyspark.sql.types.FractionalType.json

  • pyspark.sql.types.FractionalType.jsonValue

  • pyspark.sql.types.FractionalType.simpleString

  • pyspark.sql.types.FractionalType.typeName

  • pyspark.sql.types.IntegerType.json

  • pyspark.sql.types.IntegerType.jsonValue

  • pyspark.sql.types.IntegerType.simpleString

  • pyspark.sql.types.IntegerType.typeName

  • pyspark.sql.types.IntegralType.json

  • pyspark.sql.types.IntegralType.jsonValue

  • pyspark.sql.types.IntegralType.simpleString

  • pyspark.sql.types.IntegralType.typeName

  • pyspark.sql.types.LongType.json

  • pyspark.sql.types.LongType.jsonValue

  • pyspark.sql.types.LongType.simpleString

  • pyspark.sql.types.LongType.typeName

  • pyspark.sql.types.MapType.json

  • pyspark.sql.types.MapType.jsonValue

  • pyspark.sql.types.MapType.simpleString

  • pyspark.sql.types.MapType.typeName

  • pyspark.sql.types.NullType.json

  • pyspark.sql.types.NullType.jsonValue

  • pyspark.sql.types.NullType.simpleString

  • pyspark.sql.types.NullType.typeName

  • pyspark.sql.types.NumericType.json

  • pyspark.sql.types.NumericType.jsonValue

  • pyspark.sql.types.NumericType.simpleString

  • pyspark.sql.types.NumericType.typeName

  • pyspark.sql.types.ShortType.json

  • pyspark.sql.types.ShortType.jsonValue

  • pyspark.sql.types.ShortType.simpleString

  • pyspark.sql.types.ShortType.typeName

  • pyspark.sql.types.StringType.json

  • pyspark.sql.types.StringType.jsonValue

  • pyspark.sql.types.StringType.simpleString

  • pyspark.sql.types.StringType.typeName

  • pyspark.sql.types.StructType.json

  • pyspark.sql.types.StructType.jsonValue

  • pyspark.sql.types.StructType.simpleString

  • pyspark.sql.types.StructType.typeName

  • pyspark.sql.types.TimestampType.json

  • pyspark.sql.types.TimestampType.jsonValue

  • pyspark.sql.types.TimestampType.simpleString

  • pyspark.sql.types.TimestampType.typeName

  • pyspark.sql.types.StructField.simpleString

  • pyspark.sql.types.StructField.typeName

  • pyspark.sql.types.StructField.json

  • pyspark.sql.types.StructField.jsonValue

  • pyspark.sql.types.DataType.json

  • pyspark.sql.types.DataType.jsonValue

  • pyspark.sql.types.DataType.simpleString

  • pyspark.sql.types.DataType.typeName

  • pyspark.sql.session.SparkSession.getActiveSession

  • pyspark.sql.session.SparkSession.version

  • pyspark.sql.types.ArrayType.fromJson

  • pyspark.sql.types.MapType.fromJson

  • pyspark.sql.types.StructField.fromJson

  • pyspark.sql.types.StructType.fromJson

Updated the mapping status for the following Pandas elements, from NotSupported to Direct:

  • pandas.io.html.read_html

  • pandas.io.json._normalize.json_normalize

  • pandas.core.groupby.generic.DataFrameGroupBy.pct_change

  • pandas.core.groupby.generic.SeriesGroupBy.pct_change

Updated the mapping status for the following PySpark elements, from Rename to Direct:

  • pyspark.sql.functions.collect_list

  • pyspark.sql.functions.size

Fixed

  • Standardized the version number format in the inventories.

Version 2.5.2 (Feb 5, 2025)

Hotfix: Application & CLI Version 2.5.2

Desktop App

  • Fixed an issue that occurred when converting from the sample project option.

Included SMA Core Versions

  • Snowpark Conversion Core 5.3.0

Version 2.5.1 (Feb 4, 2025)

Application & CLI Version 2.5.1

Desktop App

  • Added a new modal for when the user does not have write permissions.

  • The license agreement was updated and requires acceptance.

CLI

  • Fixed the year shown by the “--version” or “-v” option on the CLI screen.

Included SMA Core Versions

  • Snowpark Conversion Core 5.3.0

Added

Added the following Python Third-Party libraries with Direct status:

  • about-time

  • affinegap

  • aiohappyeyeballs

  • alibi-detect

  • alive-progress

  • allure-nose2

  • allure-robotframework

  • anaconda-cloud-cli

  • anaconda-mirror

  • astropy-iers-data

  • asynch

  • asyncssh

  • autots

  • autoviml

  • aws-msk-iam-sasl-signer-python

  • azure-functions

  • backports.tarfile

  • blas

  • bottle

  • bson

  • cairo

  • capnproto

  • captum

  • categorical-distance

  • census

  • clickhouse-driver

  • clustergram

  • cma

  • conda-anaconda-telemetry

  • configspace

  • cpp-expected

  • dask-expr

  • data-science-utils

  • databricks-sdk

  • datetime-distance

  • db-dtypes

  • dedupe

  • dedupe-variable-datetime

  • dedupe_lehvenshtein_search

  • dedupe_levenshtein_search

  • diff-cover

  • diptest

  • dmglib

  • docstring_parser

  • doublemetaphone

  • dspy-ai

  • econml

  • emcee

  • emoji

  • environs

  • eth-abi

  • eth-hash

  • eth-typing

  • eth-utils

  • expat

  • filetype

  • fitter

  • flask-cors

  • fpdf2

  • frozendict

  • gcab

  • geojson

  • gettext

  • glib-tools

  • google-ads

  • google-ai-generativelanguage

  • google-api-python-client

  • google-auth-httplib2

  • google-cloud-bigquery

  • google-cloud-bigquery-core

  • google-cloud-bigquery-storage

  • google-cloud-bigquery-storage-core

  • google-cloud-resource-manager

  • google-generativeai

  • googlemaps

  • grapheme

  • graphene

  • graphql-relay

  • gravis

  • greykite

  • grpc-google-iam-v1

  • harfbuzz

  • hatch-fancy-pypi-readme

  • haversine

  • hiclass

  • hicolor-icon-theme

  • highered

  • hmmlearn

  • holidays-ext

  • httplib2

  • icu

  • imbalanced-ensemble

  • immutabledict

  • importlib-metadata

  • importlib-resources

  • inquirerpy

  • iterative-telemetry

  • jaraco.context

  • jaraco.test

  • jiter

  • jiwer

  • joserfc

  • jsoncpp

  • jsonpath

  • jsonpath-ng

  • jsonpath-python

  • kagglehub

  • keplergl

  • kt-legacy

  • langchain-community

  • langchain-experimental

  • langchain-snowflake

  • langchain-text-splitters

  • libabseil

  • libflac

  • libgfortran-ng

  • libgfortran5

  • libglib

  • libgomp

  • libgrpc

  • libgsf

  • libmagic

  • libogg

  • libopenblas

  • libpostal

  • libprotobuf

  • libsentencepiece

  • libsndfile

  • libstdcxx-ng

  • libtheora

  • libtiff

  • libvorbis

  • libwebp

  • lightweight-mmm

  • litestar

  • litestar-with-annotated-types

  • litestar-with-attrs

  • litestar-with-cryptography

  • litestar-with-jinja

  • litestar-with-jwt

  • litestar-with-prometheus

  • litestar-with-structlog

  • lunarcalendar-ext

  • matplotlib-venn

  • metricks

  • mimesis

  • modin-ray

  • momepy

  • mpg123

  • msgspec

  • msgspec-toml

  • msgspec-yaml

  • msitools

  • multipart

  • namex

  • nbconvert-all

  • nbconvert-core

  • nbconvert-pandoc

  • nlohmann_json

  • numba-cuda

  • numpyro

  • office365-rest-python-client

  • openapi-pydantic

  • opentelemetry-distro

  • opentelemetry-instrumentation

  • opentelemetry-instrumentation-system-metrics

  • optree

  • osmnx

  • pathlib

  • pdf2image

  • pfzy

  • pgpy

  • plumbum

  • pm4py

  • polars

  • polyfactory

  • poppler-cpp

  • postal

  • pre-commit

  • prompt-toolkit

  • propcache

  • py-partiql-parser

  • py_stringmatching

  • pyatlan

  • pyfakefs

  • pyfhel

  • pyhacrf-datamade

  • pyiceberg

  • pykrb5

  • pylbfgs

  • pymilvus

  • pymoo

  • pynisher

  • pyomo

  • pypdf

  • pypdf-with-crypto

  • pypdf-with-full

  • pypdf-with-image

  • pypng

  • pyprind

  • pyrfr

  • pysoundfile

  • pytest-codspeed

  • pytest-trio

  • python-barcode

  • python-box

  • python-docx

  • python-gssapi

  • python-iso639

  • python-magic

  • python-pandoc

  • python-zstd

  • pyuca

  • pyvinecopulib

  • pyxirr

  • qrcode

  • rai-sdk

  • ray-client

  • ray-observability

  • readline

  • rich-click

  • rouge-score

  • ruff

  • scikit-criteria

  • scikit-mobility

  • sentencepiece-python

  • sentencepiece-spm

  • setuptools-markdown

  • setuptools-scm

  • setuptools-scm-git-archive

  • shareplum

  • simdjson

  • simplecosine

  • sis-extras

  • slack-sdk

  • smac

  • snowflake-sqlalchemy

  • snowflake_legacy

  • socrata-py

  • spdlog

  • sphinxcontrib-images

  • sphinxcontrib-jquery

  • sphinxcontrib-youtube

  • splunk-opentelemetry

  • sqlfluff

  • squarify

  • st-theme

  • statistics

  • streamlit-antd-components

  • streamlit-condition-tree

  • streamlit-echarts

  • streamlit-feedback

  • streamlit-keplergl

  • streamlit-mermaid

  • streamlit-navigation-bar

  • streamlit-option-menu

  • strictyaml

  • stringdist

  • sybil

  • tensorflow-cpu

  • tensorflow-text

  • tiledb-ptorchaudio

  • torcheval

  • trio-websocket

  • trulens-connectors-snowflake

  • trulens-core

  • trulens-dashboard

  • trulens-feedback

  • trulens-otel-semconv

  • trulens-providers-cortex

  • tsdownsample

  • typing

  • typing-extensions

  • typing_extensions

  • unittest-xml-reporting

  • uritemplate

  • us

  • uuid6

  • wfdb

  • wsproto

  • zlib

  • zope.index

Added the following Python BuiltIn libraries with Direct status:

  • aifc

  • array

  • ast

  • asynchat

  • asyncio

  • asyncore

  • atexit

  • audioop

  • base64

  • bdb

  • binascii

  • bisect

  • builtins

  • bz2

  • calendar

  • cgi

  • cgitb

  • chunk

  • cmath

  • cmd

  • code

  • codecs

  • codeop

  • colorsys

  • compileall

  • concurrent

  • contextlib

  • contextvars

  • copy

  • copyreg

  • cProfile

  • crypt

  • csv

  • ctypes

  • curses

  • dbm

  • difflib

  • dis

  • distutils

  • doctest

  • email

  • ensurepip

  • enum

  • errno

  • faulthandler

  • fcntl

  • filecmp

  • fileinput

  • fnmatch

  • fractions

  • ftplib

  • functools

  • gc

  • getopt

  • getpass

  • gettext

  • graphlib

  • grp

  • gzip

  • hashlib

  • heapq

  • hmac

  • html

  • http

  • idlelib

  • imaplib

  • imghdr

  • imp

  • importlib

  • inspect

  • ipaddress

  • itertools

  • keyword

  • linecache

  • locale

  • lzma

  • mailbox

  • mailcap

  • marshal

  • math

  • mimetypes

  • mmap

  • modulefinder

  • msilib

  • multiprocessing

  • netrc

  • nis

  • nntplib

  • numbers

  • operator

  • optparse

  • ossaudiodev

  • pdb

  • pickle

  • pickletools

  • pipes

  • pkgutil

  • platform

  • plistlib

  • poplib

  • posix

  • pprint

  • profile

  • pstats

  • pty

  • pwd

  • py_compile

  • pyclbr

  • pydoc

  • queue

  • quopri

  • random

  • re

  • reprlib

  • resource

  • rlcompleter

  • runpy

  • sched

  • secrets

  • select

  • selectors

  • shelve

  • shlex

  • signal

  • site

  • sitecustomize

  • smtpd

  • smtplib

  • sndhdr

  • socket

  • socketserver

  • spwd

  • sqlite3

  • ssl

  • stat

  • string

  • stringprep

  • struct

  • subprocess

  • sunau

  • symtable

  • sysconfig

  • syslog

  • tabnanny

  • tarfile

  • telnetlib

  • tempfile

  • termios

  • test

  • textwrap

  • threading

  • timeit

  • tkinter

  • token

  • tokenize

  • tomllib

  • trace

  • traceback

  • tracemalloc

  • tty

  • turtle

  • turtledemo

  • types

  • unicodedata

  • urllib

  • uu

  • uuid

  • venv

  • warnings

  • wave

  • weakref

  • webbrowser

  • wsgiref

  • xdrlib

  • xml

  • xmlrpc

  • zipapp

  • zipfile

  • zipimport

  • zoneinfo

Added the following Python BuiltIn libraries with NotSupported status:

  • msvcrt

  • winreg

  • winsound

Changed

  • Updated the .NET version to v9.0.0.

  • Improved EWI SPRKPY1068.

  • Bumped the Snowpark Python API version supported by the SMA from 1.24.0 to 1.25.0.

  • Updated the detailed report template, which now includes the Snowpark pandas version.

  • Changed the following libraries from ThirdPartyLib to BuiltIn:

    • configparser

    • dataclasses

    • pathlib

    • readline

    • statistics

    • zlib

Updated the mapping status for the following Pandas elements, from Direct to Partial:

  • pandas.core.frame.DataFrame.add

  • pandas.core.frame.DataFrame.aggregate

  • pandas.core.frame.DataFrame.all

  • pandas.core.frame.DataFrame.apply

  • pandas.core.frame.DataFrame.astype

  • pandas.core.frame.DataFrame.cumsum

  • pandas.core.frame.DataFrame.div

  • pandas.core.frame.DataFrame.dropna

  • pandas.core.frame.DataFrame.eq

  • pandas.core.frame.DataFrame.ffill

  • pandas.core.frame.DataFrame.fillna

  • pandas.core.frame.DataFrame.floordiv

  • pandas.core.frame.DataFrame.ge

  • pandas.core.frame.DataFrame.groupby

  • pandas.core.frame.DataFrame.gt

  • pandas.core.frame.DataFrame.idxmax

  • pandas.core.frame.DataFrame.idxmin

  • pandas.core.frame.DataFrame.inf

  • pandas.core.frame.DataFrame.join

  • pandas.core.frame.DataFrame.le

  • pandas.core.frame.DataFrame.loc

  • pandas.core.frame.DataFrame.lt

  • pandas.core.frame.DataFrame.mask

  • pandas.core.frame.DataFrame.merge

  • pandas.core.frame.DataFrame.mod

  • pandas.core.frame.DataFrame.mul

  • pandas.core.frame.DataFrame.ne

  • pandas.core.frame.DataFrame.nunique

  • pandas.core.frame.DataFrame.pivot_table

  • pandas.core.frame.DataFrame.pow

  • pandas.core.frame.DataFrame.radd

  • pandas.core.frame.DataFrame.rank

  • pandas.core.frame.DataFrame.rdiv

  • pandas.core.frame.DataFrame.rename

  • pandas.core.frame.DataFrame.replace

  • pandas.core.frame.DataFrame.resample

  • pandas.core.frame.DataFrame.rfloordiv

  • pandas.core.frame.DataFrame.rmod

  • pandas.core.frame.DataFrame.rmul

  • pandas.core.frame.DataFrame.rolling

  • pandas.core.frame.DataFrame.round

  • pandas.core.frame.DataFrame.rpow

  • pandas.core.frame.DataFrame.rsub

  • pandas.core.frame.DataFrame.rtruediv

  • pandas.core.frame.DataFrame.shift

  • pandas.core.frame.DataFrame.skew

  • pandas.core.frame.DataFrame.sort_index

  • pandas.core.frame.DataFrame.sort_values

  • pandas.core.frame.DataFrame.sub

  • pandas.core.frame.DataFrame.to_dict

  • pandas.core.frame.DataFrame.transform

  • pandas.core.frame.DataFrame.transpose

  • pandas.core.frame.DataFrame.truediv

  • pandas.core.frame.DataFrame.var

  • pandas.core.indexes.datetimes.date_range

  • pandas.core.reshape.concat.concat

  • pandas.core.reshape.melt.melt

  • pandas.core.reshape.merge.merge

  • pandas.core.reshape.pivot.pivot_table

  • pandas.core.reshape.tile.cut

  • pandas.core.series.Series.add

  • pandas.core.series.Series.aggregate

  • pandas.core.series.Series.all

  • pandas.core.series.Series.any

  • pandas.core.series.Series.cumsum

  • pandas.core.series.Series.div

  • pandas.core.series.Series.dropna

  • pandas.core.series.Series.eq

  • pandas.core.series.Series.ffill

  • pandas.core.series.Series.fillna

  • pandas.core.series.Series.floordiv

  • pandas.core.series.Series.ge

  • pandas.core.series.Series.gt

  • pandas.core.series.Series.lt

  • pandas.core.series.Series.mask

  • pandas.core.series.Series.mod

  • pandas.core.series.Series.mul

  • pandas.core.series.Series.multiply

  • pandas.core.series.Series.ne

  • pandas.core.series.Series.pow

  • pandas.core.series.Series.quantile

  • pandas.core.series.Series.radd

  • pandas.core.series.Series.rank

  • pandas.core.series.Series.rdiv

  • pandas.core.series.Series.rename

  • pandas.core.series.Series.replace

  • pandas.core.series.Series.resample

  • pandas.core.series.Series.rfloordiv

  • pandas.core.series.Series.rmod

  • pandas.core.series.Series.rmul

  • pandas.core.series.Series.rolling

  • pandas.core.series.Series.rpow

  • pandas.core.series.Series.rsub

  • pandas.core.series.Series.rtruediv

  • pandas.core.series.Series.sample

  • pandas.core.series.Series.shift

  • pandas.core.series.Series.skew

  • pandas.core.series.Series.sort_index

  • pandas.core.series.Series.sort_values

  • pandas.core.series.Series.std

  • pandas.core.series.Series.sub

  • pandas.core.series.Series.subtract

  • pandas.core.series.Series.truediv

  • pandas.core.series.Series.value_counts

  • pandas.core.series.Series.var

  • pandas.core.series.Series.where

  • pandas.core.tools.numeric.to_numeric

Updated the mapping status for the following Pandas elements, from NotSupported to Direct:

  • pandas.core.frame.DataFrame.attrs

  • pandas.core.indexes.base.Index.to_numpy

  • pandas.core.series.Series.str.len

  • pandas.io.html.read_html

  • pandas.io.xml.read_xml

  • pandas.core.indexes.datetimes.DatetimeIndex.mean

  • pandas.core.resample.Resampler.indices

  • pandas.core.resample.Resampler.nunique

  • pandas.core.series.Series.items

  • pandas.core.tools.datetimes.to_datetime

  • pandas.io.sas.sasreader.read_sas

  • pandas.core.frame.DataFrame.style

  • pandas.core.frame.DataFrame.items

  • pandas.core.groupby.generic.DataFrameGroupBy.head

  • pandas.core.groupby.generic.DataFrameGroupBy.median

  • pandas.core.groupby.generic.DataFrameGroupBy.min

  • pandas.core.groupby.generic.DataFrameGroupBy.nunique

  • pandas.core.groupby.generic.DataFrameGroupBy.tail

  • pandas.core.indexes.base.Index.is_boolean

  • pandas.core.indexes.base.Index.is_floating

  • pandas.core.indexes.base.Index.is_integer

  • pandas.core.indexes.base.Index.is_monotonic_decreasing

  • pandas.core.indexes.base.Index.is_monotonic_increasing

  • pandas.core.indexes.base.Index.is_numeric

  • pandas.core.indexes.base.Index.is_object

  • pandas.core.indexes.base.Index.max

  • pandas.core.indexes.base.Index.min

  • pandas.core.indexes.base.Index.name

  • pandas.core.indexes.base.Index.names

  • pandas.core.indexes.base.Index.rename

  • pandas.core.indexes.base.Index.set_names

  • pandas.core.indexes.datetimes.DatetimeIndex.day_name

  • pandas.core.indexes.datetimes.DatetimeIndex.month_name

  • pandas.core.indexes.datetimes.DatetimeIndex.time

  • pandas.core.indexes.timedeltas.TimedeltaIndex.ceil

  • pandas.core.indexes.timedeltas.TimedeltaIndex.days

  • pandas.core.indexes.timedeltas.TimedeltaIndex.floor

  • pandas.core.indexes.timedeltas.TimedeltaIndex.microseconds

  • pandas.core.indexes.timedeltas.TimedeltaIndex.nanoseconds

  • pandas.core.indexes.timedeltas.TimedeltaIndex.round

  • pandas.core.indexes.timedeltas.TimedeltaIndex.seconds

  • pandas.core.reshape.pivot.crosstab

  • pandas.core.series.Series.dt.round

  • pandas.core.series.Series.dt.time

  • pandas.core.series.Series.dt.weekday

  • pandas.core.series.Series.is_monotonic_decreasing

  • pandas.core.series.Series.is_monotonic_increasing

Updated the mapping status for the following Pandas elements, from NotSupported to Partial:

  • pandas.core.frame.DataFrame.align

  • pandas.core.series.Series.align

  • pandas.core.frame.DataFrame.tz_convert

  • pandas.core.frame.DataFrame.tz_localize

  • pandas.core.groupby.generic.DataFrameGroupBy.fillna

  • pandas.core.groupby.generic.SeriesGroupBy.fillna

  • pandas.core.indexes.datetimes.bdate_range

  • pandas.core.indexes.datetimes.DatetimeIndex.std

  • pandas.core.indexes.timedeltas.TimedeltaIndex.mean

  • pandas.core.resample.Resampler.asfreq

  • pandas.core.resample.Resampler.quantile

  • pandas.core.series.Series.map

  • pandas.core.series.Series.tz_convert

  • pandas.core.series.Series.tz_localize

  • pandas.core.window.expanding.Expanding.count

  • pandas.core.window.rolling.Rolling.count

  • pandas.core.groupby.generic.DataFrameGroupBy.aggregate

  • pandas.core.groupby.generic.SeriesGroupBy.aggregate

  • pandas.core.frame.DataFrame.applymap

  • pandas.core.series.Series.apply

  • pandas.core.groupby.generic.DataFrameGroupBy.bfill

  • pandas.core.groupby.generic.DataFrameGroupBy.ffill

  • pandas.core.groupby.generic.SeriesGroupBy.bfill

  • pandas.core.groupby.generic.SeriesGroupBy.ffill

  • pandas.core.frame.DataFrame.backfill

  • pandas.core.frame.DataFrame.bfill

  • pandas.core.frame.DataFrame.compare

  • pandas.core.frame.DataFrame.unstack

  • pandas.core.frame.DataFrame.asfreq

  • pandas.core.series.Series.backfill

  • pandas.core.series.Series.bfill

  • pandas.core.series.Series.compare

  • pandas.core.series.Series.unstack

  • pandas.core.series.Series.asfreq

  • pandas.core.series.Series.argmax

  • pandas.core.series.Series.argmin

  • pandas.core.indexes.accessors.CombinedDatetimelikeProperties.microsecond

  • pandas.core.indexes.accessors.CombinedDatetimelikeProperties.nanosecond

  • pandas.core.indexes.accessors.CombinedDatetimelikeProperties.day_name

  • pandas.core.indexes.accessors.CombinedDatetimelikeProperties.month_name

  • pandas.core.indexes.accessors.CombinedDatetimelikeProperties.month_start

  • pandas.core.indexes.accessors.CombinedDatetimelikeProperties.month_end

  • pandas.core.indexes.accessors.CombinedDatetimelikeProperties.is_year_start

  • pandas.core.indexes.accessors.CombinedDatetimelikeProperties.is_year_end

  • pandas.core.indexes.accessors.CombinedDatetimelikeProperties.is_quarter_start

  • pandas.core.indexes.accessors.CombinedDatetimelikeProperties.is_quarter_end

  • pandas.core.indexes.accessors.CombinedDatetimelikeProperties.is_leap_year

  • pandas.core.indexes.accessors.CombinedDatetimelikeProperties.floor

  • pandas.core.indexes.accessors.CombinedDatetimelikeProperties.ceil

  • pandas.core.groupby.generic.DataFrameGroupBy.idxmax

  • pandas.core.groupby.generic.DataFrameGroupBy.idxmin

  • pandas.core.groupby.generic.DataFrameGroupBy.std

  • pandas.core.tools.timedeltas.to_timedelta

Known Issues

  • This version contains an issue when converting from the sample project option; it will be fixed in the next release.

Version 2.4.3 (Jan 9, 2025)

Application & CLI Version 2.4.3

Desktop App

  • Added a link to the troubleshooting guide in the crash report modal.

Included SMA Core Versions

  • Snowpark Conversion Core 4.15.0

Added

  • Added the following PySpark elements to the ConversionStatusPySpark.csv file as NotSupported:

    • pyspark.sql.streaming.readwriter.DataStreamReader.table

    • pyspark.sql.streaming.readwriter.DataStreamReader.schema

    • pyspark.sql.streaming.readwriter.DataStreamReader.options

    • pyspark.sql.streaming.readwriter.DataStreamReader.option

    • pyspark.sql.streaming.readwriter.DataStreamReader.load

    • pyspark.sql.streaming.readwriter.DataStreamReader.format

    • pyspark.sql.streaming.query.StreamingQuery.awaitTermination

    • pyspark.sql.streaming.readwriter.DataStreamWriter.partitionBy

    • pyspark.sql.streaming.readwriter.DataStreamWriter.toTable

    • pyspark.sql.streaming.readwriter.DataStreamWriter.trigger

    • pyspark.sql.streaming.readwriter.DataStreamWriter.queryName

    • pyspark.sql.streaming.readwriter.DataStreamWriter.outputMode

    • pyspark.sql.streaming.readwriter.DataStreamWriter.format

    • pyspark.sql.streaming.readwriter.DataStreamWriter.option

    • pyspark.sql.streaming.readwriter.DataStreamWriter.foreachBatch

    • pyspark.sql.streaming.readwriter.DataStreamWriter.start

Changed

  • Updated the format of the Hive SQL EWIs:

    • SPRKHVSQL1001

    • SPRKHVSQL1002

    • SPRKHVSQL1003

    • SPRKHVSQL1004

    • SPRKHVSQL1005

    • SPRKHVSQL1006

  • Updated the format of the Spark SQL EWIs:

    • SPRKSPSQL1001

    • SPRKSPSQL1002

    • SPRKSPSQL1003

    • SPRKSPSQL1004

    • SPRKSPSQL1005

    • SPRKSPSQL1006

Fixed

  • Fixed a bug where some PySpark elements were not identified by the tool.

  • Fixed a mismatch between the number of ThirdParty identifier calls and ThirdParty import calls.

Version 2.4.2 (Dec 13, 2024)

Application & CLI Version 2.4.2

Included SMA Core Versions

  • Snowpark Conversion Core 4.14.0

Added

  • Added the following Spark elements to ConversionStatusPySpark.csv:

    • pyspark.broadcast.Broadcast.value

    • pyspark.conf.SparkConf.getAll

    • pyspark.conf.SparkConf.setAll

    • pyspark.conf.SparkConf.setMaster

    • pyspark.context.SparkContext.addFile

    • pyspark.context.SparkContext.addPyFile

    • pyspark.context.SparkContext.binaryFiles

    • pyspark.context.SparkContext.setSystemProperty

    • pyspark.context.SparkContext.version

    • pyspark.files.SparkFiles

    • pyspark.files.SparkFiles.get

    • pyspark.rdd.RDD.count

    • pyspark.rdd.RDD.distinct

    • pyspark.rdd.RDD.reduceByKey

    • pyspark.rdd.RDD.saveAsTextFile

    • pyspark.rdd.RDD.take

    • pyspark.rdd.RDD.zipWithIndex

    • pyspark.sql.context.SQLContext.udf

    • pyspark.sql.types.StructType.simpleString

Changed

  • Updated the documentation of the EWIs PNDSPY1001, PNDSPY1002, PNDSPY1003, and SPRKSCL1137 to align with a standardized format, ensuring consistency and clarity across all the EWIs.

  • Updated the documentation of the Scala EWIs SPRKSCL1106 and SPRKSCL1107 to align with a standardized format, ensuring consistency and clarity across all the EWIs.

Fixed

  • Fixed a bug that showed UserDefined symbols in the third-party usages inventory.

Version 2.4.1 (Dec 4, 2024)

Application & CLI Version 2.4.1

Included SMA Core Versions

  • Snowpark Conversion Core 4.13.1

Command Line Interface

Changed

  • Added a timestamp to the output folder.

Snowpark Conversion Core 4.13.1

Added

  • Added a ‘Source Language’ column to the library mapping table.

  • Added Others as a new category in the Pandas API Summary table of the DetailedReport.docx.

Changed

  • Updated the documentation for Python EWI SPRKPY1058.

  • Updated the message for the pandas EWI PNDSPY1002 to show the related pandas element.

  • Updated how the .csv reports are generated; they are now overwritten after a second run.

Fixed

  • Fixed a bug where notebook files were not generated in the output.

  • Fixed the replacer for the get and set methods of pyspark.sql.conf.RuntimeConfig; the replacer now matches the correct full names.

  • Fixed the wrong version in the query tag.

  • Corrected UserDefined packages to be reported as ThirdPartyLib.

Version 2.3.1 (Nov 14, 2024)

Application & CLI Version 2.3.1

Included SMA Core Versions

  • Snowpark Conversion Core 4.12.0

Desktop App

Fixed

  • Fixed a case-sensitivity issue in the SQL options.

Removed

  • Removed the platform name from the show-action message.

Snowpark Conversion Core 4.12.0

Added

  • Added support for Snowpark Python 1.23.0 and 1.24.0.

  • Added a new EWI for the pyspark.sql.dataframe.DataFrame.writeTo function. All the usages of this function will now have the EWI SPRKPY1087.

Changed

  • Updated the documentation of the Scala EWIs from SPRKSCL1137 to SPRKSCL1156 to align with a standardized format, ensuring consistency and clarity across all the EWIs.

  • Updated the documentation of the Scala EWIs from SPRKSCL1117 to SPRKSCL1136 to align with a standardized format, ensuring consistency and clarity across all the EWIs.

  • Updated the messages shown for the following EWIs:

    • SPRKPY1082

    • SPRKPY1083

  • Updated the documentation of the Scala EWIs from SPRKSCL1100 to SPRKSCL1105, from SPRKSCL1108 to SPRKSCL1116; from SPRKSCL1157 to SPRKSCL1175; to align with a standardized format, ensuring consistency and clarity across all the EWIs.

  • Updated the mapping status of the following PySpark elements from NotSupported to Direct with an EWI:

    • pyspark.sql.readwriter.DataFrameWriter.option => snowflake.snowpark.DataFrameWriter.option: All the usages of this function now have the EWI SPRKPY1088

    • pyspark.sql.readwriter.DataFrameWriter.options => snowflake.snowpark.DataFrameWriter.options: All the usages of this function now have the EWI SPRKPY1089

  • Updated the mapping status of the following PySpark element from Workaround to Rename:

    • pyspark.sql.readwriter.DataFrameWriter.partitionBy => snowflake.snowpark.DataFrameWriter.partition_by

  • Updated the EWI documentation for: SPRKSCL1000, SPRKSCL1001, SPRKSCL1002, SPRKSCL1100, SPRKSCL1101, SPRKSCL1102, SPRKSCL1103, SPRKSCL1104, SPRKSCL1105.
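Taken together, the writer updates in this release amount to method-name changes on the DataFrame writer chain: partitionBy becomes partition_by, while option and options keep their names and gain EWIs SPRKPY1088/SPRKPY1089. A toy string-level sketch of the rename (the SMA itself rewrites the syntax tree, and the example chain below is hypothetical):

```python
# Method renames from the v2.3.1 mapping updates above; illustrative only.
RENAMES = {"partitionBy": "partition_by"}

def rewrite_writer_chain(expr: str) -> str:
    """Rename writer methods in a chained expression; string-level toy version."""
    for old, new in RENAMES.items():
        expr = expr.replace(f".{old}(", f".{new}(")
    return expr

spark_code = 'df.write.partitionBy("year").option("header", True).csv("out")'
snowpark_code = rewrite_writer_chain(spark_code)
print(snowpark_code)  # df.write.partition_by("year").option("header", True).csv("out")
```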

Removed

  • Removed the pyspark.sql.dataframe.DataFrameStatFunctions.writeTo element from the conversion status; this element does not exist.

Deprecated

  • Deprecated the following EWI codes:

    • SPRKPY1081

    • SPRKPY1084

Version 2.3.0 (Oct 30, 2024)

Application & CLI Version 2.3.0

  • Snowpark Conversion Core 4.11.0

Snowpark Conversion Core 4.11.0

Added

  • Added a new column called Url to the Issues.csv file, which links to the corresponding EWI documentation.
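
  With the new column, each issue row can be followed straight to its EWI page. A sketch of consuming the file with the stdlib csv module (the sample row, the Description column name, and the URL value are placeholders; only the Url column is documented above):

  ```python
  import csv
  import io

  # Synthetic Issues.csv fragment for illustration; real files contain more columns.
  sample = (
      "Code,Description,Url\n"
      "SPRKPY1087,writeTo is not supported,https://example.com/ewis/SPRKPY1087\n"
  )

  rows = list(csv.DictReader(io.StringIO(sample)))
  for row in rows:
      # Each issue code now carries a direct link to its documentation.
      print(row["Code"], "->", row["Url"])
  ```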

  • Added new EWIs for the following Spark elements:

    • [SPRKPY1082] pyspark.sql.readwriter.DataFrameReader.load

    • [SPRKPY1083] pyspark.sql.readwriter.DataFrameWriter.save

    • [SPRKPY1084] pyspark.sql.readwriter.DataFrameWriter.option

    • [SPRKPY1085] pyspark.ml.feature.VectorAssembler

    • [SPRKPY1086] pyspark.ml.linalg.VectorUDT

  • Added 38 new Pandas elements:

    • pandas.core.frame.DataFrame.select

    • pandas.core.frame.DataFrame.str

    • pandas.core.frame.DataFrame.str.replace

    • pandas.core.frame.DataFrame.str.upper

    • pandas.core.frame.DataFrame.to_list

    • pandas.core.frame.DataFrame.tolist

    • pandas.core.frame.DataFrame.unique

    • pandas.core.frame.DataFrame.values.tolist

    • pandas.core.frame.DataFrame.withColumn

    • pandas.core.groupby.generic._SeriesGroupByScalar

    • pandas.core.groupby.generic._SeriesGroupByScalar[S1].agg

    • pandas.core.groupby.generic._SeriesGroupByScalar[S1].aggregate

    • pandas.core.indexes.datetimes.DatetimeIndex.year

    • pandas.core.series.Series.columns

    • pandas.core.tools.datetimes.to_datetime.date

    • pandas.core.tools.datetimes.to_datetime.dt.strftime

    • pandas.core.tools.datetimes.to_datetime.strftime

    • pandas.io.parsers.readers.TextFileReader.apply

    • pandas.io.parsers.readers.TextFileReader.astype

    • pandas.io.parsers.readers.TextFileReader.columns

    • pandas.io.parsers.readers.TextFileReader.copy

    • pandas.io.parsers.readers.TextFileReader.drop

    • pandas.io.parsers.readers.TextFileReader.drop_duplicates

    • pandas.io.parsers.readers.TextFileReader.fillna

    • pandas.io.parsers.readers.TextFileReader.groupby

    • pandas.io.parsers.readers.TextFileReader.head

    • pandas.io.parsers.readers.TextFileReader.iloc

    • pandas.io.parsers.readers.TextFileReader.isin

    • pandas.io.parsers.readers.TextFileReader.iterrows

    • pandas.io.parsers.readers.TextFileReader.loc

    • pandas.io.parsers.readers.TextFileReader.merge

    • pandas.io.parsers.readers.TextFileReader.rename

    • pandas.io.parsers.readers.TextFileReader.shape

    • pandas.io.parsers.readers.TextFileReader.to_csv

    • pandas.io.parsers.readers.TextFileReader.to_excel

    • pandas.io.parsers.readers.TextFileReader.unique

    • pandas.io.parsers.readers.TextFileReader.values

    • pandas.tseries.offsets

Version 2.2.3 (Oct 24, 2024)

Application Version 2.2.3

Included SMA Core Versions

  • Snowpark Conversion Core 4.10.0

Desktop App

Fixed

  • Fixed a bug where the SMA appeared in the menu bar of the Windows version labeled as SnowConvert instead of Snowpark Migration Accelerator.

  • Fixed a bug that caused the SMA to crash when it did not have read and write permissions to the .config directory in macOS and the AppData directory in Windows.

Command Line Interface

Changed

  • Renamed the CLI executable name from snowct to sma.

  • Removed the source language argument; it is no longer necessary to specify whether you are running a Python or a Scala assessment/conversion.

  • Expanded the command-line arguments supported by the CLI by adding the following new arguments:

    • --enableJupyter | -j: Flag to indicate whether the conversion of Databricks notebooks to Jupyter is enabled.

    • --sql | -f: Database engine syntax to be used when a SQL command is detected.

    • --customerEmail | -e: Configure the customer email.

    • --customerCompany | -c: Configure the customer company.

    • --projectName | -p: Configure the customer project.
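
  Putting the new arguments together, an unattended run could be assembled like this (the flag names come from the list above; the values shown, including the --sql value, are placeholder assumptions, and actually invoking the sma binary is left out):

  ```python
  # Sketch: building an SMA CLI invocation with the new arguments.
  # Flag names are from the release notes; the values are placeholders.
  args = {
      "--sql": "databricks",            # placeholder database engine syntax
      "--customerEmail": "user@example.com",
      "--customerCompany": "Example Corp",
      "--projectName": "migration-poc",
  }

  cmd = ["sma"]
  for flag, value in args.items():
      cmd += [flag, value]
  cmd += ["--enableJupyter", "--yes"]   # flags; --yes skips the confirmation prompt

  print(" ".join(cmd))
  ```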

  • Updated some texts to reflect the correct name of the application, ensuring consistency and clarity across all messages.

  • Updated the Terms of Use of the application.

  • Updated and expanded the CLI documentation to reflect the latest features, improvements, and changes.

  • Improved the text shown before proceeding with an SMA execution.

  • Updated the CLI to accept “Yes” as a valid argument when prompting for user confirmation.

  • Allowed the CLI to continue the execution without waiting for user interaction by specifying the argument -y or --yes.

  • Updated the help information of the --sql argument to show the values that this argument expects.

Snowpark Conversion Core Version 4.10.0

Added

  • Added a new EWI for the pyspark.sql.readwriter.DataFrameWriter.partitionBy function. All the usages of this function will now have the EWI SPRKPY1081.

  • Added a new column called Technology to the ImportUsagesInventory.csv file.

Changed

  • Updated the Third-Party Libraries readiness score to also take into account the Unknown libraries.
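
  This change means Unknown libraries now count against the denominator of the score. A plausible sketch of the adjusted ratio (the formula is an illustration of the idea, not the SMA's exact definition):

  ```python
  def readiness_score(supported: int, unsupported: int, unknown: int) -> float:
      """Share of library calls that are supported; Unknown now counts in the total."""
      total = supported + unsupported + unknown
      return supported / total if total else 0.0

  # With 8 supported, 1 unsupported and 1 unknown call, the score drops
  # from 8/9 to 8/10 once Unknown libraries are included.
  print(round(readiness_score(8, 1, 1), 2))  # 0.8
  ```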

  • Updated the AssessmentFiles.zip file to include .json files instead of .pam files.

  • Improved the CSV-to-JSON conversion mechanism to speed up inventory processing.

  • Improved the documentation of the following EWIs:

    • SPRKPY1029

    • SPRKPY1054

    • SPRKPY1055

    • SPRKPY1063

    • SPRKPY1075

    • SPRKPY1076

  • Updated the mapping status of the following Spark Scala elements from Direct to Rename.

    • org.apache.spark.sql.functions.shiftLeft => com.snowflake.snowpark.functions.shiftleft

    • org.apache.spark.sql.functions.shiftRight => com.snowflake.snowpark.functions.shiftright

  • Updated the mapping status of the following Spark Scala elements from Not Supported to Direct.

    • org.apache.spark.sql.functions.shiftleft => com.snowflake.snowpark.functions.shiftleft

    • org.apache.spark.sql.functions.shiftright => com.snowflake.snowpark.functions.shiftright

Fixed

  • Fixed a bug that caused the SMA to incorrectly populate the Origin column of the ImportUsagesInventory.csv file.

  • Fixed a bug that caused the SMA to not classify imports of the libraries io, json, logging and unittest as Python built-in imports in the ImportUsagesInventory.csv file and in the DetailedReport.docx file.
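
  A quick way to confirm the corrected classification: io, json, logging and unittest are all part of Python's standard library. A sketch of such a check using the stdlib's own module list (this is not the SMA's implementation; sys.stdlib_module_names requires Python 3.10+):

  ```python
  import sys

  def is_builtin_import(module: str) -> bool:
      """Classify a top-level import as a Python built-in (stdlib) module."""
      top_level = module.split(".")[0]
      return top_level in sys.stdlib_module_names

  for name in ("io", "json", "logging", "unittest", "pyspark"):
      print(name, is_builtin_import(name))
  ```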

Version 2.2.2 (Oct 11, 2024)

Application Version 2.2.2

Updated features include:

  • Snowpark Conversion Core 4.8.0

Snowpark Conversion Core Version 4.8.0

Added

  • Added the EwiCatalog.csv and .md files to reorganize the documentation.

  • Added the mapping status of pyspark.sql.functions.ln as Direct.

  • Added a transformation for pyspark.context.SparkContext.getOrCreate.

    • Check the EWI SPRKPY1080 for further details.

  • Added improvements to the SymbolTable and to the inferred types of function parameters.

  • The SymbolTable now supports static methods and no longer assumes the first parameter is self.

  • Added documentation for the following missing EWIs:

    • SPRKHVSQL1005

    • SPRKHVSQL1006

    • SPRKSPSQL1005

    • SPRKSPSQL1006

    • SPRKSCL1002

    • SPRKSCL1170

    • SPRKSCL1171

    • SPRKPY1057

    • SPRKPY1058

    • SPRKPY1059

    • SPRKPY1060

    • SPRKPY1061

    • SPRKPY1064

    • SPRKPY1065

    • SPRKPY1066

    • SPRKPY1067

    • SPRKPY1069

    • SPRKPY1070

    • SPRKPY1077

    • SPRKPY1078

    • SPRKPY1079

    • SPRKPY1101

Changed

  • Updated the mapping status of:

    • pyspark.sql.functions.array_remove from NotSupported to Direct.

Fixed

  • Fixed the Code File Sizing table in the Detailed Report to exclude .sql and .hql files and added an Extra Large row to the table.

  • Fixed the missing update_query_tag when a SparkSession is defined across multiple lines in Python.

  • Fixed the missing update_query_tag when a SparkSession is defined across multiple lines in Scala.

  • Fixed the missing EWI SPRKHVSQL1001 on some SQL statements with parsing errors.

  • Fixed the preservation of newline values inside string literals.

  • Fixed the total lines-of-code count shown in the file type summary table.

  • Fixed an issue where the parsing score was shown as 0 when a file was recognized successfully.

  • Fixed the LOC count in the inventory for Databricks Magic SQL cells.

Version 2.2.0 (Sep 26, 2024)

Application Version 2.2.0

Updated features include:

  • Snowpark Conversion Core 4.6.0

Snowpark Conversion Core Version 4.6.0

Added

  • Added a transformation for pyspark.sql.readwriter.DataFrameReader.parquet.

  • Added a transformation for pyspark.sql.readwriter.DataFrameReader.option when it is chained to a Parquet method call.

Changed

  • Updated the mapping status of:

    • pyspark.sql.types.StructType.fields from NotSupported to Direct.

    • pyspark.sql.types.StructType.names from NotSupported to Direct.

    • pyspark.context.SparkContext.setLogLevel from Workaround to Transformation. - More detail can be found in EWIs SPRKPY1078 and SPRKPY1079

    • org.apache.spark.sql.functions.round from WorkAround to Direct.

    • org.apache.spark.sql.functions.udf from NotDefined to Transformation. - More detail can be found in EWIs SPRKSCL1174 and SPRKSCL1175

  • Updated the mapping status of the following Spark elements from DirectHelper to Direct:

    • org.apache.spark.sql.functions.hex

    • org.apache.spark.sql.functions.unhex

    • org.apache.spark.sql.functions.shiftleft

    • org.apache.spark.sql.functions.shiftright

    • org.apache.spark.sql.functions.reverse

    • org.apache.spark.sql.functions.isnull

    • org.apache.spark.sql.functions.unix_timestamp

    • org.apache.spark.sql.functions.randn

    • org.apache.spark.sql.functions.signum

    • org.apache.spark.sql.functions.sign

    • org.apache.spark.sql.functions.collect_list

    • org.apache.spark.sql.functions.log10

    • org.apache.spark.sql.functions.log1p

    • org.apache.spark.sql.functions.base64

    • org.apache.spark.sql.functions.unbase64

    • org.apache.spark.sql.functions.regexp_extract

    • org.apache.spark.sql.functions.expr

    • org.apache.spark.sql.functions.date_format

    • org.apache.spark.sql.functions.desc

    • org.apache.spark.sql.functions.asc

    • org.apache.spark.sql.functions.size

    • org.apache.spark.sql.functions.locate

    • org.apache.spark.sql.functions.ntile

Fixed

  • Fixed the value shown for the percentage of the total Pandas API.

  • Fixed the total percentage of the ImportCalls table in the Detailed Report.

Deprecated

  • Deprecated the following EWI code:

    • SPRKSCL1115

Version 2.1.7 (Sep 12, 2024)

Application Version 2.1.7

Updated features include:

  • Snowpark Conversion Core 4.5.7

  • Snowpark Conversion Core 4.5.2

Snowpark Conversion Core Version 4.5.7

Hotfixed

  • Fixed an issue where a total row was added to the Spark usages summary even when there were no usages.

  • Bumped the Python assembly to version 1.3.111:

    • Added parsing of trailing commas in multiline arguments.

Snowpark Conversion Core Version 4.5.2

Added

  • Added a transformation for pyspark.sql.readwriter.DataFrameReader.option:

    • When the chain originates from a CSV method call.

    • When the chain originates from a JSON method call.

  • Added a transformation for pyspark.sql.readwriter.DataFrameReader.json.

Changed

  • Ran the SMA on SQL strings passed to Python/Scala functions:

    • Generated the AST in Scala/Python to emit temporary SQL units.

    • Created the SqlEmbeddedUsages.csv inventory.

    • Deprecated the SqlStatementsInventroy.csv and SqlExtractionInventory.csv inventories.

    • Integrated an EWI for SQL literals that could not be processed.

    • Created a new task to process SQL-embedded code.

    • Collected information for the SqlEmbeddedUsages.csv inventory in Python.

    • Replaced the SQL-converted code as a literal in Python.

    • Updated the test cases after the implementation.

    • Created tables and views for telemetry from the SqlEmbeddedUsages inventory.

    • Collected information for the SqlEmbeddedUsages.csv report in Scala.

    • Replaced the SQL-converted code as a literal in Scala.

    • Checked the line-number ordering for embedded SQL reporting.
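
  The Python side of this collection can be pictured with the stdlib ast module: walk the tree and record string literals passed to a .sql(...) call, keeping line numbers for the ordering check. A simplified sketch only (the real SMA engine works differently; collect_embedded_sql is a hypothetical helper):

  ```python
  import ast

  def collect_embedded_sql(source: str) -> list[tuple[int, str]]:
      """Find string literals passed to .sql(...) calls, with their line numbers."""
      found = []
      for node in ast.walk(ast.parse(source)):
          if (isinstance(node, ast.Call)
                  and isinstance(node.func, ast.Attribute)
                  and node.func.attr == "sql"
                  and node.args
                  and isinstance(node.args[0], ast.Constant)
                  and isinstance(node.args[0].value, str)):
              found.append((node.lineno, node.args[0].value))
      return sorted(found)  # keep line-number order for reporting

  code = 'df = spark.sql("SELECT * FROM t1")\nspark.sql("DROP TABLE t2")'
  print(collect_embedded_sql(code))  # [(1, 'SELECT * FROM t1'), (2, 'DROP TABLE t2')]
  ```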

  • Filled the SqlFunctionsInfo.csv with the SQL functions documented for SparkSQL and HiveSQL

  • Updated the mapping status of:

    • org.apache.spark.sql.SparkSession.sparkContext from NotSupported to Transformation.

    • org.apache.spark.sql.Builder.config from NotSupported to Transformation. With this new mapping status, the SMA will remove all the usages of this function from the source code.

Version 2.1.6 (Sep 5, 2024)

Application Version 2.1.6

  • Hotfix changes in Snowpark Engines Core version 4.5.1

Spark Conversion Core Version 4.5.1

Hotfix

  • Added a mechanism to convert the temporary Databricks notebooks generated by the SMA from exported Databricks notebooks.

Version 2.1.5 (Aug 29, 2024)

Application Version 2.1.5

Updated features include:

  • Updated Spark Conversion Core: 4.3.2

Spark Conversion Core Version 4.3.2

Added

  • Added a mechanism (via decoration) to get the line and column of elements identified in notebook cells.

  • Added an EWI for pyspark.sql.functions.from_json.

  • Added a transformation for pyspark.sql.readwriter.DataFrameReader.csv.

  • Enabled the query tag mechanism for Scala files.

  • Added the Code Analysis Score and additional links to the Detailed Report.

  • Added a column called OriginFilePath to InputFilesInventory.csv.

Changed

  • Updated the mapping status of pyspark.sql.functions.from_json from Not Supported to Transformation.

  • Updated the mapping status of the following Spark elements from Workaround to Direct:

    • org.apache.sql.functions.countDistinct

    • org.apache.sql.functions.max

    • org.apache.sql.functions.min

    • org.apache.sql.functions.mean

Deprecated

  • Deprecated the following EWI codes:

    • SPRKSCL1135

    • SPRKSCL1136

    • SPRKSCL1153

    • SPRKSCL1155

Fixed

  • Fixed a bug where the Spark API score was calculated incorrectly.

  • Fixed an error that prevented empty or comment-only SQL files from being generated as copies in the output folder.

  • Fixed a bug in the DetailedReport where the notebook statistics LOC and cell counts were inaccurate.

Version 2.1.2 (Aug 14, 2024)

Application Version 2.1.2

Updated features include:

  • Updated Spark Conversion Core: 4.2.0

Spark Conversion Core Version 4.2.0

Added

  • Added the Technology column to the SparkUsagesInventory.

  • Added an EWI for undefined SQL elements.

  • Added the SqlFunctions inventory.

  • Collected information for the SqlFunctions inventory.

Changed

  • The engine now processes and prints partially parsed Python files instead of leaving the original files unmodified.

  • Python notebook cells with parsing errors are also processed and printed.

Fixed

  • Fixed pandas.core.indexes.datetimes.DatetimeIndex.strftime being reported incorrectly.

  • Fixed a mismatch between the SQL readiness score and the SQL usages by support status.

  • Fixed a bug that caused the SMA to report pandas.core.series.Series.empty with an incorrect mapping status.

  • Fixed a mismatch between the Spark API usages readiness column in DetailedReport.docx and the UsagesReadyForConversion row in Assessment.json.

Version 2.1.1 (Aug 8, 2024)

Application Version 2.1.1

Updated features include:

  • Updated Spark Conversion Core: 4.1.0

Spark Conversion Core Version 4.1.0

Added

  • Added the following information to the AssessmentReport.json file:

    • The third-party libraries readiness score.

    • The number of identified third-party library calls.

    • The number of third-party library calls supported in Snowpark.

    • The color code associated with the third-party readiness score, the Spark API readiness score, and the SQL readiness score.
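
  The color code summarizes each score at a glance. A hypothetical sketch of such a mapping (the release notes only say a color code is associated with each readiness score; the thresholds below are invented for illustration):

  ```python
  # Hypothetical thresholds; the actual SMA cut-offs are not documented here.
  def score_color(score: float) -> str:
      """Map a readiness score in [0, 1] to a traffic-light color code."""
      if score >= 0.9:
          return "green"
      if score >= 0.6:
          return "yellow"
      return "red"

  print(score_color(0.95), score_color(0.7), score_color(0.3))
  ```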

  • Transformed SqlSimpleDataType in Spark create tables.

  • Added the mapping of pyspark.sql.functions.get as direct.

  • Added the mapping of pyspark.sql.functions.to_varchar as direct.

  • As part of the post-unification changes, the tool now generates the execution info file from the engine.

  • Added a replacer for pyspark.sql.SparkSession.builder.appName.

Changed

  • Updated the mapping status of the following Spark elements:

    • From Not Supported to Direct mapping: - pyspark.sql.functions.sign - pyspark.sql.functions.signum

  • Changed the notebook cells list report so that the Element column shows the kind of content of every cell.

  • Added a SCALA_READINESS_SCORE column that reports the readiness score as related only to references to the Spark API in Scala files.

  • Added partial support for transforming table properties in ALTER TABLE and ALTER VIEW.

  • Updated the conversion status of the SqlSimpleDataType node from Pending to Transformation in Spark create tables.

  • Updated the version of the Snowpark Scala API supported by the SMA from 1.7.0 to 1.12.1:

    • Updated the mapping status of: - org.apache.spark.sql.SparkSession.getOrCreate from Rename to Direct - org.apache.spark.sql.functions.sum from Workaround to Direct

  • Updated the version of the Snowpark Python API supported by the SMA from 1.15.0 to 1.20.0:

    • Updated the mapping status of: - pyspark.sql.functions.arrays_zip from Not Supported to Direct

  • Updated the mapping status of the following Pandas elements:

    • Direct mappings: - pandas.core.frame.DataFrame.any - pandas.core.frame.DataFrame.applymap

  • Updated the mapping status of the following Pandas elements:

    • From Not Supported to Direct mapping: - pandas.core.frame.DataFrame.groupby - pandas.core.frame.DataFrame.index - pandas.core.frame.DataFrame.T - pandas.core.frame.DataFrame.to_dict

    • From Not Supported to Rename mapping: - pandas.core.frame.DataFrame.map

  • Updated the mapping status of the following Pandas elements:

    • Direct mappings: - pandas.core.frame.DataFrame.where - pandas.core.groupby.generic.SeriesGroupBy.agg - pandas.core.groupby.generic.SeriesGroupBy.aggregate - pandas.core.groupby.generic.DataFrameGroupBy.agg - pandas.core.groupby.generic.DataFrameGroupBy.aggregate - pandas.core.groupby.generic.DataFrameGroupBy.apply

    • Not Supported mappings: - pandas.core.frame.DataFrame.to_parquet - pandas.core.generic.NDFrame.to_csv - pandas.core.generic.NDFrame.to_excel - pandas.core.generic.NDFrame.to_sql

  • Updated the mapping status of the following Pandas elements:

    • Direct mappings: - pandas.core.series.Series.empty - pandas.core.series.Series.apply - pandas.core.reshape.tile.qcut

    • Direct mappings with EWI: - pandas.core.series.Series.fillna - pandas.core.series.Series.astype - pandas.core.reshape.melt.melt - pandas.core.reshape.tile.cut - pandas.core.reshape.pivot.pivot_table

  • Updated the mapping status of the following Pandas elements:

    • Direct mappings: - pandas.core.series.Series.dt - pandas.core.series.Series.groupby - pandas.core.series.Series.loc - pandas.core.series.Series.shape - pandas.core.tools.datetimes.to_datetime - pandas.io.excel._base.ExcelFile

    • Not Supported mappings: - pandas.core.series.Series.dt.strftime

  • Updated the mapping status of the following Pandas elements:

    • From Not Supported to Direct mapping: - pandas.io.parquet.read_parquet - pandas.io.parsers.readers.read_csv

  • Updated the mapping status of the following Pandas elements:

    • From Not Supported to Direct mapping: - pandas.io.pickle.read_pickle - pandas.io.sql.read_sql - pandas.io.sql.read_sql_query

  • Updated the description of Understanding the SQL Readiness Score.

  • Updated PyProgramCollector to collect the packages and populate the current packages inventory with data from Python source code.

  • Updated the mapping status of pyspark.sql.SparkSession.builder.appName from Rename to Transformation.

  • Removed the following Scala integration tests:

    • AssesmentReportTest_AssessmentMode.ValidateReports_AssessmentMode

    • AssessmentReportTest_PythonAndScala_Files.ValidateReports_PythonAndScala

    • AssessmentReportTestWithoutSparkUsages.ValidateReports_WithoutSparkUsages

  • Updated the mapping status of pandas.core.generic.NDFrame.shape from Not Supported to Direct.

  • Updated the mapping status of pandas.core.series from Not Supported to Direct.

Deprecated

  • Deprecated the EWI code SPRKSCL1160 since org.apache.spark.sql.functions.sum is now a direct mapping.

Fixed

  • Fixed a bug where Custom Magics without arguments were not supported in Jupyter Notebook cells.

  • Fixed an issue where EWIs were generated incorrectly in the issues.csv report when parsing errors occurred.

  • Fixed a bug where the SMA did not process exported Databricks notebooks as Databricks notebooks.

  • Fixed a stack overflow error that occurred while handling colliding type names of declarations created inside package objects.

  • Fixed the processing of complex lambda type names involving generics, e.g., def func[X,Y](f: (Map[Option[X], Y] => Map[Y, X]))...

  • Fixed a bug where the SMA added PySpark EWI codes instead of Pandas EWI codes to Pandas elements that were not yet recognized.

  • Fixed a typo in the Detailed Report template: renamed the column from “Percentage of all Python Files” to “Percentage of all files”.

  • Fixed a bug where pandas.core.series.Series.shape was wrongly reported.