Snowpark Migration Accelerator: Issue Codes for Spark - Scala¶
SPRKSCL1126¶
Message: org.apache.spark.sql.functions.covar_pop has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.covar_pop function, which has a workaround.
Input
Below is an example of the org.apache.spark.sql.functions.covar_pop function, first used with column names as the arguments and then with column objects.
Output
The SMA adds the EWI SPRKSCL1126 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent covar_pop function that receives two column objects as arguments. For that reason, the Spark overload that receives two column objects as arguments is directly supported by Snowpark and does not require any changes.
For the overload that receives two string arguments, you can convert the strings into column objects using the com.snowflake.snowpark.functions.col function as a workaround.
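A minimal sketch of the workaround, assuming an existing Snowpark DataFrame df with hypothetical columns column1 and column2:

```scala
import com.snowflake.snowpark.functions.{covar_pop, col}

// Spark (string overload): df.select(covar_pop("column1", "column2"))
// Snowpark workaround: wrap the column names with col to get column objects.
val result = df.select(covar_pop(col("column1"), col("column2")))
```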
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1112¶
Message: *spark element* is not supported
Category: Conversion error
Description¶
This issue appears when the SMA detects the use of a Spark element that is not supported by Snowpark and does not have its own associated error code. This is a generic error code used by the SMA for any unsupported Spark element.
Scenario¶
Input
Below is an example of a Spark element that is not supported by Snowpark and therefore generates this EWI.
Output
The SMA adds the EWI SPRKSCL1112 to the output code to let you know that this element is not supported by Snowpark.
Recommended fix
Since this is a generic error code that applies to a range of unsupported functions, there is no single, specific fix. The appropriate action depends on the element being used.
Please note that even if an element is not supported, it does not necessarily mean that a solution or workaround cannot be found. It only means that the SMA tool itself cannot find the solution.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1143¶
Message: An error occurred when loading the symbol table
Category: Conversion error
Description¶
This issue appears when there is an error loading symbols into the SMA symbol table. The symbol table is part of the underlying architecture of the SMA, which enables more complex conversions.
Additional recommendations¶
This is unlikely to be an error in the source code itself, but rather is an error in how the SMA processes the source code. The best resolution would be to post an issue in the SMA.
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1153¶
Warning
This issue code has been deprecated since Spark Conversion Core Version 4.3.2
Message: org.apache.spark.sql.functions.max has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.max function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.max function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1153 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent max function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
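As an illustrative sketch (df and the column name value are hypothetical):

```scala
import com.snowflake.snowpark.functions.{max, col}

// Spark (string overload): df.select(max("value"))
// Snowpark workaround: convert the column name into a column object.
val result = df.select(max(col("value")))
```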
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1102¶
This issue code has been deprecated since Spark Conversion Core 2.3.22
Message: Explode is not supported
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.explode function, which is not supported by Snowpark.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.explode function used to get the consolidated information of the array fields of the dataset.
Output
The SMA adds the EWI SPRKSCL1102 to the output code to let you know that this function is not supported by Snowpark.
Recommended fix
Since explode is not supported by Snowpark, the flatten function can be used as a substitute.
The following fix creates a flatten of the dfExplode dataframe, and then performs the query to replicate the result in Spark.
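A hedged sketch of that fix, assuming dfExplode has an array column named values; the flatten operation exposes each array element in its VALUE output column:

```scala
import com.snowflake.snowpark.functions.col

// Spark: dfExplode.select(explode(col("values")))
// Snowpark substitute: flatten the dataframe, then query the VALUE
// column produced by the flatten operation.
val flattened = dfExplode.flatten(col("values"))
val result = flattened.select(col("VALUE"))
```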
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1136¶
Warning
This issue code has been deprecated since Spark Conversion Core 4.3.2
Message: org.apache.spark.sql.functions.min has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.min function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.min function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1136 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent min function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that takes a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1167¶
Message: Project file not found in input folder
Category: Warning
Description¶
This issue appears when the SMA detects that the input folder does not contain a project configuration file. The project configuration files supported by the SMA are the following:
build.sbt
build.gradle
pom.xml
Additional recommendations¶
Include a project configuration file in the input folder.
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1147¶
Message: org.apache.spark.sql.functions.tanh has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.tanh function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.tanh function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1147 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent tanh function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1116¶
Warning
This issue code has been deprecated since Spark Conversion Core Version 2.40.1
Message: org.apache.spark.sql.functions.split has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.split function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.split function that generates this EWI.
Output
The SMA adds the EWI SPRKSCL1116 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
For the Spark overload that receives two arguments, you can convert the second argument into a column object using the com.snowflake.snowpark.functions.lit function as a workaround.
The overload that receives three arguments is not yet supported by Snowpark, and there is no workaround.
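A sketch of the two-argument workaround (df and the sentence column are hypothetical):

```scala
import com.snowflake.snowpark.functions.{split, col, lit}

// Spark: df.select(split(col("sentence"), " "))
// Snowpark workaround: pass the delimiter as a column object with lit.
val result = df.select(split(col("sentence"), lit(" ")))
```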
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1122¶
Message: org.apache.spark.sql.functions.corr has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.corr function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.corr function, first used with column names as the arguments and then with column objects.
Output
The SMA adds the EWI SPRKSCL1122 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent corr function that receives two column objects as arguments. For that reason, the Spark overload that receives column objects as arguments is directly supported by Snowpark and does not require any changes.
For the overload that receives two string arguments, you can convert the strings into column objects using the com.snowflake.snowpark.functions.col function as a workaround.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1173¶
Message: The SQL-embedded code cannot be processed.
Category: Warning
Description¶
This issue appears when the SMA detects SQL-embedded code that cannot be processed. In that case, the SQL-embedded code cannot be converted to Snowflake.
Scenario¶
Input
Below is an example of SQL-embedded code that cannot be processed.
Output
The SMA adds the EWI SPRKSCL1173 to the output code to let you know that the SQL-embedded code cannot be processed.
Recommended fix
Make sure that the SQL-embedded code is a string with no interpolations, variables or string concatenations.
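For illustration (the session and table name are hypothetical), a plain string literal can be processed, while an interpolated one cannot:

```scala
// Cannot be processed: the query is built with string interpolation.
val tableName = "my_table"
val df1 = session.sql(s"SELECT * FROM $tableName")

// Can be processed: the query is a single string literal.
val df2 = session.sql("SELECT * FROM my_table")
```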
Additional recommendations¶
You can find more information about SQL-embedded code here.
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1163¶
Message: The element is not a literal and cannot be evaluated.
Category: Conversion error
Description¶
This issue appears when the element currently being processed is not a literal, so it cannot be evaluated by the SMA.
Scenario¶
Input
Below is an example where the element to be processed is not a literal and cannot be evaluated by the SMA.
Output
The SMA adds the EWI SPRKSCL1163 to the output code to let you know that the format_type parameter is not a literal and cannot be evaluated by the SMA.
Recommended fix
Make sure the value of the variable is valid in order to avoid unexpected behaviors.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1132¶
Message: org.apache.spark.sql.functions.grouping_id has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.grouping_id function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.grouping_id function, first used with multiple column names as arguments and then with column objects.
Output
The SMA adds the EWI SPRKSCL1132 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent grouping_id function that receives multiple column objects as arguments. For that reason, the Spark overload that receives multiple column objects as arguments is directly supported by Snowpark and does not require any changes.
For the overload that receives multiple string arguments, you can convert the strings into column objects using the com.snowflake.snowpark.functions.col function as a workaround.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1106¶
Warning
This issue code is now deprecated.
Message: The writer option is not supported.
Category: Conversion error
Description¶
This issue appears when the tool detects, in a writer statement, the use of an option that is not supported by Snowpark.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.DataFrameWriter.option method used to add options to a writer statement.
Output
The SMA adds the EWI SPRKSCL1106 to the output code to let you know that the option method is not supported by Snowpark.
Recommended fix
There is no recommended fix for this scenario.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1157¶
Message: org.apache.spark.sql.functions.kurtosis has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.kurtosis function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.kurtosis function that generates this EWI. In this example, the kurtosis function is used to calculate the kurtosis of the selected column.
Output
The SMA adds the EWI SPRKSCL1157 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent kurtosis function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1146¶
Message: org.apache.spark.sql.functions.tan has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.tan function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.tan function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1146 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent tan function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1117¶
Warning
This issue code has been deprecated since Spark Conversion Core 2.40.1
Message: org.apache.spark.sql.functions.translate has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.translate function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.translate function that generates this EWI. In this example, the translate function is used to replace the characters “a”, “e” and “o” in each word with “1”, “2” and “3”, respectively.
Output
The SMA adds the EWI SPRKSCL1117 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
As a workaround, you can convert the second and third arguments into column objects using the com.snowflake.snowpark.functions.lit function.
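A sketch of the workaround (df and the word column are hypothetical):

```scala
import com.snowflake.snowpark.functions.{translate, col, lit}

// Spark: df.select(translate(col("word"), "aeo", "123"))
// Snowpark workaround: wrap the matching and replacement strings with lit.
val result = df.select(translate(col("word"), lit("aeo"), lit("123")))
```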
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1123¶
Message: org.apache.spark.sql.functions.cos has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.cos function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.cos function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1123 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent cos function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1172¶
Message: Snowpark does not support StructField with the metadata parameter.
Category: Warning
Description¶
This issue appears when the SMA detects a use of org.apache.spark.sql.types.StructField.apply with org.apache.spark.sql.types.Metadata as a parameter. This is because Snowpark does not support the metadata parameter.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.types.StructField.apply function that generates this EWI. In this example, the apply function is used to generate an instance of StructField.
Output
The SMA adds the EWI SPRKSCL1172 to the output code to let you know that the metadata parameter is not supported by Snowflake.
Recommended fix
Snowpark has an equivalent com.snowflake.snowpark.types.StructField.apply function that receives three parameters. As a workaround, you can try to remove the metadata argument.
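A sketch of that workaround (the field name is hypothetical):

```scala
import com.snowflake.snowpark.types.{StructField, StringType}

// Spark: StructField("name", StringType, nullable = true, metadata)
// Snowpark workaround: drop the metadata argument and keep the
// three supported parameters.
val field = StructField("name", StringType, nullable = true)
```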
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1162¶
Note
This issue code is now deprecated.
Message: An error occurred when extracting the dbc files.
Category: Warning
Description¶
This issue appears when a dbc file cannot be extracted. This warning can be caused by one or more of the following reasons: the file is too large, inaccessible, read-only, etc.
Additional recommendations¶
As a workaround, check the size of the file in case it is too large to be processed. Also, verify that the tool can access the file in order to avoid any access issues.
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1133¶
Message: org.apache.spark.sql.functions.least has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.least function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.least function, first used with multiple column names as arguments and then with column objects.
Output
The SMA adds the EWI SPRKSCL1133 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent least function that receives multiple column objects as arguments. For that reason, the Spark overload that receives multiple column objects as arguments is directly supported by Snowpark and does not require any changes.
For the overload that receives multiple string arguments, you can convert the strings into column objects using the com.snowflake.snowpark.functions.col function as a workaround.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1107¶
Warning
This issue code is now deprecated.
Message: The writer save method is not supported.
Category: Conversion error
Description¶
This issue appears when the tool detects, in a writer statement, the use of the writer save method, which is not supported by Snowpark.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.DataFrameWriter.save method used to save the DataFrame content.
Output
The SMA adds the EWI SPRKSCL1107 to the output code to let you know that the save method is not supported by Snowpark.
Recommended fix
There is no recommended fix for this scenario.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1156¶
Message: org.apache.spark.sql.functions.degrees has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.degrees function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.degrees function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1156 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent degrees function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1127¶
Message: org.apache.spark.sql.functions.covar_samp has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.covar_samp function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.covar_samp function, first used with column names as the arguments and then with column objects.
Output
The SMA adds the EWI SPRKSCL1127 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent covar_samp function that receives two column objects as arguments. For that reason, the Spark overload that receives two column objects as arguments is directly supported by Snowpark and does not require any changes.
For the overload that receives two string arguments, you can convert the strings into column objects using the com.snowflake.snowpark.functions.col function as a workaround.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1113¶
Message: org.apache.spark.sql.functions.next_day has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.next_day function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.next_day function, first used with a string as the second argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1113 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent next_day function that receives two column objects as arguments. For that reason, the Spark overload that receives two column objects as arguments is directly supported by Snowpark and does not require any changes.
For the overload that receives a column object and a string, you can convert the string into a column object using the com.snowflake.snowpark.functions.lit function as a workaround.
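A sketch of that workaround (df and the date column are hypothetical):

```scala
import com.snowflake.snowpark.functions.{next_day, col, lit}

// Spark: df.select(next_day(col("date"), "Mon"))
// Snowpark workaround: convert the day-of-week string with lit.
val result = df.select(next_day(col("date"), lit("Mon")))
```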
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1002¶
Message: This code section has recovery from parsing errors *statement*
Category: Parsing error
Description¶
This issue appears when the SMA detects a statement that cannot be read or understood correctly in the code of a file; this is a parsing error. However, the SMA can recover from that parsing error and continue analyzing the rest of the code of the file. In this case, the SMA is able to process the code of the file without errors.
Scenario¶
Input
Below is an example of invalid Scala code from which the SMA can recover.
Output
The SMA adds the EWI SPRKSCL1002 to the output code to let you know that the code of the file has parsing errors; however, the SMA can recover from that error and continue analyzing the code of the file.
Recommended fix
Since the message identifies the error in the statement, you can try to identify the invalid syntax and remove it, or comment out the statement to avoid the parsing error.
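A contrived illustration of that approach; the surrounding statements and names are hypothetical:

```scala
// The malformed statement is commented out so that the rest of the
// file parses cleanly.
val rowCount = df.count()
// val broken = df.select(  // invalid syntax, commented out
val firstRows = df.limit(10)
```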
Additional recommendations¶
Check that the code of the file is valid Scala code.
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1142¶
Message: *spark element* is not defined
Category: Conversion error
Description¶
This issue appears when the SMA could not determine an appropriate mapping status for the given element. This means that the SMA does not yet know whether this element is supported by Snowpark or not. Please note that this is a generic error code used by the SMA for any undefined element.
Scenario¶
Input
Below is an example of a function for which the SMA could not determine an appropriate mapping status, and therefore it generated this EWI. In this case, you should assume that notDefinedFunction() is a valid Spark function and the code runs.
Output
The SMA adds the EWI SPRKSCL1142 to the output code to let you know that this element is not defined.
Recommended fix
To try to identify the problem, you can perform the following validations:
Check whether it is a valid Spark element.
Check that the element has the correct syntax and is spelled correctly.
Check that you are using a Spark version supported by the SMA.
If this is a valid Spark element, please report that you encountered a conversion error on that particular element using the Report an Issue option of the SMA and include any additional information that you think may be helpful.
Please note that if an element is not defined by the SMA, it does not necessarily mean that it is not supported by Snowpark. You should check the Snowpark Documentation to verify if an equivalent element exists.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1152¶
Message: org.apache.spark.sql.functions.variance has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.variance function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.variance function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1152 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent variance function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1103¶
This issue code is now deprecated.
Message: SparkBuilder method is not supported *method name*
Category: Conversion error
Description¶
This issue appears when the SMA detects a method in the SparkBuilder method chaining that is not supported by Snowflake. As a result, it could affect the migration of the Spark session statement.
The following SparkBuilder methods are not supported:
master
appName
enableHiveSupport
withExtensions
Scenario¶
Input
Below is an example of SparkBuilder method chaining in which many of the methods are not supported by Snowflake.
Output
The SMA adds the EWI SPRKSCL1103 to the output code to let you know that the master, appName and enableHiveSupport methods are not supported by Snowpark. This might affect the migration of the Spark session statement.
Recommended fix
To create the session, you must add the proper Snowflake Snowpark configuration.
In this example, a configs variable is used.
It is also recommended to use a configuration file (profile.properties) containing the connection information:
The session can then be created with Session.builder.configFile:
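A sketch of both approaches; the account values are placeholders:

```scala
import com.snowflake.snowpark.Session

// In-code configuration through a configs map.
val configs = Map(
  "URL" -> "https://<account>.snowflakecomputing.com",
  "USER" -> "<user>",
  "PASSWORD" -> "<password>",
  "ROLE" -> "<role>",
  "WAREHOUSE" -> "<warehouse>",
  "DB" -> "<database>",
  "SCHEMA" -> "<schema>"
)
val session = Session.builder.configs(configs).create

// Or load the same keys from a profile.properties file.
val sessionFromFile = Session.builder.configFile("profile.properties").create
```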
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1137¶
Message: org.apache.spark.sql.functions.sin has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.sin function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.sin function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1137 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent sin function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1166¶
Note
This issue code is deprecated.
Message: org.apache.spark.sql.DataFrameReader.format is not supported.
Category: Warning
Description¶
This issue appears when the org.apache.spark.sql.DataFrameReader.format has an argument that is not supported by Snowpark.
Scenarios¶
There are several scenarios depending on the type of format you are trying to load. It can be a supported or a non-supported format.
Scenario 1¶
Input
The tool analyzes the type of format you are trying to load. The supported formats are:
csv, json, orc, parquet, text
The below example shows how the tool transforms the format method when passing a csv value.
Output
The tool transforms the format method into a csv method call when the load function has one parameter.
Recommended fix
In this case, the tool does not show the EWI, meaning no fix is necessary.
Scenario 2¶
Input
The below example shows how the tool transforms the format method when passing a net.snowflake.spark.snowflake value.
Output
The tool shows the EWI SPRKSCL1166 indicating that the value net.snowflake.spark.snowflake is not supported.
Recommended fix
For the unsupported scenarios there is no specific fix, since it depends on the files being read.
Scenario 3¶
Input
The below example shows how the tool transforms the format method when passing a csv value, but using a variable instead.
Output
Since the tool cannot determine the value of the variable at runtime, it shows the EWI SPRKSCL1163 indicating that the value is not supported.
Recommended fix
As a workaround, you can check the value of the variable and add it as a string to the format call.
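A sketch of the workaround, assuming the variable was known to hold "csv" (the reader and path names are hypothetical):

```scala
// Before: the tool cannot resolve myFormat at analysis time
//   val myFormat = "csv"
//   val df = spark.read.format(myFormat).load(path)
// Workaround: pass the known value as a string literal instead
val df = spark.read.format("csv").load(path)
```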
Additional recommendations¶
The Snowpark location only accepts cloud locations using a Snowflake stage.
The methods supported by Snowpark can be found in the documentation.
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1118¶
Message: org.apache.spark.sql.functions.trunc has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.trunc function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.trunc function that generates this EWI.
Output
The SMA adds the EWI SPRKSCL1118 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
As a workaround, you can convert the second argument into a column object using the com.snowflake.snowpark.functions.lit function.
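A minimal sketch of this workaround, assuming a DataFrame df with a date column "date" (hypothetical names):

```scala
import com.snowflake.snowpark.functions.{col, lit, trunc}

// Spark: trunc(col("date"), "year") -- the second argument is a string
// Snowpark workaround: wrap the format string with lit
val result = df.select(trunc(col("date"), lit("year")))
```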
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1149¶
Message: org.apache.spark.sql.functions.toRadians has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.toRadians function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.toRadians function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1149 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
As a workaround, you can use the radians function. For the Spark overload that receives a string argument, you additionally have to convert the string into a column object using the com.snowflake.snowpark.functions.col function.
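A minimal sketch of this workaround, assuming a DataFrame df with a column "degrees_col" (hypothetical names):

```scala
import com.snowflake.snowpark.functions.{col, radians}

// Spark: toRadians("degrees_col") or toRadians(col("degrees_col"))
// Snowpark workaround: use radians and wrap string names with col
val result = df.select(radians(col("degrees_col")))
```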
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1159¶
Message: org.apache.spark.sql.functions.stddev_samp has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.stddev_samp function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.stddev_samp function that generates this EWI. In this example, the stddev_samp function is used to calculate the sample standard deviation of a selected column.
Output
The SMA adds the EWI SPRKSCL1159 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent stddev_samp function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1108¶
Note
This issue code is deprecated.
Message: org.apache.spark.sql.DataFrameReader.format is not supported.
Category: Warning
Description¶
This issue appears when the org.apache.spark.sql.DataFrameReader.format has an argument that is not supported by Snowpark.
Scenarios¶
There are several scenarios depending on the type of format you are trying to load. It can be a supported or a non-supported format.
Scenario 1¶
Input
The tool analyzes the type of format you are trying to load. The supported formats are:
csv, json, orc, parquet, text
The below example shows how the tool transforms the format method when passing a csv value.
Output
The tool transforms the format method into a csv method call when the load function has one parameter.
Recommended fix
In this case, the tool does not show the EWI, meaning no fix is necessary.
Scenario 2¶
Input
The below example shows how the tool transforms the format method when passing a net.snowflake.spark.snowflake value.
Output
The tool shows the EWI SPRKSCL1108 indicating that the value net.snowflake.spark.snowflake is not supported.
Recommended fix
For the unsupported scenarios there is no specific fix, since it depends on the files being read.
Scenario 3¶
Input
The below example shows how the tool transforms the format method when passing a csv value, but using a variable instead.
Output
Since the tool cannot determine the value of the variable at runtime, it shows the EWI SPRKSCL1163 indicating that the value is not supported.
Recommended fix
As a workaround, you can check the value of the variable and add it as a string to the format call.
Additional recommendations¶
The Snowpark location only accepts cloud locations using a Snowflake stage.
The methods supported by Snowpark can be found in the documentation.
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1128¶
Message: org.apache.spark.sql.functions.exp has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.exp function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.exp function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1128 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent exp function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1169¶
Message: *Spark element* is missing on the method chaining.
Category: Warning
Description¶
This issue appears when the SMA detects that a Spark element call is missing from the method chaining. The SMA needs that Spark element in order to analyze the statement.
Scenario¶
Input
Below is an example where the load function call is missing from the method chaining.
Output
The SMA adds the EWI SPRKSCL1169 to the output code to let you know that the load function call is missing from the method chaining and the SMA cannot analyze the statement.
Recommended fix
Make sure that all the function calls of the method chaining are within the same statement.
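A sketch of the fix, using a hypothetical csv reader chain (the reader, format value and path are placeholders):

```scala
// Problematic: the chain is split across statements, so the load
// call is not visible to the SMA when it analyzes the reader
//   val reader = spark.read.format("csv")
//   val df = reader.load(path)

// Preferred: keep the whole method chaining in a single statement
val df = spark.read.format("csv").load(path)
```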
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1138¶
Message: org.apache.spark.sql.functions.sinh has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.sinh function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.sinh function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1138 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent sinh function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1129¶
Message: org.apache.spark.sql.functions.floor has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.floor function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.floor function, first used with a column name as an argument, then with a column object and finally with two column objects.
Output
The SMA adds the EWI SPRKSCL1129 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent floor function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
For the overload that receives a column object and a scale, you can use the callBuiltin function to invoke the Snowflake built-in FLOOR function. To use it, pass the string "floor" as the first argument, the column as the second argument and the scale as the third argument.
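A minimal sketch of both cases, assuming a DataFrame df with a numeric column "value" (hypothetical names):

```scala
import com.snowflake.snowpark.functions.{callBuiltin, col, floor, lit}

// Single-column overload: supported directly
val f1 = df.select(floor(col("value")))

// (column, scale) overload: invoke the Snowflake builtin FLOOR
// with the scale as the third argument
val f2 = df.select(callBuiltin("floor", col("value"), lit(1)))
```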
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1168¶
Message: *Spark element* with argument(s) value(s) *given arguments* is not supported.
Category: Warning
Description¶
This issue appears when the SMA detects that the Spark element with the given parameters is not supported.
Scenario¶
Input
Below is an example of a Spark element whose parameter is not supported.
Output
The SMA adds the EWI SPRKSCL1168 to the output code to let you know that the Spark element with the given parameter is not supported.
Recommended fix
There is no specific fix for this scenario.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1139¶
Message: org.apache.spark.sql.functions.sqrt has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.sqrt function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.sqrt function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1139 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent sqrt function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1119¶
Message: org.apache.spark.sql.Column.endsWith has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.Column.endsWith function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.Column.endsWith function, first used with a literal string argument and then with a column object argument.
Output
The SMA adds the EWI SPRKSCL1119 to the output code to let you know that this function is not directly supported by Snowpark, but it has a workaround.
Recommended fix
As a workaround, you can use the com.snowflake.snowpark.functions.endswith function, where the first argument is the column whose values will be checked and the second argument is the suffix to check against the column values. Please note that if the argument of Spark's endsWith function is a literal string, you should convert it into a column object using the com.snowflake.snowpark.functions.lit function.
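A minimal sketch of this workaround, assuming a DataFrame df with a string column "name" and a literal suffix (hypothetical names):

```scala
import com.snowflake.snowpark.functions.{col, endswith, lit}

// Spark: col("name").endsWith(".com")
// Snowpark workaround: endswith(column, suffix); wrap literals with lit
val result = df.select(endswith(col("name"), lit(".com")))
```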
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1148¶
Message: org.apache.spark.sql.functions.toDegrees has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.toDegrees function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.toDegrees function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1148 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
As a workaround, you can use the degrees function. For the Spark overload that receives a string argument, you additionally have to convert the string into a column object using the com.snowflake.snowpark.functions.col function.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1158¶
Message: org.apache.spark.sql.functions.skewness has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.skewness function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.skewness function that generates this EWI. In this example, the skewness function is used to calculate the skewness of a selected column.
Output
The SMA adds the EWI SPRKSCL1158 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent skew function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
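A minimal sketch of this workaround, assuming a DataFrame df with a numeric column "value" (hypothetical names):

```scala
import com.snowflake.snowpark.functions.{col, skew}

// Spark: skewness("value") or skewness(col("value"))
// Snowpark workaround: use skew and wrap string names with col
val result = df.select(skew(col("value")))
```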
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1109¶
Note
This issue code is deprecated.
Message: The parameter is not defined for org.apache.spark.sql.DataFrameReader.option
Category: Warning
Description¶
This issue appears when the SMA detects that the given parameter of org.apache.spark.sql.DataFrameReader.option is not defined.
Scenario¶
Input
Below is an example of an undefined parameter for the org.apache.spark.sql.DataFrameReader.option function.
Output
The SMA adds the EWI SPRKSCL1109 to the output code to let you know that the parameter given to the org.apache.spark.sql.DataFrameReader.option function is not defined.
Recommended fix
Check the Snowpark documentation for reader format options here, in order to identify the defined options.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1114¶
Message: org.apache.spark.sql.functions.repeat has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.repeat function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.repeat function that generates this EWI.
Output
The SMA adds the EWI SPRKSCL1114 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
As a workaround, you can convert the second argument into a column object using the com.snowflake.snowpark.functions.lit function.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1145¶
Message: org.apache.spark.sql.functions.sumDistinct has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.sumDistinct function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.sumDistinct function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1145 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
As a workaround, you can use the sum_distinct function. For the Spark overload that receives a string argument, you additionally have to convert the string into a column object using the com.snowflake.snowpark.functions.col function.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1171¶
Message: Snowpark does not support split functions with more than two parameters or containing a regex pattern. See documentation for more info.
Category: Warning
Description¶
This issue appears when the SMA detects that org.apache.spark.sql.functions.split has more than two parameters or contains a regex pattern.
Scenarios¶
The split function is used to separate the given column around matches of the given pattern. This Spark function has three overloads.
Scenario 1¶
Input
Below is an example of the org.apache.spark.sql.functions.split function that generates this EWI. In this example, the split function has two parameters and the second argument is a string, not a regex pattern.
Output
The SMA adds the EWI SPRKSCL1171 to the output code to let you know that this function is not fully supported by Snowpark.
Recommended fix
Snowpark has an equivalent split function that receives a column object as the second argument. For the Spark overload whose second argument is a string that is not a regex pattern, you can convert the string into a column object using the com.snowflake.snowpark.functions.lit function as a workaround.
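A minimal sketch of this workaround, assuming a DataFrame df with a string column "fullname" split on a plain space (hypothetical names):

```scala
import com.snowflake.snowpark.functions.{col, lit, split}

// Spark: split(col("fullname"), " ")  -- plain string, not a regex
// Snowpark workaround: wrap the delimiter string with lit
val result = df.select(split(col("fullname"), lit(" ")))
```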
Scenario 2¶
Input
Below is an example of the org.apache.spark.sql.functions.split function that generates this EWI. In this example, the split function has two parameters and the second argument is a regex pattern.
Output
The SMA adds the EWI SPRKSCL1171 to the output code to let you know that this function is not fully supported by Snowpark, because regex patterns are not supported by Snowflake.
Recommended fix
Since Snowflake does not support regex patterns, try to replace the pattern with a non-regex pattern string.
Scenario 3¶
Input
Below is an example of the org.apache.spark.sql.functions.split function that generates this EWI. In this example, the split function has more than two parameters.
Output
The SMA adds the EWI SPRKSCL1171 to the output code to let you know that this function is not fully supported by Snowpark, because Snowflake does not have a split function with more than two parameters.
Recommended fix
Since Snowflake does not support the split function with more than two parameters, try to use the split function supported by Snowflake.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1120¶
Message: org.apache.spark.sql.functions.asin has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.asin function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.asin function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1120 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent asin function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1130¶
Message: org.apache.spark.sql.functions.greatest has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.greatest function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.greatest function, first used with multiple column names as arguments and then with multiple column objects.
Output
The SMA adds the EWI SPRKSCL1130 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent greatest function that receives multiple column objects as arguments. For that reason, the Spark overload that receives column objects as arguments is directly supported by Snowpark and does not require any changes.
For the overload that receives multiple string arguments, you can convert the strings into column objects using the com.snowflake.snowpark.functions.col function as a workaround.
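A minimal sketch of this workaround, assuming a DataFrame df with columns "a", "b" and "c" (hypothetical names):

```scala
import com.snowflake.snowpark.functions.{col, greatest}

// Spark string overload: greatest("a", "b", "c")
// Snowpark workaround: wrap each column name with col
val result = df.select(greatest(col("a"), col("b"), col("c")))
```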
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1161¶
Message: Failed to add dependencies.
Category: Conversion error
Description¶
This issue occurs when the SMA detects a Spark version in the project configuration file that is not supported by the SMA. Because of that, the SMA could not add the Snowpark and Snowpark Extensions dependencies to the corresponding project configuration file. If the Snowpark dependencies are not added, the migrated code will not compile.
Scenarios¶
There are three possible scenarios: sbt, gradle and pom.xml. The SMA tries to process the project configuration file by removing the Spark dependencies and adding the Snowpark and Snowpark Extensions dependencies.
Scenario 1¶
Input
Below is an example of the dependencies section of a sbt project configuration file.
Output
The SMA adds the EWI SPRKSCL1161 to the issues inventory since the Spark version is not supported and keeps the output the same.
Recommended fix
Manually remove the Spark dependencies and add the Snowpark and Snowpark Extensions dependencies to the sbt project configuration file.
Make sure to use the Snowpark version that best fits your project's requirements.
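A build.sbt sketch of the manual fix; the version numbers are placeholders and the Snowpark Extensions coordinates depend on the published artifact you choose:

```scala
// build.sbt fragment: drop the Spark dependency, add Snowpark
libraryDependencies ++= Seq(
  // removed: "org.apache.spark" %% "spark-sql" % "3.1.2",
  "com.snowflake" % "snowpark" % "<snowpark-version>"
  // plus the Snowpark Extensions artifact for your build
  // (use the coordinates of the version you select)
)
```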
Scenario 2¶
Input
Below is an example of the dependencies section of a gradle project configuration file.
Output
The SMA adds the EWI SPRKSCL1161 to the issues inventory since the Spark version is not supported and keeps the output the same.
Recommended fix
Manually remove the Spark dependencies and add the Snowpark and Snowpark Extensions dependencies to the gradle project configuration file.
Make sure the version of the dependencies matches your project's needs.
Scenario 3¶
Input
Below is an example of the dependencies section of a pom.xml project configuration file.
Output
The SMA adds the EWI SPRKSCL1161 to the issues inventory since the Spark version is not supported and keeps the output the same.
Recommended fix
Manually remove the Spark dependencies and add the Snowpark and Snowpark Extensions dependencies to the pom.xml project configuration file.
Make sure the version of the dependencies matches your project's needs.
Additional recommendations¶
Make sure the input has a project configuration file:
build.sbt
build.gradle
pom.xml
The Spark version supported by the SMA is 2.12:3.1.2 (Scala 2.12, Spark 3.1.2).
You can check the latest Snowpark version here.
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1155¶
Warning
This issue code has been deprecated since Spark Conversion Core Version 4.3.2
Message: org.apache.spark.sql.functions.countDistinct has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.countDistinct function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.countDistinct function, first used with column names as arguments and then with column objects.
Output
The SMA adds the EWI SPRKSCL1155 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
As a workaround, you can use the count_distinct function. For the Spark overload that receives string arguments, you additionally have to convert the strings into column objects using the com.snowflake.snowpark.functions.col function.
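A minimal sketch of this workaround, assuming a DataFrame df with columns "a" and "b" (hypothetical names):

```scala
import com.snowflake.snowpark.functions.{col, count_distinct}

// Spark: countDistinct("a", "b")
// Snowpark workaround: use count_distinct and wrap names with col
val result = df.select(count_distinct(col("a"), col("b")))
```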
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1104¶
This issue code is deprecated.
Message: The Spark Session builder option is not supported.
Category: Conversion error
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.SparkSession.Builder.config function, which sets an option of the Spark Session that is not supported by Snowpark.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.SparkSession.Builder.config function used to set an option in the Spark Session.
Output
The SMA adds the EWI SPRKSCL1104 to the output code to let you know that the config method is not supported by Snowpark. It is therefore not possible to set options in the Spark Session via the config function, and this might affect the migration of the Spark Session statement.
Recommended fix
To create the session, you need to add the proper Snowflake Snowpark configuration.
In this example, a configs variable is used.
It is also recommended to use a configuration file (profile.properties) containing the connection information:
The session can then be created with Session.builder.configFile:
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1124¶
Message: org.apache.spark.sql.functions.cosh has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.cosh function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.cosh function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1124 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent cosh function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1175¶
Message: The two-parameter udf function is not supported in Snowpark. It should be converted into a single-parameter udf function. Please check the documentation to learn how to manually modify the code to make it work in Snowpark.
Category: Conversion error
Description¶
This issue appears when the SMA detects a use of the two-parameter org.apache.spark.sql.functions.udf function in the source code. Snowpark does not have an equivalent two-parameter udf function, so the output code might not compile.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.udf function that generates this EWI. In this example, the udf function has two parameters.
Output
The SMA adds the EWI SPRKSCL1175 to the output code to let you know that the udf function is not supported, because it has two parameters.
Recommended fix
Snowpark only supports the single-parameter udf function (without the return type parameter), so you should convert your two-parameter udf function into a single-parameter udf function in order to make it work in Snowpark.
For example, for the code shown above, you would manually convert it into this:
Please note that there are some caveats about creating udfs in Snowpark that might require you to make some additional manual changes to your code. Please check the recommendations related to creating single-parameter udf functions in Snowpark for more details.
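A sketch of the conversion, assuming a hypothetical lambda and column names (the names and IntegerType are illustrative, not from the source):

```scala
import com.snowflake.snowpark.functions.{col, udf}

// Spark (input) — the two-parameter udf overload triggers the EWI:
//   val addOne = udf((x: Int) => x + 1, IntegerType)
// Snowpark (output) — drop the return-type parameter; Snowpark infers it:
val addOne = udf((x: Int) => x + 1)
val result = df.select(addOne(col("value")))
```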
Additional recommendations¶
To learn more about how to create user-defined functions in Snowpark, please refer to the following documentation: Creating User-Defined Functions (UDFs) for DataFrames in Scala
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1001¶
Message: This code section has parsing errors. The parsing error was found at: line *line number*, column *column number*. When trying to parse *statement*. This file was not converted, so it is expected to still have references to the Spark API.
Category: Parsing error.
Description¶
This issue appears when the SMA detects a statement that cannot be correctly read or understood in the code of a file; this is a parsing error. This issue appears whenever a file has one or more parsing errors.
Scenario¶
Input
Below is an example of invalid Scala code.
Output
The SMA adds the EWI SPRKSCL1001 to the output code to let you know that the code of the file has parsing errors. Therefore, the SMA is not able to process a file with this error.
Recommended fix
Since the message identifies the statement with the error, you can try to identify the invalid syntax and remove it, or comment out the statement, to avoid the parsing error.
Additional recommendations¶
Make sure that the code of the file is valid Scala code.
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1141¶
Message: org.apache.spark.sql.functions.stddev_pop has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.stddev_pop function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.stddev_pop function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1141 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent stddev_pop function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
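A sketch of the workaround, assuming a hypothetical DataFrame df with a numeric column named "value" (names are illustrative):

```scala
import com.snowflake.snowpark.functions.{col, stddev_pop}

// Spark (input) — the string overload triggers the EWI:
//   df.select(stddev_pop("value"))
// Snowpark (output) — convert the column name into a column object:
val result = df.select(stddev_pop(col("value")))
```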
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1110¶
Note
This issue code is deprecated.
Message: Reader method not supported *method name*.
Category: Warning
Description¶
This issue appears when the SMA detects a method that is not supported by Snowflake in a DataFrameReader method chaining. This might affect the migration of the reader statement.
Scenario¶
Input
Below is an example of a DataFrameReader method chaining where the load method is not supported by Snowflake.
Output
The SMA adds the EWI SPRKSCL1110 to the output code to let you know that the load method is not supported by Snowpark, which might affect the migration of the reader statement.
Recommended fix
Check the Snowpark documentation for the reader here in order to learn which methods are supported by Snowflake.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1100¶
This issue code has been deprecated since Spark Conversion Core 2.3.22
Message: Repartition is not supported.
Category: Parsing error.
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.DataFrame.repartition function, which is not supported by Snowpark. Snowflake manages the storage and the workload on the clusters, making the repartition operation inapplicable.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.DataFrame.repartition function used to return a new DataFrame partitioned by the given partitioning expressions.
Output
The SMA adds the EWI SPRKSCL1100 to the output code to let you know that this function is not supported by Snowpark.
Recommended fix
Snowflake manages the storage and the workload on the clusters, which makes the repartition operation inapplicable. This means that using repartition before a join is not required at all.
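A sketch of the fix, assuming hypothetical DataFrames df and otherDf joined on a column "key" (names are illustrative):

```scala
// Spark (input) — repartition before the join triggers the EWI:
//   val joined = df.repartition(col("key")).join(otherDf, Seq("key"))
// Snowpark (output) — drop repartition; Snowflake manages data distribution:
val joined = df.join(otherDf, Seq("key"))
```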
Additional recommendations¶
Snowflake's architecture guide provides insight into Snowflake storage management.
The Snowpark DataFrame reference can be useful for adapting a particular scenario without the need for repartition.
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1151¶
Message: org.apache.spark.sql.functions.var_samp has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.var_samp function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.var_samp function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1151 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent var_samp function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
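A sketch of the workaround, with a hypothetical DataFrame df and column "value" (names are illustrative):

```scala
import com.snowflake.snowpark.functions.{col, var_samp}

// Spark (input) — string overload: df.select(var_samp("value"))
// Snowpark (output) — column-object overload:
val result = df.select(var_samp(col("value")))
```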
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1165¶
Message: The format of the reader on the DataFrameReader cannot be defined
Category: Warning
Description¶
This issue appears when the SMA detects that the format of the reader in a DataFrameReader method chaining is not one of the following formats supported by Snowpark: avro, csv, json, orc, parquet and xml. Therefore, the SMA cannot determine whether the setting options are defined.
Scenario¶
Input
Below is an example of a DataFrameReader method chaining where the SMA cannot determine the format of the reader.
Output
The SMA adds the EWI SPRKSCL1165 to the output code to let you know that the format of the reader cannot be determined in the given DataFrameReader method chaining.
Recommended fix
Check the Snowpark documentation here to get more information about the format of the reader.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1134¶
Message: org.apache.spark.sql.functions.log has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.log function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.log function that generates this EWI.
Output
The SMA adds the EWI SPRKSCL1134 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Below are the different workarounds for all the overloads of the log function.
1. def log(base: Double, columnName: String): Column
You can convert the base into a column object using the com.snowflake.snowpark.functions.lit function and convert the column name into a column object using the com.snowflake.snowpark.functions.col function.
2. def log(base: Double, a: Column): Column
You can convert the base into a column object using the com.snowflake.snowpark.functions.lit function.
3. def log(columnName: String): Column
You can pass lit(Math.E) as the first argument and convert the column name into a column object using the com.snowflake.snowpark.functions.col function and pass it as the second argument.
4. def log(e: Column): Column
You can pass lit(Math.E) as the first argument and the column object as the second argument.
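The four workarounds above can be sketched as follows, assuming a hypothetical DataFrame df with a column "value" (names are illustrative):

```scala
import com.snowflake.snowpark.functions.{col, lit, log}

// 1. Spark log(10.0, "value")      -> wrap both arguments:
val a = df.select(log(lit(10.0), col("value")))
// 2. Spark log(10.0, col("value")) -> wrap only the base:
val b = df.select(log(lit(10.0), col("value")))
// 3. Spark log("value")            -> natural log: pass lit(Math.E) as the base:
val c = df.select(log(lit(Math.E), col("value")))
// 4. Spark log(col("value"))       -> pass lit(Math.E) as the base:
val d = df.select(log(lit(Math.E), col("value")))
```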
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1125¶
Warning
This issue code is deprecated since Spark Conversion Core 2.9.0
Message: org.apache.spark.sql.functions.count has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.count function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.count function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1125 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent count function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
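A sketch of the workaround, with a hypothetical DataFrame df and column "value" (names are illustrative):

```scala
import com.snowflake.snowpark.functions.{col, count}

// Spark (input) — string overload: df.select(count("value"))
// Snowpark (output) — column-object overload:
val result = df.select(count(col("value")))
```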
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1174¶
Message: The single-parameter udf function is supported in Snowpark but it might require manual intervention. Please check the documentation to learn how to manually modify the code to make it work in Snowpark.
Category: Warning
Description¶
This issue appears when the SMA detects a use of the single-parameter org.apache.spark.sql.functions.udf function in the code, which might require manual intervention.
The Snowpark API provides an equivalent com.snowflake.snowpark.functions.udf function that allows you to create a user-defined function from a lambda or function in Scala. However, there are some caveats about creating udfs in Snowpark that might require you to make some manual changes to your code in order to make it work properly.
Scenarios¶
The Snowpark udf function should work as intended for a wide range of cases without requiring manual intervention. However, there are some scenarios that require you to manually modify your code in order to make it work in Snowpark. Some of those scenarios are listed below:
Scenario 1¶
Input
Below is an example of creating UDFs in an object with the App trait.
Scala's App trait simplifies creating executable programs by providing a main method that automatically runs the code within the object definition. Extending App delays the initialization of the fields until the main method is executed, which can affect the UDF definitions if they rely on initialized fields. This means that if an object extends App and the udf references an object field, the udf definition uploaded to Snowflake will not include the initialized value of the field. This can result in null values being returned by the udf.
For example, in the following code the variable myValue will resolve to null in the udf definition:
Output
The SMA adds the EWI SPRKSCL1174 to the output code to let you know that the single-parameter udf function is supported in Snowpark but it requires manual intervention.
Recommended fix
To avoid this issue, it is recommended not to extend App and to implement a separate main method for your code. This ensures that object fields are initialized before udf definitions are created and uploaded to Snowflake.
For more details about this topic, see Caveat About Creating UDFs in an Object With the App Trait.
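The fix can be sketched as follows; the object, field, and column names are hypothetical:

```scala
import com.snowflake.snowpark.functions.udf

// Before (problematic) — extending App delays field initialization, so
// myValue resolves to null inside the uploaded udf definition:
//   object Main extends App {
//     val myValue = 10
//     val myUdf = udf((x: Int) => x + myValue)
//   }

// After (recommended) — a separate main method initializes values first:
object Main {
  def main(args: Array[String]): Unit = {
    val myValue = 10
    val myUdf = udf((x: Int) => x + myValue)
  }
}
```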
Scenario 2¶
Input
Below is an example of creating UDFs in Jupyter Notebooks.
Output
The SMA adds the EWI SPRKSCL1174 to the output code to let you know that the single-parameter udf function is supported in Snowpark but it requires manual intervention.
Recommended fix
To create a udf in a Jupyter Notebook, you should define the implementation of your function in a class that extends Serializable. For example, you should manually convert it into this:
For more details about how to create UDFs in Jupyter Notebooks, see Creating UDFs in Jupyter Notebooks.
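A sketch of this pattern, with hypothetical class and function names (not from the source):

```scala
import com.snowflake.snowpark.functions.{col, udf}

// Define the implementation in a class that extends Serializable so it can
// be serialized and uploaded to Snowflake from the notebook:
class MyFuncs extends Serializable {
  val doubleValue = (x: Int) => x * 2
}

val myFuncs = new MyFuncs()
val doubleUdf = udf(myFuncs.doubleValue)
val result = df.select(doubleUdf(col("value")))
```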
Additional recommendations¶
To learn more about how to create user-defined functions in Snowpark, please refer to the following documentation: Creating User-Defined Functions (UDFs) for DataFrames in Scala
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1000¶
Message: Source project spark-core version is *version number*, the spark-core version supported by snowpark is 2.12:3.1.2 so there may be functional differences between the existing mappings
Category: Warning
Description¶
This issue appears when the SMA detects a version of spark-core that is not supported by the SMA. Therefore, there may be functional differences between the existing mappings, and the output might have unexpected behavior.
Additional recommendations¶
The spark-core version supported by the SMA is 2.12:3.1.2. Consider changing the version of your source code.
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1140¶
Message: org.apache.spark.sql.functions.stddev has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.stddev function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.stddev function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1140 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent stddev function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
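A sketch of the workaround, with a hypothetical DataFrame df and column "value" (names are illustrative):

```scala
import com.snowflake.snowpark.functions.{col, stddev}

// Spark (input) — string overload: df.select(stddev("value"))
// Snowpark (output) — column-object overload:
val result = df.select(stddev(col("value")))
```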
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1111¶
Note
This issue code is deprecated.
Message: CreateDecimalType is not supported.
Category: Conversion error
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.types.DataTypes.CreateDecimalType function.
Scenario¶
Input
Below is an example of a use of the org.apache.spark.sql.types.DataTypes.CreateDecimalType function.
Output
The SMA adds the EWI SPRKSCL1111 to the output code to let you know that CreateDecimalType function is not supported by Snowpark.
Recommended fix
There is no recommended fix yet.
SPRKSCL1104¶
Message: The Spark Session builder option is not supported.
Category: Conversion error
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.SparkSession.Builder.config function, which sets an option of the Spark Session that is not supported by Snowpark.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.SparkSession.Builder.config function used to set an option in the Spark Session.
Output
The SMA adds the EWI SPRKSCL1104 to the output code to let you know that the config method is not supported by Snowpark. Therefore, it is not possible to set options in the Spark Session via the config function, which might affect the migration of the Spark Session statement.
Recommended fix
To create the session, you have to add the proper Snowflake Snowpark configuration.
In this example, a configs variable is used.
It is also recommended to use a configuration file (profile.properties) containing the connection information:
And with Session.builder.configFile, the session can be created:
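A sketch of both approaches; the placeholder connection values are illustrative:

```scala
import com.snowflake.snowpark.Session

// Option 1: build the session from a configuration map
val configs = Map(
  "URL" -> "https://<account>.snowflakecomputing.com",
  "USER" -> "<user>",
  "PASSWORD" -> "<password>",
  "ROLE" -> "<role>",
  "WAREHOUSE" -> "<warehouse>",
  "DB" -> "<database>",
  "SCHEMA" -> "<schema>"
)
val session = Session.builder.configs(configs).create

// Option 2: read the same settings from a profile.properties file
val sessionFromFile = Session.builder.configFile("profile.properties").create
```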
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1101¶
This issue code has been deprecated since Spark Conversion Core 2.3.22
Message: Broadcast is not supported
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.broadcast function, which is not supported by Snowpark. This function is not supported because Snowflake does not support broadcast variables.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.broadcast function used to create a broadcast object to use on each Spark cluster:
Output
The SMA adds the EWI SPRKSCL1101 to the output code to let you know that this function is not supported by Snowpark.
Recommended fix
Snowflake manages the storage and the workload on the clusters, which makes broadcast objects inapplicable. This means that using broadcast may not be required at all, but each case requires further analysis.
The recommended approach is to replace a Spark DataFrame broadcast with a regular Snowpark DataFrame, or to use a DataFrame method such as join.
For the proposed input, the fix is to adapt the join to use the DataFrame collegeDF directly, without using broadcast on the DataFrame.
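The fix described above, sketched with a hypothetical studentDf and join column (collegeDF comes from the example; the other names are illustrative):

```scala
// Spark (input) — broadcast triggers the EWI:
//   val joined = studentDf.join(broadcast(collegeDF), Seq("college_id"))
// Snowpark (output) — use the DataFrame directly; no broadcast object needed:
val joined = studentDf.join(collegeDF, Seq("college_id"))
```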
Additional recommendations¶
Snowflake's architecture guide provides insight into Snowflake storage management.
The Snowpark DataFrame reference can be useful for adapting a particular broadcast scenario.
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1150¶
Message: org.apache.spark.sql.functions.var_pop has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.var_pop function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.var_pop function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1150 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent var_pop function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
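A sketch of the workaround, with a hypothetical DataFrame df and column "value" (names are illustrative):

```scala
import com.snowflake.snowpark.functions.{col, var_pop}

// Spark (input) — string overload: df.select(var_pop("value"))
// Snowpark (output) — column-object overload:
val result = df.select(var_pop(col("value")))
```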
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1164¶
Note
This issue code is deprecated.
Message: The parameter is not defined for org.apache.spark.sql.DataFrameReader.option
Category: Warning
Description¶
This issue appears when the SMA detects that the given parameter of org.apache.spark.sql.DataFrameReader.option is not defined.
Scenario¶
Input
Below is an example of an undefined parameter for the org.apache.spark.sql.DataFrameReader.option function.
Output
The SMA adds the EWI SPRKSCL1164 to the output code to let you know that the given parameter of the org.apache.spark.sql.DataFrameReader.option function is not defined.
Recommended fix
Check the Snowpark documentation for reader format options here in order to identify the defined options.
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1135¶
Warning
This issue code is deprecated since Spark Conversion Core 4.3.2
Message: org.apache.spark.sql.functions.mean has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.mean function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.mean function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1135 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent mean function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
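A sketch of the workaround, with a hypothetical DataFrame df and column "value" (names are illustrative):

```scala
import com.snowflake.snowpark.functions.{col, mean}

// Spark (input) — string overload: df.select(mean("value"))
// Snowpark (output) — column-object overload:
val result = df.select(mean(col("value")))
```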
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1115¶
Warning
This issue code has been deprecated since Spark Conversion Core Version 4.6.0
Message: org.apache.spark.sql.functions.round has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.round function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.round function that generates this EWI.
Output
The SMA adds the EWI SPRKSCL1115 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent round function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a column object and a scale, you can convert the scale into a column object using the com.snowflake.snowpark.functions.lit function as a workaround.
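A sketch of the workaround, with a hypothetical DataFrame df and column "value" (names are illustrative):

```scala
import com.snowflake.snowpark.functions.{col, lit, round}

// Spark (input) — the scale is an Int literal:
//   df.select(round(col("value"), 2))
// Snowpark (output) — convert the scale into a column object with lit():
val result = df.select(round(col("value"), lit(2)))
```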
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1144¶
Message: Symbol table could not be loaded
Category: Parsing error
Description¶
This issue appears when there is a critical error in the SMA execution process. Since the symbol table cannot be loaded, the SMA cannot start the assessment or conversion process.
Additional recommendations¶
This is unlikely to be an error in the source code itself; rather, it is an error in how the SMA processes the source code. The best resolution would be to post an issue in the SMA.
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1170¶
Note
This issue code is deprecated.
Message: The sparkConfig member key is not supported with the platform-specific key.
Category: Conversion error
Description¶
If you are using an older version, please update to the latest version.
Additional recommendations¶
Update your application to the latest version.
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1121¶
Message: org.apache.spark.sql.functions.atan has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.atan function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.atan function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1121 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent atan function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
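A sketch of the workaround, with a hypothetical DataFrame df and column "value" (names are illustrative):

```scala
import com.snowflake.snowpark.functions.{atan, col}

// Spark (input) — string overload: df.select(atan("value"))
// Snowpark (output) — column-object overload:
val result = df.select(atan(col("value")))
```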
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1131¶
Message: org.apache.spark.sql.functions.grouping has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.grouping function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.grouping function, first used with a column name as an argument and then with a column object.
Output
The SMA adds the EWI SPRKSCL1131 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent grouping function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
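A sketch of the workaround, with a hypothetical DataFrame df and column "value" (names are illustrative):

```scala
import com.snowflake.snowpark.functions.{col, grouping}

// Spark (input) — string overload: df.cube("value").agg(grouping("value"))
// Snowpark (output) — column-object overload:
val result = df.cube(col("value")).agg(grouping(col("value")))
```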
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1160¶
Note
This issue code has been deprecated since Spark Conversion Core 4.1.0
Message: org.apache.spark.sql.functions.sum has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.sum function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.sum function that generates this EWI. In this example, the sum function is used to calculate the sum of a selected column.
Output
The SMA adds the EWI SPRKSCL1160 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent sum function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
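A sketch of the workaround, with a hypothetical DataFrame df and column "value" (names are illustrative):

```scala
import com.snowflake.snowpark.functions.{col, sum}

// Spark (input) — string overload: df.select(sum("value"))
// Snowpark (output) — column-object overload:
val result = df.select(sum(col("value")))
```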
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1154¶
Message: org.apache.spark.sql.functions.ceil has a workaround, see documentation for more info
Category: Warning
Description¶
This issue appears when the SMA detects a use of the org.apache.spark.sql.functions.ceil function, which has a workaround.
Scenario¶
Input
Below is an example of the org.apache.spark.sql.functions.ceil function, first used with a column name as an argument, then with a column object and finally with a column object and a scale.
Output
The SMA adds the EWI SPRKSCL1154 to the output code to let you know that this function is not fully supported by Snowpark, but it has a workaround.
Recommended fix
Snowpark has an equivalent ceil function that receives a column object as an argument. For that reason, the Spark overload that receives a column object as an argument is directly supported by Snowpark and does not require any changes.
For the overload that receives a string argument, you can convert the string into a column object using the com.snowflake.snowpark.functions.col function as a workaround.
For the overload that receives a column object and a scale, you can use the callBuiltin function to invoke the Snowflake built-in CEIL function. To use it, you should pass the string "ceil" as the first argument, the column as the second argument and the scale as the third argument.
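A sketch of the scale workaround, with a hypothetical DataFrame df and column "value" (names are illustrative):

```scala
import com.snowflake.snowpark.functions.{callBuiltin, col, lit}

// Spark (input) — ceil with a scale has no direct Snowpark overload:
//   df.select(ceil(col("value"), 2))
// Snowpark (output) — invoke the Snowflake built-in CEIL via callBuiltin:
val result = df.select(callBuiltin("ceil", col("value"), lit(2)))
```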
Additional recommendations¶
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.
SPRKSCL1105¶
This issue code is deprecated.
Message: The writer format value is not supported.
Category: Conversion error
Description¶
This issue appears when the org.apache.spark.sql.DataFrameWriter.format has an argument that is not supported by Snowpark.
Scenarios¶
There are some scenarios depending on the type of format you are trying to save, which can be a supported or unsupported format.
Scenario 1¶
Input
The tool analyzes the type of format you are trying to save. The supported formats are:
csv, json, orc, parquet, text
Output
The tool transforms the format method into a csv method call when the save function has one parameter.
Recommended fix
In this case, the tool does not show the EWI, meaning that no fix is needed.
Scenario 2¶
Input
The below example shows how the tool transforms the format method when passing a net.snowflake.spark.snowflake value.
Output
The tool shows the EWI SPRKSCL1105 indicating that the value net.snowflake.spark.snowflake is not supported.
Recommended fix
For the unsupported scenarios there is no specific fix, since it depends on the files that are trying to be read.
Scenario 3¶
Input
The below example shows how the tool transforms the format method when passing a csv, but using a variable instead.
Output
Since the tool cannot determine the value of the variable at runtime, it shows the EWI SPRKSCL1163 indicating that the value is not supported.
Recommended fix
As a workaround, you can check the value of the variable and add it as a string to the format call.
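A sketch of the Scenario 3 workaround, with a hypothetical variable and stage path (names are illustrative):

```scala
// Spark (input) — the format comes from a variable, so the SMA cannot
// resolve it:
//   val myFormat = "csv"
//   df.write.format(myFormat).save("/path/out")
// Workaround — once you confirm the variable holds "csv", call the csv
// method directly; Snowpark writes to a stage location:
df.write.csv("@my_stage/out")
```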
Additional recommendations¶
The Snowpark location only accepts cloud locations using a Snowflake stage.
The documentation of the methods supported by Snowpark can be found in the documentation.
For more support, you can email us at sma-support@snowflake.com or post an issue in the SMA.