SnowConvert: Teradata Built-in Functions

Note

This page lists only the functions that SnowConvert already transforms. If a function from the Teradata documentation is not listed here, it should be considered unsupported.

Note

Some Teradata functions do not have a direct equivalent in Snowflake, so they are transformed into a functionally equivalent UDF. These can be easily spotted by the _UDF postfix in the name of the function. For more information on the UDFs SnowConvert uses, check this git repository.

Aggregate Functions

Teradata | Snowflake | Note
---------+-----------+-----
AVG | AVG |
CORR | CORR |
COUNT | COUNT |
COVAR_POP | COVAR_POP |
COVAR_SAMP | COVAR_SAMP |
GROUPING | GROUPING |
KURTOSIS | KURTOSIS |
MAXIMUM, MAX | MAX |
MINIMUM, MIN | MIN |
PIVOT | PIVOT | Check PIVOT.
REGR_AVGX | REGR_AVGX |
REGR_AVGY | REGR_AVGY |
REGR_COUNT | REGR_COUNT |
REGR_INTERCEPT | REGR_INTERCEPT |
REGR_R2 | REGR_R2 |
REGR_SLOPE | REGR_SLOPE |
REGR_SXX | REGR_SXX |
REGR_SXY | REGR_SXY |
REGR_SYY | REGR_SYY |
SKEW | SKEW |
STDDEV_POP | STDDEV_POP |
STDDEV_SAMP | STDDEV_SAMP |
SUM | SUM |
UNPIVOT | UNPIVOT | Unpivot with multiple functions not supported in Snowflake.
VAR_POP | VAR_POP |
VAR_SAMP | VAR_SAMP |

See Aggregate functions

Arithmetic, Trigonometric, Hyperbolic Operators/Functions

Teradata | Snowflake | Note
---------+-----------+-----
ABS | ABS |
CEILING | CEIL |
DEGREES | DEGREES |
EXP | EXP |
FLOOR | FLOOR |
HYPERBOLIC (ACOSH, ASINH, ATANH, COSH, SINH, TANH) | ACOSH, ASINH, ATANH, COSH, SINH, TANH |
LOG | LOG |
LN | LN |
MOD | MOD |
NULLIFZERO(param) | CASE WHEN param=0 THEN null ELSE param END |
POWER | POWER |
RANDOM | RANDOM |
RADIANS | RADIANS |
ROUND | ROUND |
SIGN | SIGN |
SQRT | SQRT |
TRUNC | TRUNC_UDF |
TRIGONOMETRIC (ACOS, ASIN, ATAN, ATAN2, COS, SIN, TAN) | ACOS, ASIN, ATAN, ATAN2, COS, SIN, TAN |
ZEROIFNULL | ZEROIFNULL |

See Arithmetic, Trigonometric, Hyperbolic Operators/Functions

Attribute Functions

Teradata | Snowflake | Note
---------+-----------+-----
BIT_LENGTH | BIT_LENGTH |
BYTE, BYTES | LENGTH |
CHAR, CHARS, CHARACTERS | LEN |
CHAR_LENGTH, CHARACTER_LENGTH | LEN |
MCHARACTERS | LENGTH |
OCTET_LENGTH | OCTET_LENGTH |

See Attribute functions

Bit/Byte Manipulation Functions

Teradata | Snowflake | Note
---------+-----------+-----
BITAND | BITAND |
BITNOT | BITNOT |
BITOR | BITOR |
BITXOR | BITXOR |
GETBIT | GETBIT |

See Bit/Byte functions

Built-In (System Functions)

Teradata | Snowflake | Note
---------+-----------+-----
ACCOUNT | CURRENT_ACCOUNT |
CURRENT_DATE, CURDATE | CURRENT_DATE |
CURRENT_ROLE | CURRENT_ROLE |
CURRENT_TIME, CURTIME | CURRENT_TIME |
CURRENT_TIMESTAMP | CURRENT_TIMESTAMP |
DATABASE | CURRENT_DATABASE |
DATE | CURRENT_DATE |
NOW | CURRENT_TIMESTAMP |
PROFILE | CURRENT_ROLE |
SESSION | CURRENT_SESSION |
TIME | CURRENT_TIME |
USER | CURRENT_USER |

See Built-In Functions

Business Calendars

Teradata | Snowflake | Note
---------+-----------+-----
DAYNUMBER_OF_MONTH(DatetimeValue, 'COMPATIBLE') | DAYOFMONTH |
DAYNUMBER_OF_MONTH(DatetimeValue, 'ISO') | DAYNUMBER_OF_MONTH_ISO_UDF |
DAYNUMBER_OF_MONTH(DatetimeValue, 'TERADATA') | DAYOFMONTH |
DAYNUMBER_OF_WEEK(DatetimeValue, 'ISO') | DAYOFWEEKISO |
DAYNUMBER_OF_WEEK(DatetimeValue, 'COMPATIBLE') | DAY_OF_WEEK_COMPATIBLE_UDF |
DAYNUMBER_OF_WEEK(DatetimeValue, 'TERADATA'), DAYNUMBER_OF_WEEK(DatetimeValue) | TD_DAY_OF_WEEK_UDF |
DAYNUMBER_OF_YEAR(DatetimeValue, 'ISO') | PUBLIC.DAY_OF_YEAR_ISO_UDF |
DAYNUMBER_OF_YEAR(DatetimeValue) | DAYOFYEAR |
QUARTERNUMBER_OF_YEAR | QUARTER |
TD_SUNDAY(DateTimeValue) | PREVIOUS_DAY(DateTimeValue, 'Sunday') |
WEEKNUMBER_OF_MONTH | WEEKNUMBER_OF_MONTH_UDF |
WEEKNUMBER_OF_QUARTER(dateTimeValue) | WEEKNUMBER_OF_QUARTER_UDF |
WEEKNUMBER_OF_QUARTER(dateTimeValue, 'ISO') | WEEKNUMBER_OF_QUARTER_ISO_UDF |
WEEKNUMBER_OF_QUARTER(dateTimeValue, 'COMPATIBLE') | WEEKNUMBER_OF_QUARTER_COMPATIBLE_UDF |
WEEKNUMBER_OF_YEAR(DateTimeValue, 'ISO') | WEEKISO |
YEARNUMBER_OF_CALENDAR(DATETIMEVALUE, 'COMPATIBLE') | YEAR |
YEARNUMBER_OF_CALENDAR(DATETIMEVALUE, 'ISO') | YEAROFWEEKISO |

See Business Calendars

Calendar Functions

Teradata | Snowflake | Note
---------+-----------+-----
DAYNUMBER_OF_WEEK(DatetimeValue) | TD_DAY_OF_WEEK_UDF |
DAYNUMBER_OF_WEEK(DatetimeValue, 'COMPATIBLE') | DAY_OF_WEEK_COMPATIBLE_UDF |
QuarterNumber_Of_Year(DatetimeValue, 'ISO') | QUARTER_OF_YEAR_ISO_UDF(DatetimeValue) |
TD_DAY_OF_CALENDAR | TD_DAY_OF_CALENDAR_UDF |
TD_DAY_OF_MONTH, DAYOFMONTH | DAYOFMONTH |
TD_DAY_OF_WEEK, DAYOFWEEK | TD_DAY_OF_WEEK_UDF |
TD_DAY_OF_YEAR | DAYOFYEAR |
TD_MONTH_OF_CALENDAR(DateTimeValue), MONTH_CALENDAR(DateTimeValue) | TD_MONTH_OF_CALENDAR_UDF(DateTimeValue) |
TD_WEEK_OF_CALENDAR(DateTimeValue), WEEK_OF_CALENDAR(DateTimeValue) | TD_WEEK_OF_CALENDAR_UDF(DateTimeValue) |
TD_WEEK_OF_YEAR | WEEK_OF_YEAR_UDF |
TD_YEAR_BEGIN(DateTimeValue) | YEAR_BEGIN_UDF(DateTimeValue) |
TD_YEAR_BEGIN(DateTimeValue, 'ISO') | YEAR_BEGIN_ISO_UDF(DateTimeValue) |
TD_YEAR_END(DateTimeValue) | YEAR_END_UDF(DateTimeValue) |
TD_YEAR_END(DateTimeValue, 'ISO') | YEAR_END_ISO_UDF(DateTimeValue) |
WEEKNUMBER_OF_MONTH(DateTimeValue) | WEEKNUMBER_OF_MONTH_UDF(DateTimeValue) |
WEEKNUMBER_OF_QUARTER(DateTimeValue) | WEEKNUMBER_OF_QUARTER_UDF(DateTimeValue) |
WEEKNUMBER_OF_QUARTER(DateTimeValue, 'ISO') | WEEKNUMBER_OF_QUARTER_ISO_UDF(DateTimeValue) |
WEEKNUMBER_OF_QUARTER(DateTimeValue, 'COMPATIBLE') | WEEKNUMBER_OF_QUARTER_COMPATIBLE_UDF(DateTimeValue) |
WEEKNUMBER_OF_YEAR(DateTimeValue) | WEEK_OF_YEAR_UDF(DateTimeValue) |
WEEKNUMBER_OF_YEAR(DateTimeValue, 'COMPATIBLE') | WEEK_OF_YEAR_COMPATIBLE_UDF(DateTimeValue) |

See Calendar Functions

Case Functions

Teradata | Snowflake | Note
---------+-----------+-----
COALESCE | COALESCE | Check Coalesce.
NULLIF | NULLIF |

See case functions

Comparison Functions

Teradata | Snowflake | Note
---------+-----------+-----
DECODE | DECODE |
GREATEST | GREATEST |
LEAST | LEAST |

See comparison functions

Data type conversions

Teradata | Snowflake | Note
---------+-----------+-----
CAST | CAST |
CAST(DatetimeValue AS INT) | DATE_TO_INT_UDF |
CAST(VarcharValue AS INTERVAL) | INTERVAL_UDF | Check Cast to INTERVAL datatype
TRYCAST | TRY_CAST |
FROM_BYTES | TO_NUMBER, TO_BINARY | FROM_BYTES with ASCII parameter not supported in Snowflake.

See Data Type Conversions

Data Type Conversion Functions

Teradata | Snowflake | Note
---------+-----------+-----
TO_BYTES(Input, 'Base10') | INT2HEX_UDF(Input) |
TO_NUMBER | TO_NUMBER |
TO_CHAR | TO_CHAR or equivalent expression | Check TO_CHAR.
TO_DATE | TO_DATE |
TO_DATE(input, 'YYYYDDD') | JULIAN_TO_DATE_UDF |

See Data Type Conversion Functions

DateTime and Interval functions

Teradata | Snowflake | Note
---------+-----------+-----
ADD_MONTHS | ADD_MONTHS |
EXTRACT | EXTRACT |
LAST_DAY | LAST_DAY |
MONTH | MONTH |
MONTHS_BETWEEN | MONTHS_BETWEEN_UDF |
NEXT_DAY | NEXT_DAY |
OADD_MONTHS | ADD_MONTHS |
ROUND(Numeric) | ROUND |
ROUND(Date) | ROUND_DATE_UDF |
TRUNC(Date) | TRUNC_UDF |
YEAR | YEAR |

See DateTime and Interval Functions and Expressions

Hash functions

Teradata | Snowflake | Note
---------+-----------+-----
HASH_MD5 | MD5 |
HASHAMP, HASHBAKAMP, HASHBUCKET, HASHROW | Not supported | Check notes on the architecture differences between Teradata and Snowflake

See Hash functions

JSON functions

Teradata | Snowflake | Note
---------+-----------+-----
NEW JSON | TO_JSON(PARSE_JSON()) | Check NEW JSON
JSON_CHECK | CHECK_JSON | Check JSON_CHECK
JSON_TABLE | Equivalent query | Check JSON_TABLE
JSONExtract, JSONExtractValue, JSONExtractLargeValue | JSON_EXTRACT_UDF | Check JSON_EXTRACT

See JSON documentation

Null-Handling functions

Teradata | Snowflake | Note
---------+-----------+-----
NVL | NVL |
NVL2 | NVL2 |

See Null-Handling functions

Ordered Analytical/Window Aggregate functions

Teradata | Snowflake | Note
---------+-----------+-----
CSUM(col1, col2) | SUM(col_1) OVER (PARTITION BY null ORDER BY col_2 ROWS UNBOUNDED PRECEDING) |
CUME_DIST | CUME_DIST |
DENSE_RANK | DENSE_RANK |
FIRST_VALUE | FIRST_VALUE |
LAG | LAG |
LAST_VALUE | LAST_VALUE |
LEAD | LEAD |
MAVG(csales, 2, cdate, csales) | AVG(csales) OVER (ORDER BY cdate, csales ROWS 1 PRECEDING) |
MEDIAN | MEDIAN |
MSUM(csales, 2, cdate, csales) | SUM(csales) OVER (ORDER BY cdate, csales ROWS 1 PRECEDING) |
PERCENT_RANK | PERCENT_RANK |
PERCENTILE_CONT | PERCENTILE_CONT |
PERCENTILE_DISC | PERCENTILE_DISC |
QUANTILE | QUANTILE |
RANK | RANK |
ROW_NUMBER | ROW_NUMBER |

See Window functions

Period functions and operators

Teradata | Snowflake | Note
---------+-----------+-----
BEGIN | PERIOD_BEGIN_UDF |
END | PERIOD_END_UDF |
INTERVAL | TIMESTAMPDIFF |
LAST | PERIOD_LAST_UDF |
LDIFF | PERIOD_LDIFF_UDF |
OVERLAPS | PUBLIC.PERIOD_OVERLAPS_UDF |
PERIOD | PERIOD_UDF |
PERIOD(datetimeValue, UNTIL_CHANGED), PERIOD(datetimeValue, UNTIL_CLOSED) | PERIOD_UDF(datetimeValue, '9999-12-31 23:59:59.999999') | See notes about ending bound constants
RDIFF | PERIOD_RDIFF_UDF |

See Period Functions and Operators

Query band functions

Teradata | Snowflake | Note
---------+-----------+-----
GETQUERYBANDVALUE | GETQUERYBANDVALUE_UDF | Check GETQUERYBANDVALUE

See Query band functions

Regex functions

Teradata | Snowflake | Note
---------+-----------+-----
REGEXP_INSTR | REGEXP_INSTR | Check Regex functions
REGEXP_REPLACE | REGEXP_REPLACE | Check Regex functions
REGEXP_SIMILAR | REGEXP_LIKE | Check Regex functions
REGEXP_SUBSTR | REGEXP_SUBSTR | Check Regex functions

See Regex functions

String operators and functions

Teradata | Snowflake | Note
---------+-----------+-----
ASCII | ASCII |
CHAR2HEXINT | CHAR2HEXINT_UDF |
CHR | CHR/CHAR |
CHAR_LENGTH | LEN |
CONCAT | CONCAT |
EDITDISTANCE | EDITDISTANCE |
INDEX | CHARINDEX | Check notes about implicit conversion
INITCAP | INITCAP |
INSTR | REGEXP_INSTR |
INSTR(StringValue, StringValue, NumericNegativeValue, NumericValue) | INSTR_UDF(StringValue, StringValue, NumericNegativeValue, NumericValue) |
LEFT | LEFT |
LENGTH | LENGTH |
LOWER | LOWER |
LPAD | LPAD |
LTRIM | LTRIM |
OREPLACE | REPLACE |
OTRANSLATE | TRANSLATE |
POSITION | POSITION | Check notes about implicit conversion
REVERSE | REVERSE |
RIGHT | RIGHT |
RPAD | RPAD |
RTRIM | RTRIM |
SOUNDEX | SOUNDEX_P123 |
STRTOK | STRTOK |
STRTOK_SPLIT_TO_TABLE | STRTOK_SPLIT_TO_TABLE | Check Strtok_split_to_table
SUBSTRING | SUBSTR/SUBSTR_UDF | Check Substring
TRANSLATE_CHK | TRANSLATE_CHK_UDF |
TRIM(LEADING '0' FROM aTABLE) | LTRIM(aTABLE, '0') |
TRIM(TRAILING '0' FROM aTABLE) | RTRIM(aTABLE, '0') |
TRIM(BOTH '0' FROM aTABLE) | TRIM(aTABLE, '0') |
TRIM(CAST(numericValue AS FORMAT '999')) | LPAD(numericValue, 3, 0) |
UPPER | UPPER |

See String operators and functions

St_Point functions

Teradata | Snowflake | Note
---------+-----------+-----
ST_SPHERICALDISTANCE | HAVERSINE, ST_DISTANCE |

See St_Point functions

Table operators

Teradata | Snowflake | Note
---------+-----------+-----
TD_UNPIVOT | Equivalent query | Check Td_unpivot

See Table Operators

XML functions

Teradata | Snowflake | Note
---------+-----------+-----
XMLAGG | LISTAGG | Check Xmlagg
XMLQUERY | Not Supported |

See XML functions

Extensibility UDFs

This section contains UDFs and other extensibility functions that are not offered as system built-in functions by Teradata but are transformed by SnowConvert.

Teradata | Snowflake | Note
---------+-----------+-----
CHKNUM | CHKNUM_UDF | Check this UDF download page
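
As a minimal sketch of how this mapping is applied (assuming CHKNUM is called with a single string argument; the table and column names below are hypothetical):

Teradata
 SELECT CHKNUM(phone_number) FROM contacts;
Snowflake
 SELECT
 CHKNUM_UDF(phone_number) FROM contacts;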

Notes

Architecture differences between Teradata and Snowflake

Teradata has a shared-nothing architecture with Access Module Processors (AMPs), where each AMP manages its own share of disk storage and is accessed through hashing when running queries. To take advantage of parallelism, the stored information should be evenly distributed among AMPs, and for this Teradata offers a group of hash-related functions that can be used to determine how good the current primary indexes are.
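
For reference, a typical Teradata data-distribution check built on these hash functions looks roughly like the sketch below (the table and column names are hypothetical); such queries have no direct Snowflake counterpart:

 SELECT HASHAMP(HASHBUCKET(HASHROW(primary_index_col))) AS amp_no,
        COUNT(*) AS row_count
 FROM my_table
 GROUP BY 1
 ORDER BY 2 DESC;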

Snowflake's architecture, on the other hand, is different: it manages how the data is stored on its own, so users do not need to worry about optimizing their data distribution.

Ending bound constants (UNTIL_CHANGED and UNTIL_CLOSED)

Both UNTIL_CHANGED and UNTIL_CLOSED are Teradata constants that represent an undefined ending bound for periods. Internally, these constants are represented as the maximum value a timestamp can have, i.e. '9999-12-31 23:59:59.999999'. During the migration of the PERIOD function, the ending bound, if present, is checked to determine whether it is one of these constants, and if so it is replaced with a varchar of value '9999-12-31 23:59:59.999999'. Snowflake then casts the varchar to date or timestamp, depending on the type of the beginning bound, when calling PERIOD_UDF.
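
As a minimal sketch of this transformation (using the PERIOD_UDF mapping listed in the Period functions table; the table and column names are hypothetical):

Teradata
 SELECT PERIOD(valid_from, UNTIL_CLOSED) FROM policy_history;
Snowflake
 SELECT
 PUBLIC.PERIOD_UDF(valid_from, '9999-12-31 23:59:59.999999') FROM policy_history;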

Implicit conversion

Some Teradata string functions, like INDEX or POSITION, accept non-string data types and implicitly convert them to string. This can cause inconsistencies in the results of those functions between Teradata and Snowflake. For example, the following Teradata code:

 SELECT INDEX(35, '5');
Copy

Returns 4, while the CHARINDEX equivalent in Snowflake:

 SELECT CHARINDEX('5', 35);
Copy

Returns 2. This happens because Teradata has its own default formats, which are used during implicit conversion. In the above example, Teradata interprets the numeric constant 35 as BYTEINT and uses the BYTEINT default format '-999' for the implicit conversion to string, causing the converted value to be ' 35'. Snowflake, on the other hand, uses its own default formats, creating inconsistencies in the result.

To solve this, the following changes are done to those function parameters:

  • If the parameter does not have a cast with a format, then a Snowflake TO_VARCHAR function with the Snowflake equivalent of the default Teradata format is added instead.

  • If the parameter does have a cast with a format, then the format is converted to its Snowflake equivalent and the TO_VARCHAR function is added.

    • As a side note, Teradata ignores the sign of a number if it is not explicitly included in the format, while Snowflake always adds a space for the sign even when it is not specified. For those cases, a check is done to see whether the sign was specified and to remove it from the Snowflake string when it was not.

After these changes, the resulting code would be:

 SELECT CHARINDEX( '5', TO_VARCHAR(35, 'MI999'));
Copy

This returns 4, the same as the Teradata code.

Known Issues

No issues were found.

Cast from Number Datatypes to Varchar Datatype

When casting to varchar, Teradata uses default formats for each number data type, so SnowConvert adds formats to keep the results equivalent across platforms.

Sample Source Patterns

BYTEINT

Teradata
 SELECT '"'||cast(cast(12 as BYTEINT) as varchar(10))||'"';
Copy
(('"'||12)||'"')|
----------------+
"12"            |

Copy
Snowflake
 SELECT
'"'|| LEFT(TO_VARCHAR(cast(12 as BYTEINT), 'TM'), 10) ||'"';
Copy
"'""'|| LEFT(TO_VARCHAR(CAST(12 AS BYTEINT), 'TM'), 10) ||'""'"
---------------------------------------------------------------
"12"            

Copy

SMALLINT

Teradata
 SELECT '"'||cast(cast(123 as SMALLINT) as varchar(10))||'"';
Copy
(('"'||123)||'"')|
-----------------+
"123"            |

Copy
Snowflake
 SELECT
'"'|| LEFT(TO_VARCHAR(CAST(123 AS SMALLINT), 'TM'), 10) ||'"';
Copy
"'""'|| LEFT(TO_VARCHAR(CAST(123 AS SMALLINT), 'TM'), 10) ||'""'"
-----------------------------------------------------------------
"123"
Copy

INTEGER

Teradata
 SELECT '"'||cast(cast(12345 as INTEGER) as varchar(10))||'"';
Copy
(('"'||12345)||'"')|
-------------------+
"12345"            |

Copy
Snowflake
 SELECT
'"'|| LEFT(TO_VARCHAR(CAST(12345 AS INTEGER), 'TM'), 10) ||'"';
Copy
"'""'|| LEFT(TO_VARCHAR(CAST(12345 AS INTEGER), 'TM'), 10) ||'""'"
------------------------------------------------------------------
"12345"

Copy

BIGINT

Teradata
 SELECT '"'||cast(cast(12345 as BIGINT) as varchar(10))||'"';
Copy
(('"'||12345)||'"')|
-------------------+
"12345"            |

Copy
Snowflake
 SELECT
       '"'|| LEFT(TO_VARCHAR(CAST(12345 AS BIGINT), 'TM'), 10) ||'"';
Copy
"'""'|| LEFT(TO_VARCHAR(CAST(12345 AS BIGINT), 'TM'), 10) ||'""'"
-----------------------------------------------------------------
"12345"

Copy

DECIMAL[(n[,m])] or NUMERIC[(n[,m])]

Teradata
 SELECT '"'||cast(cast(12345 as DECIMAL) as varchar(10))||'"',
       '"'||cast(cast(12345 as DECIMAL(12, 2)) as varchar(10))||'"';
Copy
(('"'||12345)||'"')|(('"'||12345)||'"')|
-------------------+-------------------+
"12345."           |"12345.00"         |

Copy
Snowflake
 SELECT
'"'|| LEFT(TO_VARCHAR(CAST(12345 AS DECIMAL), 'TM.'), 10) ||'"',
'"'|| LEFT(TO_VARCHAR(CAST(12345 AS DECIMAL(12, 2)), 'TM'), 10) ||'"';
Copy
'"'|| LEFT(TO_VARCHAR(CAST(12345 AS DECIMAL), 'TM.'), 10) ||'"'	'"'|| LEFT(TO_VARCHAR(CAST(12345 AS DECIMAL(12, 2)), 'TM'), 10) ||'"'
"12345."	"12345.00"    
Copy

Known Issues

  • Teradata treats the numbers between 0 and 1 differently than Snowflake. For those values, Teradata does not add the zero before the dot; meanwhile, Snowflake does.

Teradata

 SELECT '"'||cast(cast(-0.1 as DECIMAL(12, 2)) as varchar(10))||'"' AS column1,
       '"'||cast(cast(0.1 as DECIMAL(12, 2)) as varchar(10))||'"' AS column2;
Copy
COLUMN1          |COLUMN2
-----------------+--------------+
"-.10"           |".10"         |

Copy
Snowflake
 SELECT
'"'|| LEFT(TO_VARCHAR(CAST(-0.1 AS DECIMAL(12, 2)), 'TM'), 10) ||'"' AS column1,
'"'|| LEFT(TO_VARCHAR(CAST(0.1 AS DECIMAL(12, 2)), 'TM'), 10) ||'"' AS column2;
Copy
COLUMN1           |COLUMN2
------------------+---------------+
"-0.10"           |"0.10"         |

Copy

Related EWIs

No related EWIs.

Cast to DATE using { }

Description

The following syntax casts a date-formatted string to DATE datatype by putting a d before the string definition inside curly braces.

 SELECT {d '1233-10-10'}
Copy

Sample Source Patterns

Cast to DATE using curly braces

Teradata

 SELECT * FROM RESOURCE_DETAILS where change_ts >= {d '2022-09-10'};
Copy

Snowflake

 SELECT
* FROM
PUBLIC.RESOURCE_DETAILS
where change_ts >= DATE('2022-09-10');
Copy

Cast to INTERVAL datatype

Description

Snowflake does not support the INTERVAL data type, but it has INTERVAL constants that can be used in DateTime operations; other uses can be emulated using VARCHAR. SnowConvert transforms CAST functions to the INTERVAL datatype into an equivalent depending on the case:

  • When the value being cast is of type interval, a UDF will be generated to produce the new interval equivalent as a string.

  • When the value is a literal, a Snowflake interval constant will be generated if the cast is used in a datetime operation; otherwise, a literal string will be generated.

  • When the value is non-literal, a cast to string will be generated.

Sample Source Patterns

Non-interval literals

Teradata
 SELECT TIMESTAMP '2022-10-15 10:30:00' + CAST ('12:34:56.78' AS INTERVAL HOUR(2) TO SECOND(2)) AS VARCHAR_TO_INTERVAL,
TIMESTAMP '2022-10-15 10:30:00' + CAST(-5 AS INTERVAL YEAR(4)) AS NUMBER_TO_INTERVAL,
CAST('07:00' AS INTERVAL HOUR(2) TO MINUTE) AS OUTSIDE_DATETIME_OPERATION;
Copy
VARCHAR_TO_INTERVAL | NUMBER_TO_INTERVAL | OUTSIDE_DATETIME_OPERATION |
--------------------+--------------------+----------------------------+
2022-10-15 23:04:56 |2017-10-15 10:30:00 | 7:00                       |

Copy
Snowflake
 SELECT
TIMESTAMP '2022-10-15 10:30:00' + INTERVAL '12 HOUR, 34 MINUTE, 56 SECOND, 780000 MICROSECOND' AS VARCHAR_TO_INTERVAL,
TIMESTAMP '2022-10-15 10:30:00' + INTERVAL '-5 YEAR' AS NUMBER_TO_INTERVAL,
'07:00' AS OUTSIDE_DATETIME_OPERATION;
Copy
VARCHAR_TO_INTERVAL     | NUMBER_TO_INTERVAL     | OUTSIDE_DATETIME_OPERATION |
------------------------+------------------------+----------------------------+
2022-10-15 23:04:56.780 |2017-10-15 10:30:00.000 | 07:00                      |

Copy

Non-literal and non-interval values

Teradata
 SELECT TIMESTAMP '2022-10-15 10:30:00' + CAST('20 ' || '10' AS INTERVAL DAY TO HOUR) AS DATETIME_OPERATION,
CAST('20 ' || '10' AS INTERVAL DAY TO HOUR) AS OUTSIDE_DATETIME_OPERATION;
Copy
DATETIME_OPERATION  | OUTSIDE_DATETIME_OPERATION |
--------------------+----------------------------+
2022-11-04 20:30:00 | 20 10                      |

Copy
Snowflake
 SELECT
PUBLIC.DATETIMEINTERVALADD_UDF(TIMESTAMP '2022-10-15 10:30:00', CAST('20 ' || '10' AS VARCHAR(21)), 'DAY', '+') AS DATETIME_OPERATION,
CAST('20 ' || '10' AS VARCHAR(21)) AS OUTSIDE_DATETIME_OPERATION;
Copy
DATETIME_OPERATION      | OUTSIDE_DATETIME_OPERATION |
------------------------+----------------------------+
2022-11-04 20:30:00.000 | 20 10                      |

Copy

Cast of interval to another interval

Teradata
 SELECT
TIMESTAMP '2022-10-15 10:30:00' + CAST(INTERVAL '5999' MINUTE AS INTERVAL DAY TO HOUR) AS DATETIME_OPERATION,
CAST(INTERVAL '5999' MINUTE AS INTERVAL DAY TO HOUR) AS OUTSIDE_DATETIME_OPERATION;
Copy
DATETIME_OPERATION  | OUTSIDE_DATETIME_OPERATION |
--------------------+----------------------------+
2022-10-19 13:30:00 | 4 03                       |

Copy
Snowflake
 SELECT
PUBLIC.DATETIMEINTERVALADD_UDF(
TIMESTAMP '2022-10-15 10:30:00', PUBLIC.INTERVALTOINTERVAL_UDF('5999', 'MINUTE', 'MINUTE', 'DAY', 'HOUR'), 'DAY', '+') AS DATETIME_OPERATION,
PUBLIC.INTERVALTOINTERVAL_UDF('5999', 'MINUTE', 'MINUTE', 'DAY', 'HOUR') AS OUTSIDE_DATETIME_OPERATION;
Copy
DATETIME_OPERATION      | OUTSIDE_DATETIME_OPERATION |
------------------------+----------------------------+
2022-10-19 13:30:00.000 | 4 03                       |

Copy

Known Issues

No known issues.

Related EWIs

No related EWIs.

COALESCE

Description

The COALESCE function returns the first non-null element in a list. For more information check COALESCE.

COALESCE(element_1, element_2 [, element_3, ..., element_n])

Copy

Both Teradata and Snowflake COALESCE functions allow mixing numeric with string and date with timestamp parameters. However, they handle these two cases differently:

  • Numeric along with string parameters: Teradata converts all numeric parameters to varchar while Snowflake does the opposite

  • Timestamp along with date parameters: Teradata converts all timestamps to date while Snowflake does the opposite

To ensure functional equivalence in the first case, all numeric parameters are cast to string using the TO_VARCHAR function, which takes the format of the numbers into account. In the second case, all timestamps are cast to date using TO_DATE; Teradata ignores the format of timestamps when casting them, so the format is removed during the transformation.

Sample Source Patterns

Numeric mixed with string parameters

Teradata
  SELECT COALESCE(125, 'hello', cast(850 as format '-999'));
Copy
COLUMN1|
-------+
125    |

Copy
Snowflake
 SELECT
 COALESCE(TO_VARCHAR(125), 'hello', TO_VARCHAR(850, '9000'));
Copy
COLUMN1|
-------+
125    |

Copy

Timestamp mixed with date parameters

Teradata
 SELECT COALESCE(cast(TIMESTAMP '2021-09-14 10:14:59' as format 'HH:MI:SSBDD-MM-YYYY'), current_date);
Copy
COLUMN1    |
-----------+
2021-09-14 |

Copy
Snowflake
 SELECT
 COALESCE(TO_DATE(TIMESTAMP '2021-09-14 10:14:59' !!!RESOLVE EWI!!! /*** SSC-EWI-TD0025 - OUTPUT FORMAT 'HH:MI:SSBDD-MM-YYYY' NOT SUPPORTED. ***/!!!), CURRENT_DATE());
Copy
COLUMN1    |
-----------+
2021-09-14 |

Copy

Known Issues

No known issues.

Related EWIs

CURRENT_TIMESTAMP

Severity

Low

Description

Fractional seconds are only displayed if they are explicitly set in the TIME_OUTPUT_FORMAT session parameter.

Input code:
 SELECT current_timestamp(4) at local;
Copy
Output code:
 SELECT
current_timestamp(4);
Copy

Recommendations

  • Check if the TIME_OUTPUT_FORMAT session parameter is set to get the behavior that you want (see the sketch after this list).

  • If you need more support, you can email us at snowconvert-support@snowflake.com
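
For example, a minimal sketch of setting the session parameter so fractional seconds are displayed (the format string shown is just one possible choice):

 ALTER SESSION SET TIME_OUTPUT_FORMAT = 'HH24:MI:SS.FF';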

Known Issues

No issues were found.

Related EWIs

No related EWIs.

DAYNUMBER_OF_MONTH

Description

Returns the number of days elapsed from the beginning of the month to the given date. For more information check DAYNUMBER_OF_MONTH.

DAYNUMBER_OF_MONTH(expression [, calendar_name])

Copy

Both Teradata and Snowflake handle the DAYNUMBER_OF_MONTH function in the same way, except in one case:

  • The ISO calendar: An ISO month has 4 or 5 complete weeks. For more information check About ISO Computation.

To ensure functional equivalence, a user-defined function (UDF) is added for the ISO calendar case.

Sample Source Patterns

Teradata

 SELECT
    DAYNUMBER_OF_MONTH (DATE'2022-12-22'),
    DAYNUMBER_OF_MONTH (DATE'2022-12-22', NULL),
    DAYNUMBER_OF_MONTH (DATE'2022-12-22', 'Teradata'),
    DAYNUMBER_OF_MONTH (DATE'2022-12-22', 'COMPATIBLE');
Copy
COLUMN1|COLUMN2|COLUMN3|COLUMN4|
-------+-------+-------+-------+
22     |22     |22     |22     |

Copy
Snowflake
 SELECT
    DAYOFMONTH(DATE'2022-12-22'),
    DAYOFMONTH(DATE'2022-12-22'),
    DAYOFMONTH(DATE'2022-12-22'),
    DAYOFMONTH(DATE'2022-12-22');
Copy
COLUMN1|COLUMN2|COLUMN3|COLUMN4|
-------+-------+-------+-------+
22     |22     |22     |22     |

Copy

ISO calendar

Teradata
 SELECT DAYNUMBER_OF_MONTH (DATE'2022-12-22', 'ISO');
Copy
COLUMN1|
-------+
25     |

Copy
Snowflake
 SELECT
PUBLIC.DAYNUMBER_OF_MONTH_UDF(DATE'2022-12-22');
Copy
COLUMN1|
-------+
25     |

Copy

Known Issues

No issues were found.

Related EWIs

No related EWIs.

FROM_BYTES

Description

The FROM_BYTES function encodes a sequence of bits into a sequence of characters representing its encoding. For more information check FROM_BYTES(Encoding).

Snowflake does not have support for the FROM_BYTES function; however, some workarounds can be done for the most common occurrences of this function.

Sample Source Patterns

Teradata

 SELECT 
FROM_BYTES('5A1B'XB, 'base10'), --returns '23067'
FROM_BYTES('5A3F'XB, 'ASCII'), --returns 'Z\ESC '
FROM_BYTES('5A1B'XB, 'base16'); -- returns '5A1B'
Copy
COLUMN1    | COLUMN2    | COLUMN3 |
-----------+------------+---------+
23067      |  Z\ESC     | 5A1B    |

Copy
Snowflake
 SELECT
--returns '23067'
TO_NUMBER('5A1B', 'XXXX'),
--returns 'Z\ESC '
!!!RESOLVE EWI!!! /*** SSC-EWI-0031 - FROM_BYTES FUNCTION NOT SUPPORTED ***/!!!
FROM_BYTES(TO_BINARY('5A3F'), 'ASCII'),
TO_BINARY('5A1B', 'HEX'); -- returns '5A1B'
Copy
COLUMN1    | COLUMN2    | COLUMN3 |
-----------+------------+---------+
23067      |  Z\ESC     | 5A1B    |

Copy

Note

Some parts in the output code are omitted for clarity reasons.

Known Issues

  1. The TO_NUMBER format parameter must match the digits of the input string.

  2. There is no functionally equivalent built-in function for FROM_BYTES when encoding to ASCII.

Related EWIs

  1. SSC-EWI-0031: FUNCTION NOT SUPPORTED

GETQUERYBANDVALUE

Description

The GetQueryBandValue function searches for a name key inside the query band and returns its associated value if present. It can be used to search inside the transaction, session, or profile query band, or any of the key-value pairs of the query band.

For more information on this function check GetQueryBandValue in the Teradata documentation.

[SYSLIB.]GetQueryBandValue([QueryBandIn,] SearchType, Name);

Copy

Sample Source Patterns

Setup data

Teradata
 SET QUERY_BAND = 'hola=hello;adios=bye;' FOR SESSION;
Copy
Snowflake
 ALTER SESSION SET QUERY_TAG = 'hola=hello;adios=bye;';
Copy

GetQueryBandValue with QueryBandIn parameter

Teradata
 SELECT
GETQUERYBANDVALUE('=T> user=Mark;account=Mark200; =S> user=Sara;account=SaraDB;role=DbAdmin =P> user=Peter;account=Peter3;', 0, 'account') as Example1,
GETQUERYBANDVALUE('=T> user=Mark;account=Mark200; =S> user=Sara;account=SaraDB;role=DbAdmin =P> user=Peter;account=Peter3;', 1, 'account') as Example2,
GETQUERYBANDVALUE('=T> user=Mark;account=Mark200; =S> user=Sara;account=SaraDB;role=DbAdmin =P> user=Peter;account=Peter3;', 2, 'account') as Example3,
GETQUERYBANDVALUE('=T> user=Mark;account=Mark200; =S> user=Sara;account=SaraDB;role=DbAdmin =P> user=Peter;account=Peter3;', 3, 'account') as Example4,
GETQUERYBANDVALUE('=T> user=Mark;account=Mark200; =S> user=Sara;account=SaraDB;role=DbAdmin =P> user=Peter;account=Peter3;', 0, 'role') as Example5,
GETQUERYBANDVALUE('=T> user=Mark;account=Mark200; =S> user=Sara;account=SaraDB;role=DbAdmin =P> user=Peter;account=Peter3;', 1, 'role') as Example6;
Copy
+----------+----------+----------+----------+----------+----------+
| EXAMPLE1 | EXAMPLE2 | EXAMPLE3 | EXAMPLE4 | EXAMPLE5 | EXAMPLE6 |
+----------+----------+----------+----------+----------+----------+
| Mark200  | Mark200  | SaraDB   | Peter3   | DbAdmin  |          |
+----------+----------+----------+----------+----------+----------+

Copy
Snowflake
 SELECT
GETQUERYBANDVALUE_UDF('=T> user=Mark;account=Mark200; =S> user=Sara;account=SaraDB;role=DbAdmin =P> user=Peter;account=Peter3;', 0, 'account') as Example1,
GETQUERYBANDVALUE_UDF('=T> user=Mark;account=Mark200; =S> user=Sara;account=SaraDB;role=DbAdmin =P> user=Peter;account=Peter3;', 1, 'account') as Example2,
GETQUERYBANDVALUE_UDF('=T> user=Mark;account=Mark200; =S> user=Sara;account=SaraDB;role=DbAdmin =P> user=Peter;account=Peter3;', 2, 'account') as Example3,
GETQUERYBANDVALUE_UDF('=T> user=Mark;account=Mark200; =S> user=Sara;account=SaraDB;role=DbAdmin =P> user=Peter;account=Peter3;', 3, 'account') as Example4,
GETQUERYBANDVALUE_UDF('=T> user=Mark;account=Mark200; =S> user=Sara;account=SaraDB;role=DbAdmin =P> user=Peter;account=Peter3;', 0, 'role') as Example5,
GETQUERYBANDVALUE_UDF('=T> user=Mark;account=Mark200; =S> user=Sara;account=SaraDB;role=DbAdmin =P> user=Peter;account=Peter3;', 1, 'role') as Example6;
Copy
+----------+----------+----------+----------+----------+----------+
| EXAMPLE1 | EXAMPLE2 | EXAMPLE3 | EXAMPLE4 | EXAMPLE5 | EXAMPLE6 |
+----------+----------+----------+----------+----------+----------+
| Mark200  | Mark200  | SaraDB   | Peter3   | DbAdmin  |          |
+----------+----------+----------+----------+----------+----------+

Copy

GetQueryBandValue without QueryBandIn parameter

Teradata
 SELECT
GETQUERYBANDVALUE(2, 'hola') as Example1,
GETQUERYBANDVALUE(2, 'adios') as Example2;
Copy
+----------+----------+
| EXAMPLE1 | EXAMPLE2 |
+----------+----------+
| hello    | bye      |
+----------+----------+

Copy
Snowflake
 SELECT
GETQUERYBANDVALUE_UDF('hola') as Example1,
GETQUERYBANDVALUE_UDF('adios') as Example2;
Copy
+----------+----------+
| EXAMPLE1 | EXAMPLE2 |
+----------+----------+
| hello    | bye      |
+----------+----------+

Copy

Note

Some parts in the output code are omitted for clarity reasons.

Known Issues

1. GetQueryBandValue without the QueryBandIn parameter is only supported for the session

Teradata allows defining query bands at the transaction, session, or profile level. If GetQueryBandValue is called without specifying an input query band, Teradata will automatically check the transaction, session, or profile query band depending on the value of the SearchType parameter.

In Snowflake, the closest equivalent to query bands are query tags, which can be specified for the session, user, and account.

Due to these differences, the implementation of GetQueryBandValue without the QueryBandIn parameter only considers the session query tag and may not work as expected for other search types.

Related EWIs

No related EWIs.

JSON_CHECK

Description

The JSON_CHECK function checks a string for valid JSON.

For more information regarding Teradata JSON_CHECK, check here.

 [TD_SYSFNLIB.]JSON_CHECK(string_expr);
Copy

Sample Source Pattern

Basic Source Pattern

Teradata
 SELECT JSON_CHECK('{"key": "value"}');
Copy
Snowflake Scripting
 SELECT
IFNULL(CHECK_JSON('{"key": "value"}'), 'OK');
Copy

JSON_CHECK inside CASE transformation

Teradata
 SELECT CASE WHEN JSON_CHECK('{}') = 'OK' then 'OKK' ELSE 'NOT OK' END;
Copy
Snowflake Scripting
 SELECT
CASE
WHEN IFNULL(CHECK_JSON('{}'), 'OK') = 'OK'
THEN 'OKK' ELSE 'NOT OK'
END;
Copy

Known Issues

No issues were found.

Related EWIs

No related EWIs.

JSON_EXTRACT

Description

As per Teradata’s documentation, these functions use the JSONPath Query Syntax to request information about a portion of a JSON instance. The entity desired can be any portion of a JSON instance, such as a name/value pair, an object, an array, an array element, or a value.

For more information regarding Teradata JSONExtractValue, JSONExtractLargeValue and JSONExtract, check here.

 JSON_expr.JSONExtractValue(JSONPath_expr)

JSON_expr.JSONExtractLargeValue(JSONPath_expr)

JSON_expr.JSONExtract(JSONPath_expr)
Copy

The JSON_EXTRACT_UDF is a Snowflake implementation of the JSONPath specification that uses a modified version of the original JavaScript implementation developed by Stefan Goessner.

Sample Source Pattern

Teradata
 SELECT
    Store.JSONExtract('$..author') as AllAuthors,
    Store.JSONExtractValue('$..book[2].title') as ThirdBookTitle,
    Store.JSONExtractLargeValue('$..book[2].price') as ThirdBookPrice
FROM BookStores;
Copy
Snowflake Scripting
 SELECT
    JSON_EXTRACT_UDF(Store, '$..author', FALSE) as AllAuthors,
    JSON_EXTRACT_UDF(Store, '$..book[2].title', TRUE) as ThirdBookTitle,
    JSON_EXTRACT_UDF(Store, '$..book[2].price', TRUE) as ThirdBookPrice
    FROM
    BookStores;
Copy

Note

Some parts in the output code are omitted for clarity reasons.

Known Issues

1. Elements inside JSONs may not retain their original order.

Elements inside a JSON are ordered by their keys when inserted in a table. Thus, the query results might differ. However, this does not affect the order of arrays inside the JSON.

For example, if the original JSON is:

 { 
   "firstName":"Peter",
   "lastName":"Andre",
   "age":31,
   "cities": ["Los Angeles", "Lima", "Buenos Aires"]
}
Copy

Snowflake's PARSE_JSON() interprets an input string as a JSON document and produces a VARIANT value. The inserted JSON will be:

 { 
   "age": 31,
   "cities": ["Los Angeles", "Lima", "Buenos Aires"],
   "firstName": "Peter",
   "lastName": "Andre" 
}
Copy

Note how “age” is now the first element. However, the array of “cities” maintains its original order.

Related EWIs

No related EWIs.

JSON_TABLE

Note

Some parts in the output code are omitted for clarity reasons.

Description

Creates a table based on the contents of a JSON document. See JSON_TABLE documentation.

[TD_SYSFNLIB.]JSON_TABLE(
  ON (json_documents_retrieving_expr)
  USING 
      ROWEXPR (row_expr_literal) 
      COLEXPR (column_expr_literal)
  [AS] correlation_name [(column_name [,...])]
)

Copy

The conversion of JSON_TABLE has the following considerations:

  • ROW_NUMBER() is the equivalent of ordinal columns in Snowflake.

  • In Teradata, the second column of JSON_TABLE must be of JSON type because the generated columns replace it; for that reason, SnowConvert assumes that the column has the right type and uses it for the transformation.

Sample Source Patterns

Setup data

Teradata
 create table myJsonTable(
 col1 integer,
 col2 JSON(1000)
 );
 
 
insert into myJsonTable values(1, 
new json('{
"name": "Matt",
"age" : 30,
"songs" : [
	{"name" : "Late night", "genre" : "Jazz"},
	{"name" : "Wake up", "genre" : "Rock"},
	{"name" : "Who am I", "genre" : "Rock"},
	{"name" : "Raining", "genre" : "Blues"}
]
}'));
Copy
Snowflake
 CREATE OR REPLACE TABLE myJsonTable (
 col1 integer,
 col2 VARIANT
 )
 COMMENT = '{"origin":"sf_sc","name":"snowconvert","version":{"major":1, "minor":0},{"attributes":{"component":"teradata"}}'
;

INSERT INTO myJsonTable
VALUES (1, TO_JSON(PARSE_JSON('{
"name": "Matt",
"age" : 30,
"songs" : [
	{"name" : "Late night", "genre" : "Jazz"},
	{"name" : "Wake up", "genre" : "Rock"},
	{"name" : "Who am I", "genre" : "Rock"},
	{"name" : "Raining", "genre" : "Blues"}
]
}')));
Copy

Pattern code 1

Teradata
 SELECT * FROM
JSON_TABLE(ON (SELECT COL1, COL2 FROM myJsonTable WHERE col1 = 1)
USING rowexpr('$.songs[*]')
colexpr('[ {"jsonpath" : "$.name",
            "type" : "CHAR(20)"},
            {"jsonpath" : "$.genre",
             "type" : "VARCHAR(20)"}]')) AS JT(ID, "Song name", Genre);
Copy
ID | Song name  | Genre |
---+------------+-------+
1  | Late night | Jazz  |
---+------------+-------+
1  | Wake up    | Rock  |
---+------------+-------+
1  | Who am I   | Rock  |
---+------------+-------+
1  | Raining    | Blues |

Copy
Snowflake
 SELECT
* FROM
(
SELECT
COL1 AS ID,
rowexpr.value:name :: CHAR(20) AS "Song name",
rowexpr.value:genre :: VARCHAR(20) AS Genre
FROM
myJsonTable,
TABLE(FLATTEN(INPUT => COL2:songs)) rowexpr
WHERE col1 = 1
) JT;
Copy
ID | Song name  | Genre |
---+------------+-------+
1  | Late night | Jazz  |
---+------------+-------+
1  | Wake up    | Rock  |
---+------------+-------+
1  | Who am I   | Rock  |
---+------------+-------+
1  | Raining    | Blues |

Copy

Known Issues

1. The JSON path in COLEXPR cannot have multiple asterisk accesses

The column JSON paths cannot have multiple lists with asterisk access, for example: $.Names[*].FullNames[*]. On the other hand, the JSON path of ROWEXPR can have it.

2. The JSON structure defined in the COLEXPR literal must be valid JSON

When that is not the case, the user will be warned that the JSON is badly formed.

Related EWIs

No related EWIs.

NEW JSON

Description

Allocates a new instance of a JSON datatype. For more information check NEW JSON Constructor Expression.

NEW JSON ( [ JSON_string_spec | JSON_binary_data_spec ] )

JSON_string_spec := JSON_String_literal [, { LATIN | UNICODE | BSON | UBJSON } ]

JSON_binary_data_spec := JSON_binary_literal [, { BSON | UBJSON } ]

Copy

The second parameter of the NEW JSON function is always omitted by SnowConvert since Snowflake works only with UTF-8.

Sample Source Patterns

NEW JSON with string data

Teradata
 SELECT NEW JSON ('{"name" : "cameron", "age" : 24}'),
NEW JSON ('{"name" : "cameron", "age" : 24}', LATIN);
Copy
COLUMN1                     | COLUMN2                     |
----------------------------+-----------------------------+
{"age":24,"name":"cameron"} | {"age":24,"name":"cameron"} |

Copy
Snowflake
 SELECT
TO_JSON(PARSE_JSON('{"name" : "cameron", "age" : 24}')),
!!!RESOLVE EWI!!! /*** SSC-EWI-TD0039 - INPUT FORMAT 'LATIN' NOT SUPPORTED ***/!!!
TO_JSON(PARSE_JSON('{"name" : "cameron", "age" : 24}'));
Copy
COLUMN1                     | COLUMN2                     |
----------------------------+-----------------------------+
{"age":24,"name":"cameron"} | {"age":24,"name":"cameron"} |

Copy

Known Issues

1. The second parameter is not supported

The second parameter of the function, used to specify the format of the resulting JSON, is not supported because Snowflake only supports UTF-8. This may result in functional differences for some uses of the function.

2. JSON with BINARY data is not supported

Snowflake does not support parsing binary data to create a JSON value; the user will be warned when SnowConvert finds a NEW JSON with binary data.

Related EWIs

  1. SSC-EWI-TD0039: Input format not supported.

NVP

Description

Extracts the value of the key-value pair where the key matches the nth occurrence of the specified name to search. See NVP.

[TD_SYSFNLIB.] NVP (
in_string,
name_to_search
[, name_delimiters ]
[, value_delimiters ]
[, occurrence ]
)

Copy

Sample Source Patterns

NVP basic case

Teradata
 SELECT
NVP('entree=-orange chicken&entree+.honey salmon', 'entree', '&', '=- +.', 1),
NVP('Hello=bye|name=Lucas|Hello=world!', 'Hello', '|', '=', 2),
NVP('Player=Mario$Game&Tenis%Player/Susana$Game=Chess', 'Player', '% $', '= & /', 2);
Copy
COLUMN1        | COLUMN2 | COLUMN3 |
---------------+---------+---------+
orange chicken | world!  | Susana  |

Copy
Snowflake
 SELECT
PUBLIC.NVP_UDF('entree=-orange chicken&entree+.honey salmon', 'entree', '&', '=- +.', 1),
PUBLIC.NVP_UDF('Hello=bye|name=Lucas|Hello=world!', 'Hello', '|', '=', 2),
PUBLIC.NVP_UDF('Player=Mario$Game&Tenis%Player/Susana$Game=Chess', 'Player', '% $', '= & /', 2);
Copy
COLUMN1        | COLUMN2 | COLUMN3 |
---------------+---------+---------+
orange chicken | world!  | Susana  |

Copy

NVP with optional parameters ignored

Teradata
 SELECT
NVP('City=Los Angeles&Color=Green&Color=Blue&City=San Jose', 'Color'),
NVP('City=Los Angeles&Color=Green&Color=Blue&City=San Jose', 'Color', 2),
NVP('City=Los Angeles#Color=Green#Color=Blue#City=San Jose', 'City', '#', '=');
Copy
COLUMN1 | COLUMN2 | COLUMN3     |
--------+---------+-------------+
Green   | Blue    | Los Angeles |

Copy
Snowflake
 SELECT
    PUBLIC.NVP_UDF('City=Los Angeles&Color=Green&Color=Blue&City=San Jose', 'Color', '&', '=', 1),
    PUBLIC.NVP_UDF('City=Los Angeles&Color=Green&Color=Blue&City=San Jose', 'Color', '&', '=', 2),
    PUBLIC.NVP_UDF('City=Los Angeles#Color=Green#Color=Blue#City=San Jose', 'City', '#', '=', 1);
Copy
COLUMN1 | COLUMN2 | COLUMN3     |
--------+---------+-------------+
Green   | Blue    | Los Angeles |

Copy

NVP with spaces in delimiters

Teradata
 SELECT
NVP('store = whole foods&&store: ?Bristol farms','store', '&&', '\ =\  :\ ?', 2),
NVP('Hello = bye|name = Lucas|Hello = world!', 'Hello', '|', '\ =\ ', 2);
Copy
COLUMN1       | COLUMN2 |
--------------+---------+
Bristol farms | world!  |

Copy
Snowflake
 SELECT
PUBLIC.NVP_UDF('store = whole foods&&store: ?Bristol farms', 'store', '&&', '\\ =\\  :\\ ?', 2),
PUBLIC.NVP_UDF('Hello = bye|name = Lucas|Hello = world!', 'Hello', '|', '\\ =\\ ', 2);
Copy
 COLUMN1       | COLUMN2 |
--------------+---------+
Bristol farms | world!  |
Copy

NVP with non-literal delimiters

Teradata
 SELECT NVP('store = whole foods&&store: ?Bristol farms','store', '&&', valueDelimiter, 2);
Copy
Snowflake
 SELECT
PUBLIC.NVP_UDF('store = whole foods&&store: ?Bristol farms', 'store', '&&', valueDelimiter, 2) /*** SSC-FDM-TD0008 - WHEN NVP_UDF FOURTH PARAMETER IS NON-LITERAL AND IT CONTAINS A BACKSLASH, THAT BACKSLASH NEEDS TO BE ESCAPED ***/;
Copy

Known Issues

1. Delimiters with spaces (\ ) need to have the backslash escaped in Snowflake

In Teradata, delimiters that include a space specify it using '\ ' (see NVP with spaces in delimiters). As shown in the examples, it is not necessary to escape the backslash in Teradata, but it is necessary in Snowflake. Escaping the backslashes in the delimiter can be done automatically by SnowConvert, but only if the delimiter values are literal strings; otherwise, the user will be warned that the backslashes could not be escaped and that this may cause different results in Snowflake.

Related EWIs

  1. SSC-FDM-TD0008: Non-literal delimiters with spaces need their backslash escaped in Snowflake.

OVERLAPS

Description

According to Teradata’s documentation, the OVERLAPS operator compares two or more period expressions. If they overlap, it returns true.

For more information regarding Teradata’s OVERLAPS, check here.

 period_expression
OVERLAPS
period_expression
Copy

The PERIOD_OVERLAPS_UDF is a Snowflake implementation of the OVERLAPS operator in Teradata.

Sample Source Pattern

Teradata

 SELECT 
    PERIOD(DATE '2009-01-01', DATE '2010-09-24') 
    OVERLAPS
    PERIOD(DATE '2009-02-01', DATE '2009-06-24');
Copy

Snowflake Scripting

 SELECT
    PUBLIC.PERIOD_OVERLAPS_UDF(ARRAY_CONSTRUCT(PUBLIC.PERIOD_UDF(DATE '2009-01-01', DATE '2010-09-24') !!!RESOLVE EWI!!! /*** SSC-EWI-TD0053 - SNOWFLAKE DOES NOT SUPPORT THE PERIOD DATATYPE, ALL PERIODS ARE HANDLED AS VARCHAR INSTEAD ***/!!!, PUBLIC.PERIOD_UDF(DATE '2009-02-01', DATE '2009-06-24') !!!RESOLVE EWI!!! /*** SSC-EWI-TD0053 - SNOWFLAKE DOES NOT SUPPORT THE PERIOD DATATYPE, ALL PERIODS ARE HANDLED AS VARCHAR INSTEAD ***/!!!)) !!!RESOLVE EWI!!! /*** SSC-EWI-TD0053 - SNOWFLAKE DOES NOT SUPPORT THE PERIOD DATATYPE, ALL PERIODS ARE HANDLED AS VARCHAR INSTEAD ***/!!!;
Copy

Known Issues

1. Unsupported Period Expressions

The PERIOD(TIME WITH TIME ZONE) and PERIOD(TIMESTAMP WITH TIME ZONE) expressions are not supported yet.

Related EWIs

  1. SSC-EWI-TD0053: Snowflake does not support the period datatype, all periods are handled as varchar instead

P_INTERSECT

Description

According to Teradata’s documentation, the P_INTERSECT operator compares two or more period expressions. If they overlap, it returns the common portion of the period expressions.

For more information regarding Teradata’s P_INTERSECT, check here.

 period_expression
P_INTERSECT
period_expression
Copy

The PERIOD_INTERSECT_UDF is a Snowflake implementation of the P_INTERSECT operator in Teradata.

Sample Source Pattern

Teradata

 SELECT 
    PERIOD(DATE '2009-01-01', DATE '2010-09-24') 
    P_INTERSECT 
    PERIOD(DATE '2009-02-01', DATE '2009-06-24');
Copy

Snowflake Scripting

 SELECT
    PUBLIC.PERIOD_INTERSECT_UDF(ARRAY_CONSTRUCT(PUBLIC.PERIOD_UDF(DATE '2009-01-01', DATE '2010-09-24') !!!RESOLVE EWI!!! /*** SSC-EWI-TD0053 - SNOWFLAKE DOES NOT SUPPORT THE PERIOD DATATYPE, ALL PERIODS ARE HANDLED AS VARCHAR INSTEAD ***/!!!, PUBLIC.PERIOD_UDF(DATE '2009-02-01', DATE '2009-06-24') !!!RESOLVE EWI!!! /*** SSC-EWI-TD0053 - SNOWFLAKE DOES NOT SUPPORT THE PERIOD DATATYPE, ALL PERIODS ARE HANDLED AS VARCHAR INSTEAD ***/!!!)) !!!RESOLVE EWI!!! /*** SSC-EWI-TD0053 - SNOWFLAKE DOES NOT SUPPORT THE PERIOD DATATYPE, ALL PERIODS ARE HANDLED AS VARCHAR INSTEAD ***/!!!;
Copy

Known Issues

1. Unsupported Period Expressions

The PERIOD(TIME WITH TIME ZONE) and PERIOD(TIMESTAMP WITH TIME ZONE) expressions are not supported yet.

Related EWIs

  1. SSC-EWI-TD0053: Snowflake does not support the period datatype, all periods are handled as varchar instead

PIVOT

Note

Some parts in the output code are omitted for clarity reasons.

Description

The pivot function is used to transform rows of a table into columns. For more information check the PIVOT Teradata documentation.

PIVOT ( pivot_spec )
  [ WITH with_spec [,...] ]
  [AS] derived_table_name [ ( cname [,...] ) ]
  
pivot_spec := aggr_fn_spec [,...] FOR for_spec

aggr_fn_spec := aggr_fn ( cname ) [ [AS] pvt_aggr_alias ]

for_spec := { cname IN ( expr_spec_1 [,...] ) |
( cname [,...] ) IN ( expr_spec_2 [,...] ) |
cname IN ( subquery )
}

expr_spec_1 := expr [ [AS] expr_alias_name ]

expr_spec_2 := ( expr [,...] ) [ [AS] expr_alias_name ]

with_spec := aggr_fn ( { cname [,...] | * } ) [AS] aggr_alias

Copy

Sample Source Patterns

Setup data

Teradata
 CREATE TABLE star1(
	country VARCHAR(20),
	state VARCHAR(10), 
	yr INTEGER,
	qtr VARCHAR(3),
	sales INTEGER,
	cogs INTEGER
);

insert into star1 values ('USA', 'CA', 2001, 'Q1', 30, 15);
insert into star1 values ('Canada', 'ON', 2001, 'Q2', 10, 0);
insert into star1 values ('Canada', 'BC', 2001, 'Q3', 10, 0);
insert into star1 values ('USA', 'NY', 2001, 'Q1', 45, 25);
insert into star1 values ('USA', 'CA', 2001, 'Q2', 50, 20);
Copy
Snowflake
 CREATE OR REPLACE TABLE star1 (
	country VARCHAR(20),
	state VARCHAR(10),
	yr INTEGER,
	qtr VARCHAR(3),
	sales INTEGER,
	cogs INTEGER
)
COMMENT = '{"origin":"sf_sc","name":"snowconvert","version":{"major":1, "minor":0},{"attributes":{"component":"teradata"}}'
;

INSERT INTO star1
VALUES ('USA', 'CA', 2001, 'Q1', 30, 15);

INSERT INTO star1
VALUES ('Canada', 'ON', 2001, 'Q2', 10, 0);

INSERT INTO star1
VALUES ('Canada', 'BC', 2001, 'Q3', 10, 0);

INSERT INTO star1
VALUES ('USA', 'NY', 2001, 'Q1', 45, 25);

INSERT INTO star1
VALUES ('USA', 'CA', 2001, 'Q2', 50, 20);
Copy

Basic PIVOT transformation

Teradata
 SELECT *
FROM star1 PIVOT (
	SUM(sales) FOR qtr                                                                                               
    IN ('Q1',                                                                                                     
    	'Q2', 
        'Q3')
)Tmp;
Copy
Country | State | yr   | cogs | 'Q1' | 'Q2' | 'Q3' |
--------+-------+------+------+------+------+------+
Canada	| BC	| 2001 | 0    | null | null | 10   |
--------+-------+------+------+------+------+------+
USA 	| NY	| 2001 | 25   | 45   | null | null |
--------+-------+------+------+------+------+------+
Canada 	| ON 	| 2001 | 0    | null | 10   | null |
--------+-------+------+------+------+------+------+
USA 	| CA 	| 2001 | 20   | null | 50   | null |
--------+-------+------+------+------+------+------+
USA 	| CA 	| 2001 | 15   | 30   | null | null |
--------+-------+------+------+------+------+------+

Copy
Snowflake
 SELECT
	*
FROM
	star1 PIVOT(
	SUM(sales) FOR qtr IN ('Q1',
	   	'Q2',
	       'Q3'))Tmp;
Copy
Country | State | yr   | cogs | 'Q1' | 'Q2' | 'Q3' |
--------+-------+------+------+------+------+------+
Canada	| BC	| 2001 | 0    | null | null | 10   |
--------+-------+------+------+------+------+------+
USA 	| NY	| 2001 | 25   | 45   | null | null |
--------+-------+------+------+------+------+------+
Canada 	| ON 	| 2001 | 0    | null | 10   | null |
--------+-------+------+------+------+------+------+
USA 	| CA 	| 2001 | 20   | null | 50   | null |
--------+-------+------+------+------+------+------+
USA 	| CA 	| 2001 | 15   | 30   | null | null |
--------+-------+------+------+------+------+------+

Copy

PIVOT with aliases transformation

Teradata
 SELECT *
FROM star1 PIVOT (
	SUM(sales) as ss1 FOR qtr                                                                                               
    IN ('Q1' AS Quarter1,                                                                                                     
    	'Q2' AS Quarter2, 
        'Q3' AS Quarter3)
)Tmp;
Copy
Country | State | yr   | cogs | Quarter1_ss1 | Quarter2_ss1 | Quarter3_ss1 |
--------+-------+------+------+--------------+--------------+--------------+
Canada	| BC	| 2001 | 0    | null 	     | null         | 10           |
--------+-------+------+------+--------------+--------------+--------------+
USA 	| NY	| 2001 | 25   | 45 	     | null 	    | null         |
--------+-------+------+------+--------------+--------------+--------------+
Canada 	| ON 	| 2001 | 0    | null 	     | 10 	    | null 	   |
--------+-------+------+------+--------------+--------------+--------------+
USA 	| CA 	| 2001 | 20   | null         | 50           | null         |
--------+-------+------+------+--------------+--------------+--------------+
USA 	| CA 	| 2001 | 15   | 30           | null         | null         |
--------+-------+------+------+--------------+--------------+--------------+

Copy
Snowflake
 SELECT
	*
FROM
	!!!RESOLVE EWI!!! /*** SSC-EWI-0015 - PIVOT/UNPIVOT RENAME COLUMN NOT SUPPORTED ***/!!!
	star1 PIVOT(
	SUM(sales) FOR qtr IN (
	                       !!!RESOLVE EWI!!! /*** SSC-EWI-0015 - PIVOT/UNPIVOT RENAME COLUMN NOT SUPPORTED ***/!!!
	                       'Q1',
	!!!RESOLVE EWI!!! /*** SSC-EWI-0015 - PIVOT/UNPIVOT RENAME COLUMN NOT SUPPORTED ***/!!!
	   	'Q2',
	!!!RESOLVE EWI!!! /*** SSC-EWI-0015 - PIVOT/UNPIVOT RENAME COLUMN NOT SUPPORTED ***/!!!
	       'Q3'))Tmp;
Copy
 Country | State | yr   | cogs | Quarter1_ss1 | Quarter2_ss1 | Quarter3_ss1 |
--------+-------+------+------+--------------+--------------+--------------+
Canada	| BC	| 2001 | 0    | null 	     | null         | 10           |
--------+-------+------+------+--------------+--------------+--------------+
USA 	| NY	| 2001 | 25   | 45 	     | null 	    | null         |
--------+-------+------+------+--------------+--------------+--------------+
Canada 	| ON 	| 2001 | 0    | null 	     | 10 	    | null 	   |
--------+-------+------+------+--------------+--------------+--------------+
USA 	| CA 	| 2001 | 20   | null         | 50           | null         |
--------+-------+------+------+--------------+--------------+--------------+
USA 	| CA 	| 2001 | 15   | 30           | null         | null         |
--------+-------+------+------+--------------+--------------+--------------+
Copy

Known Issues

1. WITH clause not supported

Using the WITH clause is not currently supported.

2. Pivot over multiple pivot columns not supported

SnowConvert transforms the Teradata PIVOT function into the Snowflake PIVOT function, which only supports applying the aggregation over a single pivot column.

3. Pivot with multiple aggregate functions not supported

The PIVOT function in Snowflake only supports applying one aggregate function over the data.

4. Subquery in the IN clause not supported

The IN clause of the Snowflake PIVOT function does not accept subqueries.

5. Aliases are only supported if all IN clause elements have one and the table specification is present

For the column names with aliases to be equivalent, SnowConvert requires that every value specified in the IN clause has an alias and that the table specification is present in the input code; this is necessary so SnowConvert can successfully create the alias list for the resulting table.

Related EWIs

  1. SSC-EWI-0015: The input pivot/unpivot statement form is not supported

RANK

Description

RANK sorts a result set and identifies the numeric rank of each row in the result. The only argument for RANK is the sort column or columns, and the function returns an integer that represents the rank of each row in the result. (RANK in Teradata)

Teradata syntax

 RANK ( sort_expression [ ASC | DESC ] [,...] )
Copy

Snowflake syntax

 RANK() OVER 
( 
    [ PARTITION BY <expr1> ]
    ORDER BY <expr2> [ { ASC | DESC } ] 
    [ <window_frame> ]
)
Copy

Sample Source Pattern

Setup data

Teradata
 CREATE TABLE Sales (
  Product VARCHAR(255),
  Sales INT
);

INSERT INTO Sales (Product, Sales) VALUES ('A', 100);
INSERT INTO Sales (Product, Sales) VALUES ('B', 150);
INSERT INTO Sales (Product, Sales) VALUES ('C', 200);
INSERT INTO Sales (Product, Sales) VALUES ('D', 150);
INSERT INTO Sales (Product, Sales) VALUES ('E', 120);
INSERT INTO Sales (Product, Sales) VALUES ('F', NULL);
Copy
Snowflake
 CREATE OR REPLACE TABLE Sales (
  Product VARCHAR(255),
  Sales INT
)
COMMENT = '{"origin":"sf_sc","name":"snowconvert","version":{"major":1, "minor":0},{"attributes":{"component":"teradata"}}'
;

INSERT INTO Sales (Product, Sales)
VALUES ('A', 100);

INSERT INTO Sales (Product, Sales)
VALUES ('B', 150);

INSERT INTO Sales (Product, Sales)
VALUES ('C', 200);

INSERT INTO Sales (Product, Sales)
VALUES ('D', 150);

INSERT INTO Sales (Product, Sales)
VALUES ('E', 120);

INSERT INTO Sales (Product, Sales)
VALUES ('F', NULL);
Copy

RANK() using ASC, DESC, and DEFAULT order

Teradata

Warning

Notice that Teradata’s ordering default value when calling RANK() is DESC. However, the default in Snowflake is ASC. Thus, DESC is added in the conversion of RANK() when no order is specified.

 SELECT
  Sales,
  RANK(Sales ASC) AS SalesAsc,
  RANK(Sales DESC) AS SalesDesc,
  RANK(Sales) AS SalesDefault
FROM
  Sales;
Copy
SALES | SALESASC | SALESDESC | SALESDEFAULT
------+----------+-----------+-------------
NULL  | 6        | 6         | 6
200   | 5        | 1         | 1
150   | 3        | 2         | 2
150   | 3        | 2         | 2
120   | 2        | 4         | 4
100   | 1        | 5         | 5
Snowflake
 SELECT
  Sales,
  RANK() OVER (
  ORDER BY
    Sales ASC) AS SalesAsc,
    RANK() OVER (
    ORDER BY
    Sales DESC NULLS LAST) AS SalesDesc,
    RANK() OVER (
    ORDER BY
    Sales DESC NULLS LAST) AS SalesDefault
    FROM
    Sales;
Copy
SALES | SALESASC | SALESDESC | SALESDEFAULT
------+----------+-----------+-------------
NULL  | 6        | 6         | 6
200   | 5        | 1         | 1
150   | 3        | 2         | 2
150   | 3        | 2         | 2
120   | 2        | 4         | 4
100   | 1        | 5         | 5

Known Issues

No issues were found.

Related EWIs

No related EWIs.

Regex functions

Description

Both Teradata and Snowflake offer support for functions that apply regular expressions over varchar inputs. See the Teradata documentation and Snowflake documentation for more details.

REGEXP_SUBSTR(source, regexp [, position, occurrence, match])
REGEXP_REPLACE(source, regexp [, replace_string, position, occurrence, match])
REGEXP_INSTR(source, regexp [, position, occurrence, return_option, match])
REGEXP_SIMILAR(source, regexp [, match])
REGEXP_SPLIT_TO_TABLE(inKey, source, regexp, match)

Copy

Sample Source Patterns

Setup data

Teradata
 CREATE TABLE regexpTable
(
    col1 CHAR(35)
);

INSERT INTO regexpTable VALUES('hola');
Copy
Snowflake
 CREATE OR REPLACE TABLE regexpTable
(
    col1 CHAR(35)
)
COMMENT = '{"origin":"sf_sc","name":"snowconvert","version":{"major":1, "minor":0},{"attributes":{"component":"teradata"}}'
;

INSERT INTO regexpTable
VALUES ('hola');
Copy

Regex transformation example

Teradata
 SELECT
REGEXP_REPLACE(col1,'.*(h(i|o))','ha', 1, 0, 'x'),
REGEXP_SUBSTR(COL1,'.*(h(i|o))', 2, 1, 'x'),
REGEXP_INSTR(COL1,'.*(h(i|o))',1, 1, 0, 'x'),
REGEXP_SIMILAR(COL1,'.*(h(i|o))', 'xl')
FROM regexpTable;
Copy
COLUMN1|COLUMN2|COLUMN3|COLUMN4|
-------+-------+-------+-------+
hala   |null   |1      |0      |

Copy
Snowflake
 --** SSC-FDM-0007 - MISSING DEPENDENT OBJECT "regexpTable" **
SELECT
REGEXP_REPLACE(col1, '.*(h(i|o))', 'ha', 1, 0),
REGEXP_SUBSTR(COL1, '.*(h(i|o))', 2, 1),
REGEXP_INSTR(COL1, '.*(h(i|o))', 1, 1, 0),
--** SSC-FDM-TD0016 - VALUE 'l' FOR PARAMETER 'match_arg' IS NOT SUPPORTED IN SNOWFLAKE **
REGEXP_LIKE(COL1, '.*(h(i|o))')
FROM
regexpTable;
Copy
COLUMN1|COLUMN2|COLUMN3|COLUMN4|
-------+-------+-------+-------+
hala   |null   |1      |FALSE  |

Copy

Known Issues

1. Snowflake only supports POSIX regular expressions

The user will be warned when SnowConvert finds a non-POSIX regular expression.

2. Teradata “match_arg” option ‘l’ is unsupported in Snowflake

The option ‘l’ has no counterpart in Snowflake, and the user will be warned when SnowConvert finds it.

3. Fixed size of the CHAR datatype may cause different behavior

Some regex functions in Teradata will try to match the whole column of CHAR datatype in a table even if some of the characters in the column were left empty due to a smaller string being inserted. In Snowflake this does not happen because the CHAR datatype is of variable size.

4. REGEXP_SPLIT_TO_TABLE not supported

The function is currently not supported by Snowflake.

Related EWIs

  1. SSC-FDM-0007: Element with missing dependencies.

  2. SSC-FDM-TD0016: Value ‘l’ for parameter ‘match_arg’ is not supported in Snowflake.

STRTOK_SPLIT_TO_TABLE

Description

Split a string into a table using the provided delimiters. For more information check STRTOK_SPLIT_TO_TABLE.

[TD_SYSFNLIB.] STRTOK_SPLIT_TO_TABLE ( inkey, instring, delimiters )
  RETURNS ( outkey, tokennum, token )

Copy

Sample Source Patterns

Setup data

Teradata
 CREATE TABLE strtokTable
(
	col1 INTEGER,
	col2 VARCHAR(100)
);

INSERT INTO strtokTable VALUES(4, 'hello-world-split-me');
INSERT INTO strtokTable VALUES(1, 'string$split$by$dollars');
Copy
Snowflake
 CREATE OR REPLACE TABLE strtokTable
(
	col1 INTEGER,
	col2 VARCHAR(100)
)
COMMENT = '{"origin":"sf_sc","name":"snowconvert","version":{"major":1, "minor":0},{"attributes":{"component":"teradata"}}'
;

INSERT INTO strtokTable
VALUES (4, 'hello-world-split-me');

INSERT INTO strtokTable
VALUES (1, 'string$split$by$dollars');

STRTOK_SPLIT_TO_TABLE transformation

Teradata
 SELECT outkey, tokennum, token FROM table(STRTOK_SPLIT_TO_TABLE(strtokTable.col1, strtokTable.col2, '-$')
RETURNS (outkey INTEGER, tokennum INTEGER, token VARCHAR(100))) AS testTable
ORDER BY outkey, tokennum;
outkey |tokennum | token  |
-------+---------+--------+
1      |1        |string  |
1      |2        |split   |
1      |3        |by      |
1      |4        |dollars |
4      |1        |hello   |
4      |2        |world   |
4      |3        |split   |
4      |4        |me      |

Snowflake
 SELECT
CAST(strtokTable.col1 AS INTEGER) AS outkey,
CAST(INDEX AS INTEGER) AS tokennum,
CAST(VALUE AS VARCHAR) AS token
FROM
strtokTable,
table(STRTOK_SPLIT_TO_TABLE(strtokTable.col2, '-$')) AS testTable
ORDER BY outkey, tokennum;
outkey |tokennum | token  |
-------+---------+--------+
1      |1        |string  |
1      |2        |split   |
1      |3        |by      |
1      |4        |dollars |
4      |1        |hello   |
4      |2        |world   |
4      |3        |split   |
4      |4        |me      |


Known Issues

No known issues.

Related EWIs

No related EWIs.

SUBSTRING

Description

Extracts a substring from a given input string. For more information check SUBSTRING/SUBSTR.

SUBSTRING(string_expr FROM n1 [FOR n2])

SUBSTR(string_expr, n1 [, n2])


When the start position (n1) is less than one, SUBSTR_UDF is inserted instead, because Teradata and Snowflake interpret non-positive start positions differently.
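
A minimal sketch of the semantics the UDF is assumed to reproduce (inferred from the sample pattern below, not taken from the UDF source): in Teradata, a start position below one consumes part of the requested length, so the result is equivalent to taking n1 + n2 - 1 characters from position 1, or an empty string when that length is not positive.

 -- Snowflake sketch of the assumed behavior of SUBSTR('Hello World!', -2, 6)
SELECT SUBSTR('Hello World!', 1, GREATEST(-2 + 6 - 1, 0));  -- returns 'Hel'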

Sample Source Patterns

SUBSTRING transformation

Teradata
 SELECT SUBSTR('Hello World!', 2, 6),
SUBSTR('Hello World!', -2, 6),
SUBSTRING('Hello World!' FROM 2 FOR 6),
SUBSTRING('Hello World!' FROM -2 FOR 6);
COLUMN1 |COLUMN2 |COLUMN3 | COLUMN4 |
--------+--------+--------+---------+
ello W  |Hel     |ello W  |Hel      |

Snowflake
 SELECT
SUBSTR('Hello World!', 2, 6),
PUBLIC.SUBSTR_UDF('Hello World!', -2, 6),
SUBSTRING('Hello World!', 2, 6),
PUBLIC.SUBSTR_UDF('Hello World!', -2, 6);
COLUMN1 |COLUMN2 |COLUMN3 | COLUMN4 |
--------+--------+--------+---------+
ello W  |Hel     |ello W  |Hel      |


Related EWIs

No related EWIs.

TD_UNPIVOT

Note

Some parts in the output code are omitted for clarity reasons.

Description

The unpivot functionality transforms columns of the specified table into rows. TD_UNPIVOT in Teradata can unpivot multiple columns at once, while Snowflake UNPIVOT can only unpivot a single column. For more information see TD_UNPIVOT.

[TD_SYSFNLIB.] TD_UNPIVOT (
  ON { tableName | ( query_expression ) }
  USING VALUE_COLUMNS ( 'value_columns_value' [,...] )
  UNPIVOT_COLUMN ( 'unpivot_column_value' )
  COLUMN_LIST ( 'column_list_value' [,...] )
  [ COLUMN_ALIAS_LIST ( 'column_alias_list_value' [,...] )
      INCLUDE_NULLS ( { 'No' | 'Yes' } )
  ]
)


The following transformation generates a Snowflake SQL query that unpivots multiple columns at the same time, matching the Teradata behavior.

Sample Source Patterns

Setup data

Teradata
 CREATE TABLE superunpivottest (
	myKey INTEGER NOT NULL PRIMARY KEY,
	firstSemesterIncome DECIMAL(10,2),
	secondSemesterIncome DECIMAL(10,2),
	firstSemesterExpenses DECIMAL(10,2),
	secondSemesterExpenses DECIMAL(10,2)
);

INSERT INTO superUnpivottest VALUES (2020, 15440, 25430.57, 10322.15, 12355.36);
INSERT INTO superUnpivottest VALUES (2018, 18325.25, 25220.65, 15560.45, 15680.33);
INSERT INTO superUnpivottest VALUES (2019, 23855.75, 34220.22, 14582.55, 24122);
Snowflake
 CREATE OR REPLACE TABLE superunpivottest (
	myKey INTEGER NOT NULL PRIMARY KEY,
	firstSemesterIncome DECIMAL(10,2),
	secondSemesterIncome DECIMAL(10,2),
	firstSemesterExpenses DECIMAL(10,2),
	secondSemesterExpenses DECIMAL(10,2)
)
COMMENT = '{"origin":"sf_sc","name":"snowconvert","version":{"major":1, "minor":0},{"attributes":{"component":"teradata"}}'
;

INSERT INTO superUnpivottest
VALUES (2020, 15440, 25430.57, 10322.15, 12355.36);

INSERT INTO superUnpivottest
VALUES (2018, 18325.25, 25220.65, 15560.45, 15680.33);

INSERT INTO superUnpivottest
VALUES (2019, 23855.75, 34220.22, 14582.55, 24122);

TD_UNPIVOT transformation

Teradata
 SELECT * FROM
 TD_UNPIVOT(
 	ON superunpivottest
 	USING
 	VALUE_COLUMNS('Income', 'Expenses')
 	UNPIVOT_COLUMN('Semester')
 	COLUMN_LIST('firstSemesterIncome, firstSemesterExpenses', 'secondSemesterIncome, secondSemesterExpenses')
    COLUMN_ALIAS_LIST('First', 'Second')
 )X ORDER BY mykey, Semester;
myKey |Semester |Income   | Expenses |
------+---------+---------+----------+
2018  |First    |18325.25 |15560.45  |
2018  |Second   |25220.65 |15680.33  |
2019  |First    |23855.75 |14582.55  |
2019  |Second   |34220.22 |24122.00  |
2020  |First    |15440.00 |10322.15  |
2020  |Second   |25430.57 |12355.36  |


Snowflake
 SELECT
 * FROM
 !!!RESOLVE EWI!!! /*** SSC-EWI-TD0061 - TD_UNPIVOT TRANSFORMATION REQUIRES COLUMN INFORMATION THAT COULD NOT BE FOUND, COLUMNS MISSING IN RESULT ***/!!!
 (
  SELECT
   TRIM(GET_IGNORE_CASE(OBJECT_CONSTRUCT('FIRSTSEMESTERINCOME', 'First', 'FIRSTSEMESTEREXPENSES', 'First', 'SECONDSEMESTERINCOME', 'Second', 'SECONDSEMESTEREXPENSES', 'Second'), Semester), '"') AS Semester,
   Income,
   Expenses
  FROM
   superunpivottest UNPIVOT(Income FOR Semester IN (
    firstSemesterIncome,
    secondSemesterIncome
   )) UNPIVOT(Expenses FOR Semester1 IN (
    firstSemesterExpenses,
    secondSemesterExpenses
   ))
  WHERE
   Semester = 'FIRSTSEMESTERINCOME'
   AND Semester1 = 'FIRSTSEMESTEREXPENSES'
   OR Semester = 'SECONDSEMESTERINCOME'
   AND Semester1 = 'SECONDSEMESTEREXPENSES'
 ) X ORDER BY mykey, Semester;
myKey |Semester |Income   | Expenses |
------+---------+---------+----------+
2018  |First    |18325.25 |15560.45  |
2018  |Second   |25220.65 |15680.33  |
2019  |First    |23855.75 |14582.55  |
2019  |Second   |34220.22 |24122.00  |
2020  |First    |15440.00 |10322.15  |
2020  |Second   |25430.57 |12355.36  |


Known Issues

  1. TD_UNPIVOT with INCLUDE_NULLS clause set to YES is not supported

    The Snowflake UNPIVOT function used in the transformation always ignores null values, so the user will be warned that the INCLUDE_NULLS clause is not supported when it is set to 'Yes'.

  2. Table information is required to correctly transform the function

    SnowConvert needs the names of the columns used in the TD_UNPIVOT function. If the column list is not included in the function's query_expression but the name of the unpivoted table is provided, SnowConvert will try to retrieve the column names from the table definition. If the names cannot be found, the user is warned that the resulting query may be missing columns in the result; a way to provide the column information explicitly is sketched after this list.
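
A hypothetical variant of the transformation example above that lists the columns explicitly in the ON clause, so SnowConvert does not need to rely on the table definition (shown only to illustrate the pattern, not verified SnowConvert input):

 SELECT * FROM
 TD_UNPIVOT(
 	ON (SELECT myKey, firstSemesterIncome, secondSemesterIncome,
 	           firstSemesterExpenses, secondSemesterExpenses
 	    FROM superunpivottest)
 	USING
 	VALUE_COLUMNS('Income', 'Expenses')
 	UNPIVOT_COLUMN('Semester')
 	COLUMN_LIST('firstSemesterIncome, firstSemesterExpenses', 'secondSemesterIncome, secondSemesterExpenses')
    COLUMN_ALIAS_LIST('First', 'Second')
 )X ORDER BY mykey, Semester;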

Related EWIs

  1. SSC-EWI-TD0061: TD_UNPIVOT transformation requires column information that could not be found, columns missing in result.

TO_CHAR

Description

The TO_CHAR function casts a DateTime or numeric value to a string. For more information check TO_CHAR(Numeric) and TO_CHAR(DateTime).

-- Numeric version
[TD_SYSFNLIB.]TO_CHAR(numeric_expr [, format_arg [, nls_param]])

-- DateTime version
[TD_SYSFNLIB.]TO_CHAR(dateTime_expr [, format_arg])


Both Snowflake and Teradata have their own version of the TO_CHAR function; however, Teradata supports many format elements that are not natively supported by Snowflake. To cover these elements, SnowConvert combines Snowflake built-in functions and custom UDFs into a concatenation expression that produces the same string as the original Teradata TO_CHAR call.

Sample Source Patterns

TO_CHAR(DateTime) transformation

Teradata
 SELECT 
TO_CHAR(date '2012-12-23'),
TO_CHAR(date '2012-12-23', 'DS'),
TO_CHAR(date '2012-12-23', 'DAY DD, MON YY');
COLUMN1    | COLUMN2    | COLUMN3           |
-----------+------------+-------------------+
2012/12/23 | 12/23/2012 | SUNDAY 23, DEC 12 |

Snowflake
 SELECT
TO_CHAR(date '2012-12-23') /*** SSC-FDM-TD0029 - SNOWFLAKE SUPPORTED FORMATS FOR TO_CHAR DIFFER FROM TERADATA AND MAY FAIL OR HAVE DIFFERENT BEHAVIOR ***/,
TO_CHAR(date '2012-12-23', 'MM/DD/YYYY') /*** SSC-FDM-TD0029 - SNOWFLAKE SUPPORTED FORMATS FOR TO_CHAR DIFFER FROM TERADATA AND MAY FAIL OR HAVE DIFFERENT BEHAVIOR ***/,
PUBLIC.DAYNAME_LONG_UDF(date '2012-12-23', 'uppercase') || TO_CHAR(date '2012-12-23', ' DD, ') || PUBLIC.MONTH_SHORT_UDF(date '2012-12-23', 'uppercase') || TO_CHAR(date '2012-12-23', ' YY') /*** SSC-FDM-TD0029 - SNOWFLAKE SUPPORTED FORMATS FOR TO_CHAR DIFFER FROM TERADATA AND MAY FAIL OR HAVE DIFFERENT BEHAVIOR ***/;
COLUMN1    | COLUMN2    | COLUMN3           |
-----------+------------+-------------------+
2012/12/23 | 12/23/2012 | SUNDAY 23, DEC 12 |


TO_CHAR(Numeric) transformation

Teradata
 SELECT
TO_CHAR(1255.495),
TO_CHAR(1255.495, '9.9EEEE'),
TO_CHAR(1255.495, 'SC9999.9999', 'nls_iso_currency = ''EUR''');
COLUMN1  | COLUMN2 | COLUMN3       |
---------+---------+---------------+
1255.495 | 1.3E+03 | +EUR1255.4950 |

Snowflake
 SELECT
TO_CHAR(1255.495) /*** SSC-FDM-TD0029 - SNOWFLAKE SUPPORTED FORMATS FOR TO_CHAR DIFFER FROM TERADATA AND MAY FAIL OR HAVE DIFFERENT BEHAVIOR ***/,
TO_CHAR(1255.495, '9.0EEEE') /*** SSC-FDM-TD0029 - SNOWFLAKE SUPPORTED FORMATS FOR TO_CHAR DIFFER FROM TERADATA AND MAY FAIL OR HAVE DIFFERENT BEHAVIOR ***/,
PUBLIC.INSERT_CURRENCY_UDF(TO_CHAR(1255.495, 'S9999.0000'), 2, 'EUR') /*** SSC-FDM-TD0029 - SNOWFLAKE SUPPORTED FORMATS FOR TO_CHAR DIFFER FROM TERADATA AND MAY FAIL OR HAVE DIFFERENT BEHAVIOR ***/;
COLUMN1  | COLUMN2 | COLUMN3       |
---------+---------+---------------+
1255.495 | 1.3E+03 | +EUR1255.4950 |


Known Issues

1. Formats with different or unsupported behaviors

Teradata offers an extensive list of format elements that may show different behavior in Snowflake after the transformation of the TO_CHAR function. For the list of elements with different or unsupported behavior, check SSC-FDM-TD0029.

Related EWIs

  1. SSC-FDM-TD0029: Snowflake supported formats for TO_CHAR differ from Teradata and may fail or have different behavior.

XMLAGG

Description

Constructs an XML value by performing an aggregation of multiple rows. For more information check XMLAGG.

XMLAGG (
  XML_value_expr
  [ ORDER BY order_by_spec [,...] ]
  [ RETURNING { CONTENT | SEQUENCE } ]
)

order_by_spec := sort_key [ ASC | DESC ] [ NULLS { FIRST | LAST } ]


Sample Source Patterns

Setup data

Teradata
 create table orders (
	o_orderkey int, 
	o_totalprice float);

insert into orders values (1,500000);
insert into orders values (2,100000);
insert into orders values (3,600000);
insert into orders values (4,700000);
Snowflake
 CREATE OR REPLACE TABLE orders (
	o_orderkey int,
	o_totalprice float)
COMMENT = '{"origin":"sf_sc","name":"snowconvert","version":{"major":1, "minor":0},{"attributes":{"component":"teradata"}}'
;

INSERT INTO orders
VALUES (1,500000);

INSERT INTO orders
VALUES (2,100000);

INSERT INTO orders
VALUES (3,600000);

INSERT INTO orders
VALUES (4,700000);

XMLAGG transformation

Teradata
 select 
    xmlagg(o_orderkey order by o_totalprice desc) (varchar(10000))
from orders
where o_totalprice > 5;
COLUMN1 |
--------+
4 3 1 2 |

Snowflake
 SELECT
    LEFT(TO_VARCHAR(LISTAGG ( o_orderkey, ' ')
    WITHIN GROUP(
 order by o_totalprice DESC NULLS LAST)), 10000)
    from
    orders
    where o_totalprice > 5;
COLUMN1 |
--------+
4 3 1 2 |


Known Issues

1. The RETURNING clause is currently not supported.

The user will be warned that the translation of the RETURNING clause will be added in the future; a hypothetical query that would produce this warning is shown below.
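
A hypothetical query that would hit this limitation, based on the setup data and syntax shown above (illustration only):

 -- Teradata: the RETURNING CONTENT clause has no translation yet
select
    xmlagg(o_orderkey order by o_totalprice desc returning content) (varchar(10000))
from orders
where o_totalprice > 5;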

Related EWIs

No related EWIs.