SnowConvert: Redshift Functions

System Functions

IDENTITY

Description

The IDENTITY function is a system function that operates on a specified column of a table to determine the initial value for the identity. If the initial value is not available, it defaults to the value provided in the function. It is translated to a sequence in Snowflake.

Grammar Syntax

 "identity"(oid_id, oid_table_id, default)

Note

This function is no longer supported in Redshift. It uses the default value to define the identity and behaves like a standard identity column.

Sample Source Patterns

Input Code:

 CREATE TABLE IF NOT EXISTS table_test
(
    id integer,
    inventory_combo BIGINT  DEFAULT "identity"(850178, 0, '5,3'::text)
);

INSERT INTO table_test (id) VALUES
    (1),
    (2),
    (3),
    (4);

SELECT * FROM table_test;
| id | inventory_combo |
|----|-----------------|
| 1  | 5               |
| 2  | 8               |
| 3  | 11              |
| 4  | 14              |

Output Code:

 CREATE TABLE IF NOT EXISTS table_test
(
    id integer,
    inventory_combo BIGINT IDENTITY(5,3) ORDER
)
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": {  "major": 0,  "minor": 0,  "patch": "0" }, "attributes": {  "component": "redshift",  "convertedOn": "11/13/2024",  "domain": "test" }}';

INSERT INTO table_test (id) VALUES
    (1),
    (2),
    (3),
    (4);

SELECT * FROM
    table_test;
| id | inventory_combo |
|----|-----------------|
| 1  | 5               |
| 2  | 8               |
| 3  | 11              |
| 4  | 14              |

String Functions

BTRIM

Description

The BTRIM function trims a string by removing leading and trailing blanks or by removing leading and trailing characters that match an optional specified string. (RedShift SQL Language Reference BTRIM function)

Grammar Syntax

 BTRIM(string [, trim_chars ] )

This function is fully supported by Snowflake.

Sample Source Patterns

Input Code:

 SELECT '     abc    ' AS untrim, btrim('     abc    ') AS trim;

SELECT 'setuphistorycassettes' AS untrim, btrim('setuphistorycassettes', 'tes') AS trim;
| untrim | trim |
|--------|------|
|      abc     | abc |

| untrim | trim |
|--------|------|
| setuphistorycassettes | uphistoryca |

Output Code:

 SELECT '     abc    ' AS untrim,
TRIM('     abc    ') AS trim;

SELECT 'setuphistorycassettes' AS untrim,
TRIM('setuphistorycassettes', 'tes') AS trim;
| untrim | trim |
|--------|------|
|      abc     | abc |

| untrim | trim |
|--------|------|
| setuphistorycassettes | uphistoryca |

Known Issues

No issues were found.

Related EWIs

There are no related EWIs.

CONCAT

Description

The CONCAT function concatenates two expressions and returns the resulting expression. (RedShift SQL Language References CONCAT function).

Grammar Syntax

 CONCAT ( expression1, expression2 )

This function is fully supported by Snowflake.

Sample Source Patterns

Input Code:

 CREATE TABLE test_concat_function (
    col1 varchar
 );

INSERT INTO test_concat_function
VALUES ('name');

SELECT  CONCAT(col1, ' TEST '),
        "CONCAT"(col1, ' TEST '),
        col1 || ' TEST ' 
FROM test_concat_function;
| CONCAT(col1, ' TEST ') | "CONCAT"(col1, ' TEST ') | col1 \|\| ' TEST ' |
|---|---|---|
| name TEST | name TEST | name TEST |

Output Code:

 CREATE TABLE test_concat_function (
    col1 varchar
 )
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": {  "major": 0,  "minor": 0,  "patch": "0" }, "attributes": {  "component": "redshift",  "convertedOn": "11/20/2024",  "domain": "test" }}';

INSERT INTO test_concat_function
VALUES ('name');

SELECT  CONCAT(col1, ' TEST '),
        CONCAT(col1, ' TEST '),
        col1 || ' TEST '
FROM
        test_concat_function;
| CONCAT(NAME, ' TEST ') | "CONCAT"(NAME, ' TEST ') | NAME \|\| ' TEST ' |
|---|---|---|
| name TEST | name TEST | name TEST |

Related EWIs

There are no related EWIs.

LEFT and RIGHT

Description

These functions return the specified number of leftmost or rightmost characters from a character string. (SQL Language References LEFT and RIGHT function).

Grammar Syntax

 LEFT( string,  integer )

RIGHT( string,  integer )

This function is fully supported by Snowflake.

Sample Source Patterns

Input Code:

 SELECT LEFT('Chicago', 3) AS left_3, RIGHT('Chicago', 3) AS right_3;
| left_3 | right_3 |
|--------|---------|
| Chi    | ago     |

Output Code:

 SELECT LEFT('Chicago', 3) AS left_3, RIGHT('Chicago', 3) AS right_3;
| left_3 | right_3 |
|--------|---------|
| Chi    | ago     |

Known Issues

In Snowflake and Redshift, the LEFT and RIGHT functions handle negative values differently:

  • Snowflake: Returns an empty string when the second argument is negative.

  • Redshift: Raises a runtime error with negative values.
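
Because of this, a Redshift error can pass silently after conversion. The difference can be seen in isolation with a minimal sketch (illustrative only, not SnowConvert output; the NULL guard is an assumption about the desired behavior):

```sql
-- Snowflake: a negative length returns '' instead of raising an error
SELECT LEFT('Chicago', -3) AS left_neg;

-- Optional guard if a NULL is preferred over a silent empty string:
SELECT IFF(-3 < 0, NULL, LEFT('Chicago', -3)) AS guarded_left;
```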

Related EWIs

There are no known issues.

LOWER

Description

The LOWER function converts a string to lowercase. (RedShift SQL Language Reference Lower function)

Grammar Syntax

 LOWER ( string )

This function is fully supported by Snowflake.

Sample Source Patterns

Input Code:

 CREATE TABLE test_lower_function (
    col1 varchar
 );

INSERT INTO test_lower_function
VALUES ('test'),
       ('Test'),
       ('TEST');

SELECT  LOWER(COL1),
        "LOWER"(COL1),
        LOWER('vaLues')
FROM test_lower_function;
| LOWER(COL1) | "LOWER"(COL1) | LOWER('vaLues') |
|---|---|---|
| test | test | values |
| test | test | values |
| test | test | values |

Output Code:

 CREATE TABLE test_lower_function (
    col1 varchar
 )
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": {  "major": 0,  "minor": 0,  "patch": "0" }, "attributes": {  "component": "redshift",  "convertedOn": "11/21/2024",  "domain": "test" }}';

INSERT INTO test_lower_function
VALUES ('test'),
       ('Test'),
       ('TEST');

SELECT  LOWER(COL1),
        LOWER(COL1),
        LOWER('vaLues')
FROM
        test_lower_function;
| LOWER(COL1) | "LOWER"(COL1) | LOWER('vaLues') |
|---|---|---|
| test | test | values |
| test | test | values |
| test | test | values |

Related EWIs

There are no related EWIs.

QUOTE_IDENT

Description

The QUOTE_IDENT function returns the specified string as a string with a leading double quotation mark and a trailing double quotation mark. (RedShift SQL Language References QUOTE_IDENT function).

To replicate the functionality of this function, it is converted to a CONCAT expression.

Grammar Syntax

 QUOTE_IDENT(string)

This function is fully supported by Snowflake.

Sample Source Patterns

Input Code:

 SELECT 
    QUOTE_IDENT('"CAT"'),
    "QUOTE_IDENT"('Foo bar') ;
| quote_ident | quote_ident |
|---|---|
| """CAT""" | "Foo bar" |

Output Code:

 SELECT
    CONCAT('"', REPLACE('"CAT"', '"', '""'), '"'),
    CONCAT('"', REPLACE('Foo bar', '"', '""'), '"');
| quote_ident | quote_ident |
|---|---|
| """CAT""" | "Foo bar" |

Known Issues

No issues were found.

Related EWIs

There are no related EWIs.

REGEXP_REPLACE

Description

This function searches a string for a regular expression pattern and replaces every occurrence of the pattern with the specified string. (SQL Language References REGEXP_REPLACE function).

Grammar Syntax

 REGEXP_REPLACE( source_string, pattern [, replace_string [ , position [, parameters ] ] ] )

Warning

This function is partially supported by Snowflake.

Sample Source Patterns

Input Code:

 CREATE TABLE my_table (col1 varchar);
SELECT regexp_replace('the fox', 'FOX', 'quick brown fox', 1, 'ip');
SELECT 
regexp_replace(d, '[hidden]'),
regexp_replace(d, f)
FROM 
(SELECT 
regexp_replace('pASswd7','(?=[^ ]*[a-z])(?=[^ ]*[0-9])[^ ]+') as d, 'passwd7' as f);
SELECT regexp_replace(col1, 'passwd7', '[hidden]', 1, 'ip') as rp from my_table;
| regexp_replace | regexp_replace | regexp_replace |
|---|---|---|
| the quick brown fox | pASsw7 | pASsw7 |

Output Code:

 CREATE TABLE my_table (col1 varchar)
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": {  "major": 0,  "minor": 0,  "patch": "0" }, "attributes": {  "component": "redshift",  "convertedOn": "00/00/0000",  "domain": "test" }}';
SELECT
REGEXP_REPLACE('the fox', 'FOX', 'quick brown fox', 1, 0, 'i');
SELECT
REGEXP_REPLACE(d, '[hidden]'),
REGEXP_REPLACE(d,
                  --** SSC-FDM-0032 - PARAMETER 'regex_string' IS NOT A LITERAL VALUE, TRANSFORMATION COULD NOT BE FULLY APPLIED **
                  f)
FROM
(SELECT
REGEXP_REPLACE('pASswd7',
                         !!!RESOLVE EWI!!! /*** SSC-EWI-0009 - regexp_replace FUNCTION ONLY SUPPORTS POSIX REGULAR EXPRESSIONS ***/!!!
                         '(?=[^ ]*[a-z])(?=[^ ]*[0-9])[^ ]+') as d, 'passwd7' as f);
SELECT
REGEXP_REPLACE(col1, 'passwd7', '[hidden]', 1, 0, 'i') as rp from
my_table;
| REGEXP_REPLACE | RESULT |
|---|---|
| the quick brown fox | 100048 (2201B): Invalid regular expression: '(?=[^ ]*[a-z])(?=[^ ]*[0-9])[^ ]+', no argument for repetition operator: ? |

Known Issues

  1. This function includes a parameters argument that allows the pattern to be interpreted using the Perl Compatible Regular Expression (PCRE) dialect, represented by the p value. This value is removed during conversion to avoid issues in the resulting code.
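
The removal of the p flag can be seen in isolation with the first sample above; note that the converted Snowflake call also gains an occurrence argument (0 meaning "replace all") before the remaining flags:

```sql
-- Redshift: position 1, flags 'i' (case-insensitive) and 'p' (PCRE)
SELECT REGEXP_REPLACE('the fox', 'FOX', 'quick brown fox', 1, 'ip');

-- Snowflake: the 'p' flag is dropped; only POSIX-compatible flags remain
SELECT REGEXP_REPLACE('the fox', 'FOX', 'quick brown fox', 1, 0, 'i');
```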

Related EWIs

  • SSC-EWI-0009: Regexp_Substr Function only supports POSIX regular expressions.

  • SSC-FDM-0032: Parameter is not a literal value, transformation could not be fully applied

  • SSC-FDM-PG0011: The use of the COLLATE column constraint has been disabled for this pattern-matching condition.

REPLACE

Description

Replaces all occurrences of a set of characters within an existing string with other specified characters. (Redshift SQL Language Reference REPLACE function).

This function is fully supported in Snowflake.

For more information about quoted identifiers in functions, click here.

Grammar Syntax

 REPLACE(string, old_chars, new_chars)

Sample Source Patterns

Input Code:

 SELECT REPLACE('Hello World', ' ', '_') AS modified_string,
       REPLACE('Apple, Orange, Banana, Grape', ',', '') AS fruits,
       REPLACE('http://example.com/path/to/resource', '/path/to', '/new-path') AS updated_url;
| MODIFIED_STRING | FRUITS | UPDATED_URL |
|---|---|---|
| Hello_World | Apple Orange Banana Grape | http://example.com/new-path/resource |

Output Code:
 SELECT REPLACE('Hello World', ' ', '_') AS modified_string,
       REPLACE('Apple, Orange, Banana, Grape', ',', '') AS fruits,
       REPLACE('http://example.com/path/to/resource', '/path/to', '/new-path') AS updated_url;
| MODIFIED_STRING | FRUITS | UPDATED_URL |
|---|---|---|
| Hello_World | Apple Orange Banana Grape | http://example.com/new-path/resource |

Known Issues

No issues were found.

Related EWIs

There are no related EWIs.

SPLIT_PART

Description

Splits a string on the specified delimiter and returns the part at the specified position. (SQL Language References SPLIT_PART function).

Grammar Syntax

 SPLIT_PART(string, delimiter, position)

This function is fully supported by Snowflake.

Sample Source Patterns

Input Code:

 select split_part('abc$def$ghi','$',2) AS split_1,
split_part('abc$def$ghi','$',4) AS split_2,
split_part('abc$def$ghi','#',1) AS split_3;
| split_1 | split_2 | split_3 |
|---------|---------|---------|
| def     |         | abc$def$ghi |

Output Code:

 select split_part('abc$def$ghi','$',2) AS split_1,
split_part('abc$def$ghi','$',4) AS split_2,
split_part('abc$def$ghi','#',1) AS split_3;
| split_1 | split_2 | split_3 |
|---------|---------|---------|
| def     |         | abc$def$ghi |

Known Issues

There is a difference in how SPLIT_PART behaves when used with case-insensitive collations (CASE_INSENSITIVE or en-ci) in Snowflake vs. Redshift.

Related EWIs

There are no related EWIs.

SUBSTRING

Description

Returns the subset of a string based on the specified start position. (RedShift SUBSTRING function).

Grammar Syntax

 SUBSTRING(character_string FROM start_position [ FOR number_characters ] )

SUBSTRING(character_string, start_position, number_characters )

SUBSTRING(binary_expression, start_byte, number_bytes )

SUBSTRING(binary_expression, start_byte )

Warning

This function is partially supported in Snowflake.

Sample Source Patterns

Input Code:

 SELECT SUBSTRING('machine' FROM 3 for 2) AS machien_3_2, SUBSTRING('machine',1,4) AS machine_1_4;

SELECT SUBSTRING('12345'::varbyte, 2, 4) AS substring_varbyte;
| machien_3_2 | machine_1_4 |
|---|---|
| ch | mach |

Output Code:

 SELECT SUBSTRING('machine', 3, 2) AS machien_3_2, SUBSTRING('machine',1,4) AS machine_1_4;

SELECT SUBSTRING('12345':: BINARY, 2, 4) !!!RESOLVE EWI!!! /*** SSC-EWI-RS0006 - THE BEHAVIOR OF THE SUBSTRING FUNCTION MAY DIFFER WHEN APPLIED TO BINARY DATA. ***/!!! AS substring_varbyte;
| machien_3_2 | machine_1_4 |
|---|---|
| ch | mach |

Known Issues

When the start_position in Redshift is 0 or less, the SUBSTRING function performs a mathematical operation (start_position + number_characters). If the result is 0 or less, SUBSTRING returns an empty string. In Snowflake, the behavior for start_position when it is 0 or less differs, leading to different results.
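
Under that rule, the Redshift result can be approximated in Snowflake with an explicit guard. This is only a sketch of the rule described above (the column names and sample values are placeholders), not a rewrite that SnowConvert performs:

```sql
-- Emulate Redshift's empty-string rule for non-positive start positions
SELECT CASE
         WHEN start_pos <= 0 AND start_pos + num_chars <= 0 THEN ''
         ELSE SUBSTR(str, GREATEST(start_pos, 1), num_chars)
       END AS redshift_like
FROM (SELECT 'caterpillar' AS str, 0 AS start_pos, 6 AS num_chars);
```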

Related EWIs

There are no related EWIs.

TRIM

Description

The TRIM function trims a string by removing leading and trailing blanks or specified characters. (RedShift SQL Language Reference TRIM function)

In Redshift, it is possible to specify where to perform a trim operation using keywords (BOTH, LEADING, or TRAILING). This functionality can be replicated in Snowflake by using the TRIM, LTRIM, and RTRIM functions, respectively.

Grammar Syntax

 TRIM( [ BOTH | LEADING | TRAILING ] [trim_chars FROM ] string )

This function is fully supported by Snowflake.

Sample Source Patterns

Input Code:

 SELECT TRIM('    dog    ') AS SimpleTrim;

SELECT TRIM(LEADING '"' FROM'"dog"') AS LeadingTrim;

SELECT TRIM(TRAILING '"' FROM'"dog"') AS TrailingTrim;

SELECT TRIM('x' FROM 'xxxHello Worldxxx') AS FromTrim;
| SimpleTrim |
|---|
| dog |

| LeadingTrim |
|---|
| dog" |

| TrailingTrim |
|---|
| "dog |

| FromTrim |
|---|
| Hello World |

Output Code:

 SELECT TRIM('    dog    ') AS SimpleTrim;

SELECT
LTRIM('"dog"', '"') AS LeadingTrim;

SELECT
RTRIM('"dog"', '"') AS TrailingTrim;

SELECT
TRIM('xxxHello Worldxxx', 'x') AS FromTrim;
| SimpleTrim |
|---|
| dog |

| LeadingTrim |
|---|
| dog" |

| TrailingTrim |
|---|
| "dog |

| FromTrim |
|---|
| Hello World |

Known Issues

No issues were found.

Related EWIs

There are no related EWIs.

UPPER

Description

The UPPER function converts a string to uppercase. (RedShift SQL Language Reference Upper function)

Grammar Syntax

 UPPER ( string )

This function is fully supported by Snowflake.

Sample Source Patterns

Input Code:

 CREATE TABLE test_upper_function (
    col1 varchar
 );

INSERT INTO test_upper_function
VALUES ('test'),
       ('Test'),
       ('TEST');

SELECT  UPPER(COL1),
        "UPPER"(COL1),
        UPPER('vaLues') 
FROM test_upper_function;
| UPPER(COL1) | "UPPER"(COL1) | UPPER('vaLues') |
|---|---|---|
| TEST | TEST | VALUES |
| TEST | TEST | VALUES |
| TEST | TEST | VALUES |

Output Code:

 CREATE TABLE test_upper_function (
    col1 varchar
 )
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": {  "major": 0,  "minor": 0,  "patch": "0" }, "attributes": {  "component": "redshift",  "convertedOn": "11/21/2024",  "domain": "test" }}';

INSERT INTO test_upper_function
VALUES ('test'),
       ('Test'),
       ('TEST');

SELECT  UPPER(COL1),
        UPPER(COL1),
        UPPER('vaLues')
FROM
        test_upper_function;
| UPPER(COL1) | "UPPER"(COL1) | UPPER('vaLues') |
|---|---|---|
| TEST | TEST | VALUES |
| TEST | TEST | VALUES |
| TEST | TEST | VALUES |

Related EWIs

There are no related EWIs.

Aggregate Functions

AVG

Description

The AVG function returns the average (arithmetic mean) of the input expression values.
(Redshift SQL Language Reference AVG function)

This function is fully supported by Snowflake.

For more information about quoted identifiers in functions, click here.

Grammar Syntax

 AVG ( [ DISTINCT | ALL ] expression )

Sample Source Patterns

Setup data

 CREATE TABLE example_table (
    my_smallint_column SMALLINT,
    my_integer_column INTEGER,
    my_bigint_column BIGINT,
    my_numeric_column NUMERIC,
    my_decimal_column DECIMAL,
    my_real_column REAL,
    my_double_precision_column DOUBLE PRECISION,
    my_super_column SUPER
);


INSERT INTO example_table (
    my_smallint_column,
    my_integer_column,
    my_bigint_column,
    my_numeric_column,
    my_decimal_column,
    my_real_column,
    my_double_precision_column,
    my_super_column
)
VALUES
    (1, 100, 10000000000, 123.45, 678.90, 3.14, 2.71828, 123),
    (2, 200, 20000000000, 234.56, 789.01, 2.71, 3.14159, 456),
    (3, 300, 30000000000, 345.67, 890.12, 1.41, 1.61803, 789),
    (4, 400, 40000000000, 456.78, 901.23, 1.61, 1.41421, 101112),
    (5, 500, 50000000000, 567.89, 123.45, 2.17, 3.14159, 131415);
Copy
 CREATE TABLE example_table (
    my_smallint_column SMALLINT,
    my_integer_column INTEGER,
    my_bigint_column BIGINT,
    my_numeric_column NUMERIC,
    my_decimal_column DECIMAL,
    my_real_column REAL,
    my_double_precision_column DOUBLE PRECISION,
    my_super_column VARIANT
)
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": {  "major": 0,  "minor": 0,  "patch": "0" }, "attributes": {  "component": "redshift",  "convertedOn": "11/27/2024",  "domain": "test" }}';


INSERT INTO example_table (
    my_smallint_column,
    my_integer_column,
    my_bigint_column,
    my_numeric_column,
    my_decimal_column,
    my_real_column,
    my_double_precision_column,
    my_super_column
)
VALUES
    (1, 100, 10000000000, 123.45, 678.90, 3.14, 2.71828, 123),
    (2, 200, 20000000000, 234.56, 789.01, 2.71, 3.14159, 456),
    (3, 300, 30000000000, 345.67, 890.12, 1.41, 1.61803, 789),
    (4, 400, 40000000000, 456.78, 901.23, 1.61, 1.41421, 101112),
    (5, 500, 50000000000, 567.89, 123.45, 2.17, 3.14159, 131415);
Input Code:
 SELECT
    AVG(DISTINCT my_smallint_column) AS type_smallint,
    AVG(ALL my_integer_column) AS type_integer,
    AVG(my_bigint_column) AS type_bigint,
    AVG(my_numeric_column) AS type_numeric,
    AVG(my_decimal_column) AS type_decimal,
    AVG(my_real_column) AS type_real,
    AVG(my_double_precision_column) AS type_double_precision,
    AVG(my_super_column) AS type_super
FROM example_table;
| type_smallint | type_integer | type_bigint | type_numeric | type_decimal | type_real | type_double_precision | type_super |
|---|---|---|---|---|---|---|---|
| 3 | 300 | 30000000000 | 345 | 676 | 2.2080000400543214 | 2.40674 | 46779 |

Output Code:
 SELECT
     AVG(DISTINCT my_smallint_column) AS type_smallint,
     AVG(ALL my_integer_column) AS type_integer,
     AVG(my_bigint_column) AS type_bigint,
     AVG(my_numeric_column) AS type_numeric,
     AVG(my_decimal_column) AS type_decimal,
     AVG(my_real_column) AS type_real,
     AVG(my_double_precision_column) AS type_double_precision,
     AVG(my_super_column) AS type_super
FROM example_table;
| type_smallint | type_integer | type_bigint | type_numeric | type_decimal | type_real | type_double_precision | type_super |
|---|---|---|---|---|---|---|---|
| 3.000000 | 300.000000 | 30000000000.000000 | 345.800000 | 676.400000 | 2.208 | 2.40674 | 46779 |

Note

AVG, depending on the data type, can behave differently in terms of rounding and formatting, which may result in different precision or decimal places when comparing Redshift with Snowflake.
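
If the Redshift-style result is needed, one mitigation is to cast or round the converted AVG explicitly. This is a sketch, not SnowConvert output, and the target type and scale are assumptions to be chosen per column:

```sql
-- Force an integer result, mirroring the Redshift output shown above
SELECT CAST(AVG(my_integer_column) AS INTEGER) AS type_integer
FROM example_table;

-- Or pin the result to a specific scale
SELECT ROUND(AVG(my_numeric_column), 2) AS type_numeric
FROM example_table;
```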

Related EWIs

There are no related EWIs.

COUNT

Description

The COUNT function counts the rows defined by the expression.
(Redshift SQL Language Reference COUNT function)

COUNT and APPROXIMATE COUNT are fully supported in Snowflake by COUNT and APPROX_COUNT_DISTINCT.

For more information about quoted identifiers in functions, click here.

Grammar Syntax

 COUNT ( * | expression )

COUNT ( [ DISTINCT | ALL ] expression )

APPROXIMATE COUNT ( DISTINCT expression )

Sample Source Patterns

Setup data

 CREATE TABLE sales (
    sale_id INT PRIMARY KEY,
    product_id INT,
    sale_amount DECIMAL(10, 2),
    sale_date DATE,
    customer_id INT
);

INSERT INTO sales (sale_id, product_id, sale_amount, sale_date, customer_id) VALUES
(1, 101, 100.00, '2024-01-01', 1001),
(2, 102, 200.00, '2024-01-01', 1002),
(3, 101, 150.00, '2024-01-02', 1001),
(4, 103, 250.00, '2024-01-02', 1003),
(5, 102, 300.00, '2024-01-03', 1002),
(6, 101, 200.00, '2024-01-03', 1004),
(7, 101, 120.00, '2024-01-04', 1001),
(8, 102, 180.00, '2024-01-04', 1005),
(9, 103, 300.00, '2024-01-05', 1003),
(10, 101, 130.00, '2024-01-05', 1006),
(10, 101, 130.00, '2024-01-05', 1006);
Input Code:
SELECT
    product_id,
    COUNT(sale_id) AS total_sales,
    "COUNT"(DISTINCT sale_id) AS total_sales_distinct,
    "count"(ALL sale_id) AS total_sales_all,
    COUNT(*) AS total_sales_asterisk,
    APPROXIMATE COUNT ( DISTINCT sale_id) AS aprroximate_count_total_sales
FROM
    sales
GROUP BY
    product_id
ORDER BY
    total_sales DESC;
| PRODUCT_ID | TOTAL_SALES | TOTAL_SALES_DISTINCT | TOTAL_SALES_ALL | TOTAL_SALES_ASTERISK | APRROXIMATE_COUNT_TOTAL_SALES |
|---|---|---|---|---|---|
| 101 | 6 | 5 | 6 | 6 | 5 |
| 102 | 3 | 3 | 3 | 3 | 3 |
| 103 | 2 | 2 | 2 | 2 | 2 |
Output Code:
 SELECT
    product_id,
    COUNT(sale_id) AS total_sales,
    COUNT(DISTINCT sale_id) AS total_sales_distinct,
    COUNT(ALL sale_id) AS total_sales_all,
    COUNT(*) AS total_sales_asterisk,
    APPROX_COUNT_DISTINCT ( DISTINCT sale_id) AS aprroximate_count_total_sales
FROM
    sales
GROUP BY
    product_id
ORDER BY
    total_sales DESC;
| PRODUCT_ID | TOTAL_SALES | TOTAL_SALES_DISTINCT | TOTAL_SALES_ALL | TOTAL_SALES_ASTERISK | APRROXIMATE_COUNT_TOTAL_SALES |
|---|---|---|---|---|---|
| 101 | 6 | 5 | 6 | 6 | 5 |
| 102 | 3 | 3 | 3 | 3 | 3 |
| 103 | 2 | 2 | 2 | 2 | 2 |

Known Issues

No issues were found.

Related EWIs

There are no related EWIs.

Math Functions

MAX

Description

The MAX function returns the maximum value in a set of rows. (RedShift SQL Language Reference MAX function).

Grammar Syntax

 MAX ( [ DISTINCT | ALL ] expression )

This function is fully supported by Snowflake.

Sample Source Patterns

Input Code:

 CREATE TABLE test_max_function (
    col1 INT
 );

INSERT INTO test_max_function
VALUES ('200'),
       ('20'),
       ('2'),
       ('2');

SELECT MAX(COL1), MAX(DISTINCT COL1), MAX(ALL COL1) FROM test_max_function;
| MAX(COL1) | MAX(DISTINCT COL1) | MAX(ALL COL1) |
|---|---|---|
| 200 | 200 | 200 |

Output Code:

 CREATE TABLE test_max_function (
    col1 INT
 )
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": {  "major": 0,  "minor": 0,  "patch": "0" }, "attributes": {  "component": "redshift",  "convertedOn": "11/20/2024",  "domain": "test" }}';

INSERT INTO test_max_function
VALUES ('200'),
       ('20'),
       ('2'),
       ('2');

SELECT MAX(COL1), MAX(DISTINCT COL1), MAX(ALL COL1) FROM
    test_max_function;
| MAX(COL1) | MAX(DISTINCT COL1) | MAX(ALL COL1) |
|---|---|---|
| 200 | 200 | 200 |

Related EWIs

There are no related EWIs.

MIN

Description

The MIN function returns the minimum value in a set of rows (RedShift SQL Language Reference MIN function).

Grammar Syntax

 MIN ( [ DISTINCT | ALL ] expression )

This function is fully supported by Snowflake.

Sample Source Patterns

Input Code:

 CREATE TABLE test_min_function (
    col1 INT
 );

INSERT INTO test_min_function
VALUES ('200'),
       ('20'),
       ('2'),
       ('2');

SELECT MIN(COL1), MIN(DISTINCT COL1), MIN(ALL COL1) FROM test_min_function;
| MIN(COL1) | MIN(DISTINCT COL1) | MIN(ALL COL1) |
|---|---|---|
| 2 | 2 | 2 |

Output Code:

 CREATE TABLE test_min_function (
    col1 INT
 )
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": {  "major": 0,  "minor": 0,  "patch": "0" }, "attributes": {  "component": "redshift",  "convertedOn": "11/20/2024",  "domain": "test" }}';

INSERT INTO test_min_function
VALUES ('200'),
       ('20'),
       ('2'),
       ('2');

SELECT MIN(COL1), MIN(DISTINCT COL1), MIN(ALL COL1) FROM
    test_min_function;
| MIN(COL1) | MIN(DISTINCT COL1) | MIN(ALL COL1) |
|---|---|---|
| 2 | 2 | 2 |

Related EWIs

There are no related EWIs.

NUMERIC

Description

The NUMERIC function is a system function that casts a string value into a numeric value.

This function is fully supported in Snowflake.

Grammar Syntax

 select "numeric"(VARCHAR);

Sample Source Patterns

Input Code:

 SELECT "numeric"('2024');
| "numeric" |
|---|
| 2024 |

Output Code:

 SELECT
TO_NUMERIC('2024');
| TO_NUMERIC |
|---|
| 2024 |

Related EWIs

There are no related EWIs.

ROUND

Description

The ROUND function rounds numbers to the nearest integer or decimal. (RedShift SQL Language Reference ROUND function).

Grammar Syntax

 ROUND(number [ , integer ] )

This function is fully supported by Snowflake.

Sample Source Patterns

Input Code:

 SELECT 28.05 AS score, ROUND(28.05) AS round_score;
SELECT 15.53969483712 AS comission, ROUND(15.53969483712, 2) AS round_comission;
| score | round_score |
|---|---|
| 28.05 | 28 |

| comission | round_comission |
|---|---|
| 15.53969483712 | 15.54 |

Output Code:

 SELECT 28.05 AS score, ROUND(28.05) AS round_score;
SELECT 15.53969483712 AS comission, ROUND(15.53969483712, 2) AS round_comission;
| score | round_score |
|---|---|
| 28.05 | 28 |

| comission | round_comission |
|---|---|
| 15.53969483712 | 15.54 |

Related EWIs

There are no related EWIs.

SUM

Description

The SUM function returns the sum of the input column or expression values. (RedShift SQL Language Reference SUM Function)

Grammar Syntax

 SUM ( [ DISTINCT | ALL ] expression )

This function is fully supported by Snowflake.

Sample Source Patterns

Input Code:

 CREATE TABLE test_sum_function (
    col1 INT
 );

INSERT INTO test_sum_function
VALUES ('200'),
       ('20'),
       ('2'),
       ('2');

SELECT SUM(COL1), SUM(DISTINCT COL1), SUM(ALL COL1) FROM test_sum_function;
| SUM(COL1) | SUM(DISTINCT COL1) | SUM(ALL COL1) |
|---|---|---|
| 224 | 222 | 224 |

Output Code:

 CREATE TABLE test_sum_function (
    col1 INT
 )
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": {  "major": 0,  "minor": 0,  "patch": "0" }, "attributes": {  "component": "redshift",  "convertedOn": "11/20/2024",  "domain": "test" }}';

INSERT INTO test_sum_function
VALUES ('200'),
       ('20'),
       ('2'),
       ('2');

SELECT SUM(COL1), SUM(DISTINCT COL1), SUM(ALL COL1) FROM
    test_sum_function;
| SUM(COL1) | SUM(DISTINCT COL1) | SUM(ALL COL1) |
|---|---|---|
| 224 | 222 | 224 |

Related EWIs

There are no related EWIs.

TRUNC

Description

The TRUNC function truncates numbers to the previous integer or decimal.

For more information, please refer to TRUNC function.

Grammar Syntax

 TRUNC(number [ , integer ])

This function is fully supported by Snowflake.

Sample Source Patterns

Input Code:

 CREATE TABLE test_trunc_function (
    col1 INT,
    col2 FLOAT
 );

INSERT INTO test_trunc_function
VALUES ('200','111.13'),
       ('20','111.133444'),
       ('2','111.1350');

SELECT TRUNC(col1, -1), TRUNC(col2, -1) FROM test_trunc_function;
| TRUNC(COL1, -1) | TRUNC(COL2, -1) |
|---|---|
| 200 | 110 |
| 20 | 110 |
| 0 | 110 |

Output Code:

 CREATE TABLE test_trunc_function (
    col1 INT,
    col2 FLOAT
 )
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": {  "major": 0,  "minor": 0,  "patch": "0" }, "attributes": {  "component": "redshift",  "convertedOn": "09/30/2024" }}';

INSERT INTO test_trunc_function
VALUES ('200','111.13'),
       ('20','111.133444'),
       ('2','111.1350');

SELECT TRUNC(col1, -1), TRUNC(col2, -1) FROM test_trunc_function;
| TRUNC(COL1, -1) | TRUNC(COL2, -1) |
|---|---|
| 200 | 110 |
| 20 | 110 |
| 0 | 110 |

Related EWIs

There are no related EWIs.

Data Type Formatting Functions

TO_DATE

Description

TO_DATE converts a date represented by a character string to a DATE data type. (Redshift SQL Language Reference TO_DATE function)

This function is fully supported in Snowflake.

For more information about quoted identifiers in functions, click here.

Grammar Syntax

 TO_DATE(string, format, [is_strict])

Sample Source Patterns

Input Code:

 SELECT TO_DATE('02 Oct 2001', 'DD Mon YYYY');
SELECT TO_DATE('20010631', 'YYYYMMDD', FALSE);
SELECT TO_DATE('20010631', 'YYYYMMDD', TRUE);
SELECT TO_DATE('1,993 12 23', 'Y,YYY MM DD');
SELECT TO_DATE(d, 'YYYY/MM/DD'),
       TO_DATE(d, f)
       FROM (SELECT '2001-01-01'::date as d, 'DD/MM/YYYY' as f);
| to_date | to_date | to_date |
|---|---|---|
| 2001-10-02 | 2001-07-01 | [22008] ERROR: date/time field date value out of range: 2001-6-31 |

Output Code:
 SELECT TO_DATE('02 Oct 2001', 'DD Mon YYYY');
SELECT
       TRY_TO_DATE(/*** SSC-FDM-RS0004 - INVALID DATES WILL CAUSE ERRORS IN SNOWFLAKE ***/ '20010631', 'YYYYMMDD');
SELECT TO_DATE('20010631', 'YYYYMMDD');
SELECT TO_DATE('1,993 12 23', 'Y,YYY MM DD') !!!RESOLVE EWI!!! /*** SSC-EWI-PG0005 - Y,YYY MM DD FORMAT MAY FAIL OR MAY HAVE A DIFFERENT BEHAVIOR IN SNOWFLAKE. ***/!!!;
SELECT TO_DATE(d, 'YYYY/MM/DD'),
       --** SSC-FDM-0032 - PARAMETER 'format_string' IS NOT A LITERAL VALUE, TRANSFORMATION COULD NOT BE FULLY APPLIED **
       TO_DATE(d, f)
       FROM (SELECT '2001-01-01'::date as d, 'DD/MM/YYYY' as f);
| to_date | to_date | to_date |
|---|---|---|
| 2001-10-02 | NULL | Can't parse '20010631' as date with format 'YYYYMMDD' |

Known Issues

  1. The query SELECT TO_DATE('20010631', 'YYYYMMDD') fails in Snowflake because June has only 30 days, and Snowflake’s TO_DATE does not adjust invalid dates automatically, unlike Redshift’s TO_DATE with is_strict set to false, which would convert it to July 1. To avoid errors with invalid date strings, you can use TRY_TO_DATE, which returns NULL if the conversion fails. This allows for smoother query execution and easier identification of invalid dates.

Related EWIs

  • SSC-FDM-RS0004: Invalid dates will cause errors in Snowflake.

  • SSC-EWI-PG0005: Date or time format is not supported in Snowflake.

  • SSC-FDM-0032: Parameter is not a literal value, transformation could not be fully applied

TO_CHAR

Description

TO_CHAR converts a timestamp or numeric expression to a character-string data format. (Redshift SQL Language Reference TO_CHAR function)

Warning

This function is partially supported in Snowflake.

For more information about quoted identifiers in functions, click here.

Grammar Syntax

 TO_CHAR(timestamp_expression | numeric_expression , 'format')

Sample Source Patterns

Input Code:

 SELECT TO_CHAR(timestamp '2009-12-31 23:15:59', 'YYYY'),
       TO_CHAR(timestamp '2009-12-31 23:15:59', 'YYY'),
       TO_CHAR(timestamp '2009-12-31 23:15:59', 'TH'),
       "to_char"(timestamp '2009-12-31 23:15:59', 'MON-DY-DD-YYYY HH12:MIPM'),
       TO_CHAR(125.8, '999.99'),
       "to_char"(125.8, '999.99');
| TO_CHAR | TO_CHAR | TO_CHAR | TO_CHAR | TO_CHAR |
|---|---|---|---|---|
| 2009 | 009 | DEC-THU-31-2009 11:15PM | 125.80 | 125.80 |

Output Code:
 SELECT
       TO_CHAR(timestamp '2009-12-31 23:15:59', 'YYYY'),
       PUBLIC.YEAR_PART_UDF(timestamp '2009-12-31 23:15:59', 3),
       TO_CHAR(timestamp '2009-12-31 23:15:59', 'TH') !!!RESOLVE EWI!!! /*** SSC-EWI-PG0005 - TH FORMAT MAY FAIL OR MAY HAVE A DIFFERENT BEHAVIOR IN SNOWFLAKE. ***/!!!,
       PUBLIC.MONTH_SHORT_UDF(timestamp '2009-12-31 23:15:59', 'uppercase') || '-' || PUBLIC.DAYNAME_SHORT_UDF(timestamp '2009-12-31 23:15:59', 'uppercase') || TO_CHAR(timestamp '2009-12-31 23:15:59', '-DD-YYYY HH12:MI') || PUBLIC.MERIDIAN_INDICATORS_UDF(timestamp '2009-12-31 23:15:59', 'uppercase'),
       TO_CHAR(125.8, '999.99'),
       TO_CHAR(125.8, '999.99');
| TO_CHAR | TO_CHAR |
|---|---|
| 2009 | Dec-Thu-31-2009 11:15PM |

Known Issues

No issues were found.

Related EWIs

  • SSC-EWI-PG0005: The current date/numeric format may have a different behavior in Snowflake.

For datetime values

Description

The following format strings apply to functions such as TO_CHAR. These strings can contain datetime separators (such as ‘-’, ‘/’, or ‘:’) and the following “dateparts” and “timeparts”. (Redshift Datetime format strings reference page)

Grammar Syntax

TO_CHAR (timestamp_expression, 'format')


The following table specifies the mapping of each format element to Snowflake:

Redshift

Snowflake

Notes

BC, AD, bc, ad (upper and lowercase era indicators)

PUBLIC.ERA_INDICATORS_UDF

B.C,. A.D., b.c., a.d. (upper and lowercase era indicators with points)

PUBLIC.ERA_INDICATORS_WITH_POINTS_UDF

CC

PUBLIC.CENTURY_UDF

YYYY and YY

-

Directly supported

YYY and Y

PUBLIC.YEAR_PART_UDF

Y,YYY

PUBLIC.YEAR_WITH_COMMA_UDF

IYYY

YEAROFWEEKISO

I, IY, IYY

PUBLIC.ISO_YEAR_PART_UDF

Q

QUARTER

MONTH, Month, month

PUBLIC.FULL_MONTH_NAME_UDF

MON, Mon, mon

PUBLIC.MONTH_SHORT_UDF

RM, rm

PUBLIC.ROMAN_NUMERALS_MONTH_UDF

W

PUBLIC.WEEK_OF_MONTH_UDF

WW

PUBLIC.WEEK_NUMBER_UDF

IW

WEEKISO

DAY, Day, day

PUBLIC.DAYNAME_LONG_UDF

DY, Dy, dy

PUBLIC.DAYNAME_SHORT_UDF

DDD

DAYOFYEAR

IDDD

PUBLIC.DAY_OF_YEAR_ISO_UDF

D

PUBLIC.DAY_OF_WEEK_UDF

For this UDF to work correctly the Snowflake session parameter WEEK_START should have its default value (0)

ID

DAYOFWEEKISO

J

PUBLIC.JULIAN_DAY_UDF

HH24

-

Directly supported

HH

HH12

HH12

-

Directly supported

MI

-

Directly supported

SS

-

Directly supported

MS

FF3

US

FF6

AM, PM, am, pm (upper and lowercase meridian indicators)

PUBLIC.MERIDIAN_INDICATORS_UDF

A.M., P.M., a.m., p.m. (upper and lowercase meridian indicators with points)

PUBLIC.MERIDIAN_INDICATORS_WITH_POINTS_UDF

TZ and tz

UTC and utc

According to the Redshift documentation, all timestamps with time zones are stored in UTC, which causes this format element to return a fixed result

OF

+00

According to the Redshift documentation, all timestamps with time zones are stored in UTC, which causes this format element to return a fixed result

SSSS

PUBLIC.SECONDS_PAST_MIDNIGHT

SP

This is a PostgreSQL template pattern modifier for “spell mode”; however, it does nothing on Redshift, so it is removed from the output.

FX

This is another template pattern modifier, for “fixed format”; however, it has no effect on the TO_CHAR function, so it is removed.

Sample Source Patterns

Direct format elements transformation (no functions/UDFs)

The result is preserved as a single TO_CHAR function

Redshift
 SELECT TO_CHAR('2013-10-03 13:50:15.456871'::TIMESTAMP, 'DD/MM/YY HH:MI:SS.MS') AS col1;
Copy
+----------------------+
|col1                  |
+----------------------+
|03/10/13 01:50:15.456 |
+----------------------+

Copy
Snowflake
 SELECT TO_CHAR('2013-10-03 13:50:15.456871'::TIMESTAMP, 'DD/MM/YY HH12:MI:SS.FF3') AS col1;
Copy
+----------------------+
|col1                  |
+----------------------+
|03/10/13 01:50:15.456 |
+----------------------+

Copy

Format transformation using functions/UDFs

The result is a concatenation of multiple TO_CHAR calls, UDFs, and Snowflake built-in functions that generate the equivalent string representation of the datetime value

Redshift
 SELECT TO_CHAR(DATE '2025-07-05', '"Today is " Month DAY DD, "it belongs to the week " IW') AS result;
Copy
+-------------------------------------------------------------+
|result                                                       |
+-------------------------------------------------------------+
|Today is  July      SATURDAY  05, it belongs to the week  27 |
+-------------------------------------------------------------+

Copy
Snowflake
 SELECT
    'Today is ' ||
    TO_CHAR(DATE '2025-07-05', ' ') ||
    PUBLIC.FULL_MONTH_NAME_UDF(DATE '2025-07-05', 'firstOnly') ||
    ' ' ||
    PUBLIC.DAYNAME_LONG_UDF(DATE '2025-07-05', 'uppercase') ||
    TO_CHAR(DATE '2025-07-05', ' DD, ') ||
    'it belongs to the week ' ||
    TO_CHAR(DATE '2025-07-05', ' ') ||
    WEEKISO(DATE '2025-07-05') AS result;
Copy
+-------------------------------------------------------------+
|result                                                       |
+-------------------------------------------------------------+
|Today is  July      SATURDAY  05, it belongs to the week  27 |
+-------------------------------------------------------------+

Copy

Quoted text

Format elements in double-quoted text are added to the output directly, without interpreting them; escaped double quotes are transformed to their Snowflake escaped equivalent.
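The splitting behavior can be pictured with a small sketch (a hypothetical helper, simplified to ignore escaped quotes):

```python
def split_format(fmt):
    """Split a TO_CHAR format string into ('format', ...) and ('literal', ...)
    segments, where double-quoted text becomes a literal segment emitted
    verbatim. Simplified sketch: escaped quotes (\\") are not handled."""
    segments, buf, in_quote = [], "", False
    for ch in fmt:
        if ch == '"':
            segments.append(("literal" if in_quote else "format", buf))
            buf, in_quote = "", not in_quote
        else:
            buf += ch
    segments.append(("literal" if in_quote else "format", buf))
    # drop empty segments produced by adjacent delimiters
    return [(kind, text) for kind, text in segments if text]

# 'TESTING DD' is emitted verbatim; the surrounding MM and DD stay format elements
print(split_format('MM "TESTING DD" DD'))
# [('format', 'MM '), ('literal', 'TESTING DD'), ('format', ' DD')]
```

Each `format` segment stays inside a TO_CHAR call, while each `literal` segment becomes a plain concatenated string, matching the transformations shown in the examples.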

Redshift
 SELECT
    TO_CHAR(DATE '2025-01-16', 'MM "TESTING DD" DD') AS result1,
    TO_CHAR(DATE '2025-01-16', 'MM TESTING \\"DD\\" DD') AS result2,
    TO_CHAR(DATE '2025-01-16', 'MM "TESTING \\"DD\\"" DD') AS result3;
Copy
+-----------------+-------------------+-------------------+
|result1          |result2            |result3            |
+-----------------+-------------------+-------------------+
|01 TESTING DD 16 |01 TEST5NG "16" 16 |01 TESTING "DD" 16 |
+-----------------+-------------------+-------------------+

Copy
Snowflake
 SELECT
    TO_CHAR(DATE '2025-01-16', 'MM ') || 'TESTING DD' || TO_CHAR(DATE '2025-01-16', ' DD') AS result1,
    TO_CHAR(DATE '2025-01-16', 'MM TEST') || PUBLIC.ISO_YEAR_PART_UDF(DATE '2025-01-16', 1) || TO_CHAR(DATE '2025-01-16', 'NG ""DD"" DD') AS result2,
    TO_CHAR(DATE '2025-01-16', 'MM ') || 'TESTING "DD"' || TO_CHAR(DATE '2025-01-16', ' DD') AS result3;
Copy
+-----------------+-------------------+-------------------+
|result1          |result2            |result3            |
+-----------------+-------------------+-------------------+
|01 TESTING DD 16 |01 TEST5NG "16" 16 |01 TESTING "DD" 16 |
+-----------------+-------------------+-------------------+

Copy

Known Issues

Template pattern modifiers not supported

The following format template modifiers are not supported:

  • FM (fill mode)

  • TH and th (uppercase and lowercase ordinal number suffix)

  • TM (translation mode)

Including them in a format will generate SSC-EWI-PG0005.

Input code:

 SELECT TO_CHAR(CURRENT_DATE, 'FMMonth'),
TO_CHAR(CURRENT_DATE, 'DDTH'),
TO_CHAR(CURRENT_DATE, 'DDth'),
TO_CHAR(CURRENT_DATE, 'TMMonth');
Copy

Output code:

 SELECT
TO_CHAR(CURRENT_DATE(), 'FM') || PUBLIC.FULL_MONTH_NAME_UDF(CURRENT_DATE(), 'firstOnly') !!!RESOLVE EWI!!! /*** SSC-EWI-PG0005 - FMMonth FORMAT MAY FAIL OR MAY HAVE A DIFFERENT BEHAVIOR IN SNOWFLAKE. ***/!!!,
TO_CHAR(CURRENT_DATE(), 'DDTH') !!!RESOLVE EWI!!! /*** SSC-EWI-PG0005 - DDTH FORMAT MAY FAIL OR MAY HAVE A DIFFERENT BEHAVIOR IN SNOWFLAKE. ***/!!!,
TO_CHAR(CURRENT_DATE(), 'DDth') !!!RESOLVE EWI!!! /*** SSC-EWI-PG0005 - DDth FORMAT MAY FAIL OR MAY HAVE A DIFFERENT BEHAVIOR IN SNOWFLAKE. ***/!!!,
TO_CHAR(CURRENT_DATE(), 'TM') || PUBLIC.FULL_MONTH_NAME_UDF(CURRENT_DATE(), 'firstOnly') !!!RESOLVE EWI!!! /*** SSC-EWI-PG0005 - TMMonth FORMAT MAY FAIL OR MAY HAVE A DIFFERENT BEHAVIOR IN SNOWFLAKE. ***/!!!;
Copy

Format parameter passed through variable

When the format parameter is passed as a variable instead of a string literal, the transformation of format elements cannot be applied; an FDM will be added to the uses of the function to warn about it.

Input code:

 SELECT TO_CHAR(d, 'YYYY/MM/DD'),
TO_CHAR(d, f)
FROM (SELECT TO_DATE('2001-01-01','YYYY-MM-DD') as d, 'DD/MM/YYYY' as f);
Copy

Output code:

 SELECT TO_CHAR(d, 'YYYY/MM/DD'),
--** SSC-FDM-0032 - PARAMETER 'format_string' IS NOT A LITERAL VALUE, TRANSFORMATION COULD NOT BE FULLY APPLIED **
TO_CHAR(d, f)
FROM (SELECT TO_DATE('2001-01-01','YYYY-MM-DD') as d, 'DD/MM/YYYY' as f);
Copy

Related EWIs

  1. SSC-EWI-PG0005: The current date/numeric format may have a different behavior in Snowflake.

  2. SSC-FDM-0032: Parameter is not a literal value, transformation could not be fully applied

JSON Functions

JSON_EXTRACT_PATH_TEXT

Description

The JSON_EXTRACT_PATH_TEXT function returns the value for the key-value pair referenced by a series of path elements in a JSON string. (RedShift SQL Language Reference JSON_EXTRACT_PATH_TEXT Function)

Grammar Syntax

 JSON_EXTRACT_PATH_TEXT('json_string', 'path_elem' [, 'path_elem' [, ...] ] [, null_if_invalid ] )
Copy

This function is fully supported by Snowflake.

Sample Source Patterns

Input Code:

 SELECT
'{
  "house": {
    "address": {
      "street": "123 Any St.",
      "city": "Any Town",
      "state": "FL",
      "zip": "32830"
    },
    "bathroom": {
      "color": "green",
      "shower": true
    },
    "appliances": {
      "washing machine": {
        "brand": "Any Brand",
        "color": "beige"
      },
      "dryer": {
        "brand": "Any Brand",
        "color": "white"
      }
    }
  }
}' as VALID_JSON,
'notvalidjson' as INVALID_JSON,
JSON_EXTRACT_PATH_TEXT(VALID_JSON, 'house', 'appliances', 'washing machine', 'brand') AS VALID_JSON_FLAG_DEFAULT_OFF,
JSON_EXTRACT_PATH_TEXT(VALID_JSON, 'house', 'appliances', 'washing machine', 'brand', false) AS VALID_JSON_FLAG_OFF,
JSON_EXTRACT_PATH_TEXT(VALID_JSON, 'house', 'appliances', 'washing machine', 'brand', true) AS VALID_JSON_FLAG_TRUE,
JSON_EXTRACT_PATH_TEXT(INVALID_JSON, 'house', 'appliances', 'washing machine', 'brand', true) AS INVALID_JSON_FLAG_TRUE,
JSON_EXTRACT_PATH_TEXT(INVALID_JSON, 'house', 'appliances', 'washing machine', 'brand', false) AS INVALID_JSON_FLAG_FALSE
;
Copy

VALID_JSON

INVALID_JSON

VALID_JSON_FLAG_DEFAULT_OFF

VALID_JSON_FLAG_OFF

VALID_JSON_FLAG_TRUE

INVALID_JSON_FLAG_TRUE

-

notvalidjson

Any Brand

Any Brand

Any Brand

NULL

Output Code:

 SELECT
'{
  "house": {
    "address": {
      "street": "123 Any St.",
      "city": "Any Town",
      "state": "FL",
      "zip": "32830"
    },
    "bathroom": {
      "color": "green",
      "shower": true
    },
    "appliances": {
      "washing machine": {
        "brand": "Any Brand",
        "color": "beige"
      },
      "dryer": {
        "brand": "Any Brand",
        "color": "white"
      }
    }
  }
}' as VALID_JSON,
'notvalidjson' as INVALID_JSON,
JSON_EXTRACT_PATH_TEXT(VALID_JSON, ARRAY_TO_STRING(['house', 'appliances', '"washing machine"', 'brand'], '.')) AS VALID_JSON_FLAG_DEFAULT_OFF,
JSON_EXTRACT_PATH_TEXT(VALID_JSON, ARRAY_TO_STRING(['house', 'appliances', '"washing machine"', 'brand'], '.')) AS VALID_JSON_FLAG_OFF,
JSON_EXTRACT_PATH_TEXT(TRY_PARSE_JSON(VALID_JSON), ARRAY_TO_STRING(['house', 'appliances', '"washing machine"', 'brand'], '.')) AS VALID_JSON_FLAG_TRUE,
JSON_EXTRACT_PATH_TEXT(TRY_PARSE_JSON(INVALID_JSON), ARRAY_TO_STRING(['house', 'appliances', '"washing machine"', 'brand'], '.')) AS INVALID_JSON_FLAG_TRUE,
JSON_EXTRACT_PATH_TEXT(INVALID_JSON, ARRAY_TO_STRING(['house', 'appliances', '"washing machine"', 'brand'], '.')) AS INVALID_JSON_FLAG_FALSE
;
Copy

VALID_JSON

INVALID_JSON

VALID_JSON_FLAG_DEFAULT_OFF

VALID_JSON_FLAG_OFF

VALID_JSON_FLAG_TRUE

INVALID_JSON_FLAG_TRUE

-

notvalidjson

Any Brand

Any Brand

Any Brand

NULL

Input Code using Variables as paths:
 SELECT
  'appliances' level_2,
  'brand' level_4,
  JSON_EXTRACT_PATH_TEXT(
    INFO.VALID_JSON,
    'house',
    level_2,
    'washing machine',
    level_4
  ) result
FROM
  (
    SELECT
      '{
  "house": {
    "address": {
      "street": "123 Any St.",
      "city": "Any Town",
      "state": "FL",
      "zip": "32830"
    },
    "bathroom": {
      "color": "green",
      "shower": true
    },
    "appliances": {
      "washing machine": {
        "brand": "Any Brand",
        "color": "beige"
      },
      "dryer": {
        "brand": "Any Brand",
        "color": "white"
      }
    }
  }
}' AS VALID_JSON
  ) INFO;
Copy

level_2

level_4

result

appliances

brand

Any Brand

Output Code:

 SELECT
  'appliances' level_2,
  'brand' level_4,
  JSON_EXTRACT_PATH_TEXT(
  INFO.VALID_JSON, ARRAY_TO_STRING(['house',
  level_2, '"washing machine"',
  level_4], '.')) result
FROM
  (
    SELECT
      '{
  "house": {
    "address": {
      "street": "123 Any St.",
      "city": "Any Town",
      "state": "FL",
      "zip": "32830"
    },
    "bathroom": {
      "color": "green",
      "shower": true
    },
    "appliances": {
      "washing machine": {
        "brand": "Any Brand",
        "color": "beige"
      },
      "dryer": {
        "brand": "Any Brand",
        "color": "white"
      }
    }
  }
}' AS VALID_JSON
  ) INFO;
Copy

level_2

level_4

result

appliances

brand

Any Brand

Known Issues

  • Redshift and Snowflake treat escape characters such as newline, tab, and carriage return differently: Redshift returns them as the literal characters themselves, while Snowflake interprets and applies them.

  • The Snowflake function receives two parameters: the first one is the JSON literal, and the second one is the dot-separated path used to access the inner objects. The transformation replaces the list of path arguments with a function call that joins the paths with dots, even when a path is a column reference.

  • When the path is sent through a variable and the variable contains spaces, it should be quoted.
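The path-joining behavior described above can be sketched as follows (`to_snowflake_path` is a hypothetical helper name; in the generated SQL the joining is done with ARRAY_TO_STRING):

```python
def to_snowflake_path(path_elems):
    """Join JSON path elements with dots, double-quoting any element that
    contains spaces, mirroring the ARRAY_TO_STRING-based translation."""
    return ".".join(f'"{e}"' if " " in e else e for e in path_elems)

print(to_snowflake_path(["house", "appliances", "washing machine", "brand"]))
# house.appliances."washing machine".brand
```

In the real conversion, only literal path elements can be pre-quoted; elements passed as column references are joined as-is, which is why variables containing spaces must already be quoted.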

Related EWIs

No issues were found.

Hash Functions

FNV_HASH

Description

FNV_HASH computes the 64-bit FNV-1a non-cryptographic hash function for all basic data types.
(Redshift SQL Language Reference FNV_HASH function).

Note

In Snowflake there is no equivalent function that provides the FNV algorithm, but the HASH function offers comparable non-cryptographic functionality.

Grammar Syntax

 FNV_HASH(value [, seed])
Copy

Sample Source Patterns

Input Code:

 SELECT FNV_HASH('John Doe') as FNV_HASH,
       FNV_HASH('John Doe', 3) as FNV_HASH_SEED;
Copy

FNV_HASH

FNV_HASH_SEED

-1568545727084176168

-5484851035903916490

Output Code:
 SELECT
       HASH('John Doe') as FNV_HASH,
       HASH('John Doe') as FNV_HASH_SEED;
Copy

FNV_HASH

FNV_HASH_SEED

3199932455444588441

3199932455444588441

Note

In Redshift’s FNV_HASH function, the seed parameter is optional, but it is dropped in the translation because Snowflake’s HASH function does not accept a seed; the seed is specific to the FNV algorithm.
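For reference, the 64-bit FNV-1a algorithm that Redshift implements can be sketched in Python. The seeded variant below assumes the seed replaces the standard offset basis, which may not match Redshift’s exact seeding:

```python
FNV64_PRIME = 0x100000001B3
FNV64_OFFSET_BASIS = 0xCBF29CE484222325

def fnv1a_64(data: bytes, seed: int = FNV64_OFFSET_BASIS) -> int:
    """64-bit FNV-1a: XOR each byte into the hash, then multiply by the prime,
    keeping only the low 64 bits."""
    h = seed
    for b in data:
        h ^= b
        h = (h * FNV64_PRIME) & 0xFFFFFFFFFFFFFFFF
    return h

print(hex(fnv1a_64(b"a")))  # 0xaf63dc4c8601ec8c (standard FNV-1a test vector)
```

Snowflake’s HASH uses a different algorithm, so its values will never match Redshift’s FNV_HASH output; only the non-cryptographic use cases (bucketing, deduplication within one platform) carry over.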

Known Issues

There are no known issues.

Related EWIs

There are no known issues.

Conditional Functions

COALESCE

Description

Returns the value of the first expression that isn’t null in a series of expressions. When a non-null value is found, the remaining expressions in the list aren’t evaluated.

For more information, please refer to COALESCE function.
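The short-circuit behavior (remaining expressions are not evaluated once a non-null value is found) can be illustrated with lazily evaluated arguments; `coalesce_lazy` and `probe` are hypothetical helpers for the sketch:

```python
def coalesce_lazy(*thunks):
    """Return the first non-None result; stop evaluating once one is found."""
    for thunk in thunks:
        value = thunk()
        if value is not None:
            return value
    return None

evaluated = []

def probe(name, value):
    # wrap a value so we can record when it is actually evaluated
    def thunk():
        evaluated.append(name)
        return value
    return thunk

result = coalesce_lazy(
    probe("phone_region_1", None),
    probe("phone_region_2", "555-01111"),
    probe("phone_region_3", "555-01444"),
)
print(result)      # 555-01111
print(evaluated)   # ['phone_region_1', 'phone_region_2'] -- the third was skipped
```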

Grammar Syntax

 COALESCE( expression, expression, ... )
Copy

This function is fully supported by Snowflake.

Sample Source Patterns

Input Code:

 CREATE TABLE suppliers (
  supplier_id INT PRIMARY KEY,
  supplier_name VARCHAR(30),
  phone_region_1 VARCHAR(15),
  phone_region_2 VARCHAR(15));

INSERT INTO suppliers(supplier_id, supplier_name, phone_region_1, phone_region_2)
  VALUES(1, 'Company_ABC', NULL, '555-01111'),
        (2, 'Company_DEF', '555-01222', NULL),
        (3, 'Company_HIJ', '555-01333', '555-01444'),
        (4, 'Company_KLM', NULL, NULL); 

SELECT COALESCE(phone_region_1, phone_region_2) IF_REGION_1_NULL,
       COALESCE(phone_region_2, phone_region_1) IF_REGION_2_NULL
  FROM suppliers
  ORDER BY supplier_id;
Copy

IF_REGION_1_NULL

IF_REGION_2_NULL

555-01111

555-01111

555-01222

555-01222

555-01333

555-01444

null

null

Output Code:

 CREATE TABLE suppliers (
  supplier_id INT PRIMARY KEY,
  supplier_name VARCHAR(30),
  phone_region_1 VARCHAR(15),
  phone_region_2 VARCHAR(15))
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": {  "major": 0,  "minor": 0,  "patch": "0" }, "attributes": {  "component": "redshift",  "convertedOn": "09/27/2024" }}';

INSERT INTO suppliers (supplier_id, supplier_name, phone_region_1, phone_region_2)
  VALUES(1, 'Company_ABC', NULL, '555-01111'),
        (2, 'Company_DEF', '555-01222', NULL),
        (3, 'Company_HIJ', '555-01333', '555-01444'),
        (4, 'Company_KLM', NULL, NULL);

SELECT COALESCE(phone_region_1, phone_region_2) IF_REGION_1_NULL,
       COALESCE(phone_region_2, phone_region_1) IF_REGION_2_NULL
  FROM
       suppliers
  ORDER BY supplier_id;
Copy

IF_REGION_1_NULL

IF_REGION_2_NULL

555-01111

555-01111

555-01222

555-01222

555-01333

555-01444

null

null

Related EWIs

There are no known issues.

GREATEST and LEAST

Description

Returns the largest or smallest value from a list of any number of expressions. (Redshift SQL Language Reference GREATEST and LEAST function)

Grammar Syntax

 GREATEST (value [, ...])
LEAST (value [, ...])
Copy

This function is fully supported by Snowflake.

Sample Source Patterns

Input Code:

 SELECT GREATEST(10, 20, NULL, 40, 30) AS greatest, LEAST(10, 20, NULL, 40, 30) AS least, GREATEST(NULL, NULL, NULL) AS nulls;

SELECT LEAST('GERMANY', 'USA') AS LEAST, GREATEST('GERMANY', 'USA') AS GREATEST;
Copy

greatest

least

nulls

40

10

NULL

least

greatest

GERMANY

USA

Output Code:

 SELECT
GREATEST_IGNORE_NULLS(10, 20, NULL, 40, 30) AS greatest,
LEAST_IGNORE_NULLS(10, 20, NULL, 40, 30) AS least,
GREATEST_IGNORE_NULLS(NULL, NULL, NULL) AS nulls;

SELECT
LEAST_IGNORE_NULLS(RTRIM('GERMANY'), RTRIM('USA')) AS LEAST,
GREATEST_IGNORE_NULLS(RTRIM('GERMANY'), RTRIM('USA')) AS GREATEST;
Copy

greatest

least

nulls

40

10

NULL

least

greatest

GERMANY

USA

Known Issues

There are no known issues.

Related EWIs

There are no known EWIs.

NULLIF

Description

The NULLIF expression compares two arguments and returns null if the arguments are equal. If they are not equal, the first argument is returned. In Redshift, NULLIF ignores trailing blanks when comparing string values in certain scenarios, so its behavior may differ from Snowflake’s. For this reason, the transformation adds RTRIM to both arguments to keep the results equivalent between Redshift and Snowflake. (Redshift SQL Language Reference NULLIF function)
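The trailing-blank semantics that the RTRIM wrapper restores can be sketched as follows (`redshift_nullif` is a hypothetical helper name):

```python
def redshift_nullif(a, b):
    """Compare two values ignoring trailing blanks on strings (as Redshift
    does), returning None when equal and the first argument otherwise."""
    ka = a.rstrip(" ") if isinstance(a, str) else a
    kb = b.rstrip(" ") if isinstance(b, str) else b
    return None if ka == kb else a

print(redshift_nullif("first", "second"))   # first
print(redshift_nullif("first   ", "first"))  # None -- equal after trimming
```

Note that the wrapped Snowflake NULLIF(RTRIM(a), RTRIM(b)) returns the trimmed first argument, which is the intended equivalence for the comparison result.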

Grammar Syntax

 NULLIF ( expression1, expression2 )
Copy

This function is fully supported by Snowflake.

Sample Source Patterns

Input Code:

 SELECT NULLIF('first', 'second') AS different, NULLIF('first', 'first') AS same;
Copy

different

same

first

NULL

Output Code:

 SELECT NULLIF(RTRIM('first'), RTRIM('second')) AS different, NULLIF(RTRIM('first'), RTRIM('first')) AS same;
Copy

different

same

first

NULL

Known Issues

There are no known issues.

Related EWIs

There are no known EWIs.

NVL

Description

Returns the value of the first expression that isn’t null in a series of expressions. When a non-null value is found, the remaining expressions in the list aren’t evaluated. (Redshift SQL Language Reference NVL function)

In Redshift, NVL can take two or more arguments, while in Snowflake the NVL function accepts exactly two. To emulate the same behavior, NVL with three or more arguments is transformed to the COALESCE function.

This function is fully supported by Snowflake.

Grammar Syntax

 NVL( expression, expression, ... )
Copy

Sample Source Patterns

Input Code:

 CREATE TABLE suppliers (
  supplier_id INT PRIMARY KEY,
  supplier_name VARCHAR(30),
  phone_region_1 VARCHAR(15),
  phone_region_2 VARCHAR(15),
  phone_region_3 VARCHAR(15));

INSERT INTO suppliers(supplier_id, supplier_name, phone_region_1, phone_region_2, phone_region_3)
  VALUES(1, 'Company_ABC', NULL, '555-01111', NULL),
        (2, 'Company_DEF', '555-01222', NULL, NULL),
        (3, 'Company_HIJ', '555-01333', '555-01444', NULL),
        (4, 'Company_KLM', NULL, NULL, NULL); 

SELECT NVL(phone_region_1, phone_region_2) IF_REGION_1_NULL,
       NVL(phone_region_2, phone_region_1) IF_REGION_2_NULL,
       NVL(phone_region_2, phone_region_1, phone_region_3) THREE_ARG_NVL
  FROM suppliers
  ORDER BY supplier_id;
Copy

IF_REGION_1_NULL

IF_REGION_2_NULL

THREE_ARG_NVL

555-01111

555-01111

555-01111

555-01222

555-01222

555-01222

555-01333

555-01444

555-01444

null

null

null

Output Code:

 CREATE TABLE suppliers (
  supplier_id INT PRIMARY KEY,
  supplier_name VARCHAR(30),
  phone_region_1 VARCHAR(15),
  phone_region_2 VARCHAR(15),
  phone_region_3 VARCHAR(15))
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": {  "major": 0,  "minor": 0,  "patch": "0" }, "attributes": {  "component": "redshift",  "convertedOn": "11/26/2024",  "domain": "test" }}';

INSERT INTO suppliers (supplier_id, supplier_name, phone_region_1, phone_region_2, phone_region_3)
  VALUES(1, 'Company_ABC', NULL, '555-01111', NULL),
        (2, 'Company_DEF', '555-01222', NULL, NULL),
        (3, 'Company_HIJ', '555-01333', '555-01444', NULL),
        (4, 'Company_KLM', NULL, NULL, NULL);

SELECT NVL(phone_region_1, phone_region_2) IF_REGION_1_NULL,
       NVL(phone_region_2, phone_region_1) IF_REGION_2_NULL,
       COALESCE(phone_region_2, phone_region_1, phone_region_3) THREE_ARG_NVL
  FROM
       suppliers
  ORDER BY supplier_id;
Copy

IF_REGION_1_NULL

IF_REGION_2_NULL

555-01111

555-01111

555-01222

555-01222

555-01333

555-01444

null

null

Known Issues

There are no known issues.

Related EWIs

There are no known issues.

Date and Time Functions

CONVERT_TIMEZONE

Description

The CONVERT_TIMEZONE function in Amazon Redshift converts a timestamp from one time zone to another: you specify the source time zone, the target time zone, and the timestamp to convert. This makes it useful for managing times accurately across different regions.
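The conversion semantics can be sketched with Python’s zoneinfo module (a sketch of the function’s behavior, not of the converter’s implementation):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def convert_timezone(source_tz: str, target_tz: str, ts: datetime) -> datetime:
    """Interpret a naive timestamp in source_tz, convert it to target_tz,
    and return it as a naive timestamp again."""
    localized = ts.replace(tzinfo=ZoneInfo(source_tz))
    return localized.astimezone(ZoneInfo(target_tz)).replace(tzinfo=None)

# UTC 16:14:55 becomes 01:14:55 the next day in Tokyo (UTC+9)
print(convert_timezone("UTC", "Asia/Tokyo", datetime(2024, 10, 2, 16, 14, 55)))
```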

Grammar Syntax

 CONVERT_TIMEZONE(source_timezone, target_timezone, timestamp)
Copy

Sample Source Patterns

Input Code:

 SELECT
    GETDATE(),
    CONVERT_TIMEZONE('UTC', 'Europe/London', GETDATE()) AS london_time,
    CONVERT_TIMEZONE('UTC', 'Asia/Tokyo', GETDATE()) AS tokyo_time;
Copy

getdate

london_time

tokyo_time

2024-10-02 16:14:55.000000

2024-10-02 17:14:55.000000

2024-10-03 01:14:55.000000

Output Code:
 SELECT
    GETDATE(),
    CONVERT_TIMEZONE('UTC', 'Europe/London', GETDATE()) AS london_time,
    CONVERT_TIMEZONE('UTC', 'Asia/Tokyo', GETDATE()) AS tokyo_time;
Copy

getdate

london_time

tokyo_time

2024-10-02 09:18:43.848

2024-10-02 10:17:35.351

2024-10-02 18:17:35.351

Related EWIs

There are no known issues.

CURRENT_DATE

Description

CURRENT_DATE returns a date in the current session time zone (UTC by default) in the default format: YYYY-MM-DD. (RedShift SQL Language Reference CURRENT_DATE)

Grammar Syntax

 CURRENT_DATE
Copy

Sample Source Patterns

Input Code:

 SELECT CURRENT_DATE;
Copy

DATE

2024-09-22

Output Code:
 SELECT
    CURRENT_DATE();
Copy

DATE

2024-09-22

Recommendations

There are no recommendations.

Related EWIs

There are no known issues.

DATE

Description

This function converts an input expression to a date.

This function is fully supported in Snowflake.

For more information about quoted identifiers in functions, click here.

Grammar Syntax

 DATE(<expr>)
Copy

Sample Source Patterns

Input Code:

 SELECT DATE('2024-02-02 04:05:06.789');

SELECT "DATE"("GETDATE"());

SELECT "date"('2024-02-02 04:05:06.789');
Copy

DATE(‘2024-02-02 04:05:06.789’)

2024-02-02

“DATE”(“GETDATE”())

2024-11-20

“date”(‘2024-02-02 04:05:06.789’)

2024-02-02

Output Code:
 SELECT DATE('2024-02-02 04:05:06.789');

SELECT DATE(GETDATE());

SELECT DATE('2024-02-02 04:05:06.789');
Copy

DATE(‘2024-02-02 04:05:06.789’)

2024-02-02

DATE(GETDATE())

2024-11-20

DATE(‘2024-02-02 04:05:06.789’)

2024-02-02

Known Issues

No issues were found.

Related EWIs

There are no known issues.

DATE_ADD

Description

The DATE_ADD function in Amazon Redshift adds a specified time interval to a date or timestamp, where datetimepart is the type of interval (such as 'day' or 'month'), interval is the number of units to add (positive or negative), and timestamp is the original date.

DATE_ADD is converted to DATEADD, for more information about this conversion click here.

Valid datepart in Snowflake for DATE_ADD:

  • microsecond, microseconds

  • millisecond, milliseconds

  • second, seconds

  • minute, minutes

  • hour, hours

  • day, days

  • week

  • month, months

  • quarter, quarters

  • year, years

Unsupported formats in Snowflake for DATE_ADD:

  • weeks

  • decade, decades

  • century, centuries

  • millennium, millennia

Grammar Syntax

 DATE_ADD( datetimepart, interval, timestamp )
Copy

Sample Source Patterns

Input Code:

 SELECT DATE_ADD('day', 10, '2024-11-01'), DATE_ADD('month', 3, '2024-11-01');

SELECT "DATE_ADD"('DAYS',18,'2008-02-28');
Copy

date_add

date_add

2024-11-11 00:00:00.000000

2025-02-01 00:00:00.000000

“date_add”

2008-03-17 00:00:00.000000

Output Code:
 SELECT
DATEADD('day', 10, '2024-11-01'),
DATEADD('month', 3, '2024-11-01');

SELECT
DATEADD('DAYS',18,'2008-02-28');
Copy

DATEADD

DATEADD

2024-11-11 00:00:00.000

2025-02-01 00:00:00.000

DATEADD

2008-03-17 00:00:00.000

Known Issues

No issues were found.

Related EWIs

There are no known issues.

DATE_DIFF

Description

DATEDIFF returns the difference between the date parts of two date or time expressions.

(RedShift SQL Language Reference DATEDIFF function)

Valid datepart in Snowflake for DATE_DIFF:

  • microsecond, microseconds

  • millisecond, milliseconds

  • second, seconds

  • minute, minutes

  • hour, hours

  • day, days

  • week

  • month, months

  • quarter, quarters

  • year, years

Unsupported formats in Snowflake for DATEDIFF:

  • decade, decades

  • century, centuries

  • millennium, millennia
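SnowConvert emulates the century and decade parts by dividing a year-level difference, as the output code in this section shows. In Python terms (hypothetical helper names, assuming non-negative differences):

```python
from datetime import date

def datediff_year(start: date, end: date) -> int:
    """DATEDIFF(year, ...) counts year-boundary crossings, not elapsed years."""
    return end.year - start.year

def datediff_century(start: date, end: date) -> int:
    # emulates DATEDIFF(YEAR, start, end) / 100 in the translated SQL
    return datediff_year(start, end) // 100

def datediff_decade(start: date, end: date) -> int:
    # emulates DATEDIFF(YEAR, start, end) / 10 in the translated SQL
    return datediff_year(start, end) // 10

print(datediff_century(date(1009, 1, 1), date(2009, 12, 31)))  # 10
print(datediff_decade(date(1009, 1, 1), date(2009, 12, 31)))   # 100
```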

Grammar Syntax

 DATE_DIFF( datepart, {date|time|timetz|timestamp}, {date|time|timetz|timestamp} )
Copy

This function is fully supported in Snowflake.

Sample Source Patterns

Input Code:

 SELECT DATE_DIFF(year,'2009-01-01','2019-12-31') as year,
    DATE_DIFF(month,'2009-01-01','2019-12-31') as month,
    DATE_DIFF(day,'2009-01-01','2019-12-31') as day,
    date_diff('year'::text, '2009-01-01 00:00:00'::timestamp without time zone, '2019-12-31 00:00:00'::timestamp without time zone) AS "year2";
    

SELECT DATE_DIFF(week,'2009-01-01','2019-12-31') as week,
    DATE_DIFF(century,'1009-01-01','2009-12-31') as century,
    DATE_DIFF(decade,'1009-01-01','2009-12-31') as decade;
Copy

year

month

day

year2

10

131

4016

10

week

century

decade

574

10

100

Output Code:
 SELECT
    DATEDIFF(year, '2009-01-01', '2019-12-31') as year,
    DATEDIFF(month, '2009-01-01', '2019-12-31') as month,
    DATEDIFF(day, '2009-01-01', '2019-12-31') as day,
    DATEDIFF(year, '2009-01-01 00:00:00':: TIMESTAMP_NTZ, '2019-12-31 00:00:00':: TIMESTAMP_NTZ) AS year2;


SELECT
    DATEDIFF(week,'2009-01-01','2019-12-31') as week,
    DATEDIFF(YEAR, '1009-01-01', '2009-12-31') / 100 as century,
    DATEDIFF(YEAR, '1009-01-01', '2009-12-31') / 10 as decade;
Copy

year

month

day

year2

10

131

4016

10

week

century

decade

574

10

100

Known Issues

No issues were found.

Related EWIs

There are no known issues.

DATE_PART

Description

DATE_PART extracts date part values from an expression. DATE_PART is a synonym of the PGDATE_PART function.

(RedShift SQL Language Reference DATE_PART function)

Valid datepart in Snowflake for DATE_PART:

  • second, seconds

  • minute, minutes

  • hour, hours

  • day, days

  • week

  • dayofweek

  • dayofyear

  • month, months

  • quarter, quarters

  • year, years

  • epoch

Invalid formats in Snowflake for DATE_PART:

  • microsecond, microseconds

  • millisecond, milliseconds

  • weeks

  • decade, decades

  • century, centuries

  • millennium, millennia

Grammar Syntax

{PGDATE_PART | DATE_PART}(datepart, {date|timestamp})
Copy

Warning

This function is partially supported by Snowflake.

Sample Source Patterns

Input Code:

 SELECT DATE_PART(minute, timestamp '2023-01-04 04:05:06.789') AS dateMinute,
    PGDATE_PART(dayofweek, timestamp '2022-05-02 04:05:06.789') AS dateDayofweek,
    "DATE_PART"('month', date '2022-05-02') AS dateMonth,
    pgdate_part('weeks'::text, '2023-02-28'::date::timestamp without time zone) AS wks;
    
SELECT DATE_PART(weeks, date '2023-02-28') AS wks,
    DATE_PART(decade, date '2023-02-28') AS dec,
    PGDATE_PART(century, date '2023-02-28') AS cen;
Copy

dateMinute

dateDayofweek

dateMonth

wks

5

1

5

9

Output Code:
 SELECT
    DATE_PART(minute, timestamp '2023-01-04 04:05:06.789') AS dateMinute,
    DATE_PART(dayofweek, timestamp '2022-05-02 04:05:06.789') AS dateDayofweek,
    DATE_PART('month', date '2022-05-02') AS dateMonth,
    DATE_PART(week, '2023-02-28'::date:: TIMESTAMP_NTZ) AS wks;

SELECT
    DATE_PART(week, date '2023-02-28') AS wks,
    !!!RESOLVE EWI!!! /*** SSC-EWI-PG0005 - decade FORMAT MAY FAIL OR MAY HAVE A DIFFERENT BEHAVIOR IN SNOWFLAKE. ***/!!!
    DATE_PART(decade, date '2023-02-28') AS dec,
    !!!RESOLVE EWI!!! /*** SSC-EWI-PG0005 - century FORMAT MAY FAIL OR MAY HAVE A DIFFERENT BEHAVIOR IN SNOWFLAKE. ***/!!!
    DATE_PART(century, date '2023-02-28') AS cen;
Copy

dateMinute

dateDayofweek

dateMonth

wks

5

1

5

9

Known Issues

No issues were found.

Related EWIs

  • SSC-EWI-PG0005: The current date/numeric format may have a different behavior in Snowflake.

DATE_TRUNC

Description

The DATE_TRUNC function truncates a timestamp expression or literal based on the date part that you specify, such as hour, day, or month.

(Redshift SQL Language Reference DATE_TRUNC function).

In Snowflake this function truncates a DATE, TIME, or TIMESTAMP value to the specified precision.

Valid <datepart> in Snowflake:

  • microsecond, microseconds

  • millisecond, milliseconds

  • second, seconds

  • minute, minutes

  • hour, hours

  • day, days

  • week

  • month, months

  • quarter, quarters

  • year, years

Invalid formats in Snowflake:

  • Weeks

  • decade, decades

  • century, centuries

  • millennium, millennia

This function is fully supported in Snowflake.

For more information about quoted identifiers in functions, click here.

Grammar Syntax

 DATE_TRUNC('datepart', timestamp)
Copy

Sample Source Patterns

Supported date parts

Input Code:
 SELECT
    DATE_TRUNC('second', TIMESTAMP '2024-02-02 04:05:06.789') AS sec,
    DATE_TRUNC('hours', TIMESTAMP '2024-02-02 04:05:06.789') AS hrs,
    DATE_TRUNC('week', TIMESTAMP '2024-02-02 04:05:06.789') AS wk, 
    "DATE_TRUNC"('month', TIMESTAMP '2024-02-02 04:05:06.789') AS mth,
    "date_trunc"('quarters', TIMESTAMP '2024-02-02 04:05:06.789') AS qtr,
    date_trunc('second'::text, '2024-02-02 04:05:06.789'::timestamp without time zone) AS sec2;
Copy
SEC

HRS

WK

MTH

QTR

SEC2

2024-02-02 04:05:06.000000

2024-02-02 04:00:00.000000

2024-01-29 00:00:00.000000

2024-02-01 00:00:00.000000

2024-01-01 00:00:00.000000

2024-02-02 04:05:06.000000
Output Code:
 SELECT
    DATE_TRUNC('second', TIMESTAMP '2024-02-02 04:05:06.789') AS sec,
    DATE_TRUNC('hours', TIMESTAMP '2024-02-02 04:05:06.789') AS hrs,
    DATE_TRUNC('week', TIMESTAMP '2024-02-02 04:05:06.789') AS wk,
    DATE_TRUNC('month', TIMESTAMP '2024-02-02 04:05:06.789') AS mth,
    DATE_TRUNC('quarters', TIMESTAMP '2024-02-02 04:05:06.789') AS qtr,
    date_trunc('second','2024-02-02 04:05:06.789':: TIMESTAMP_NTZ) AS sec2;
Copy
SEC

HRS

WK

MTH

QTR

SEC2

2024-02-02 04:05:06.000

2024-02-02 04:00:00.000

2024-01-29 00:00:00.000

2024-02-01 00:00:00.000

2024-01-01 00:00:00.000

2024-02-02 04:05:06.000000

Invalid date parts

This transformation is performed in order to emulate Redshift behavior for the following date parts:

  • decade, decades

  • century, centuries

  • millennium, millennia
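The year arithmetic behind this emulation can be sketched as follows (hypothetical helper names; the translated SQL does the same computation with DATEADD, EXTRACT, and DATE_TRUNC):

```python
from datetime import datetime

def trunc_decade(ts: datetime) -> datetime:
    # decades start at years ending in 0 (e.g. 1990)
    return datetime(ts.year - ts.year % 10, 1, 1)

def trunc_century(ts: datetime) -> datetime:
    # centuries start at years ending in 01 (e.g. 1901)
    return datetime(ts.year - (ts.year - 1) % 100, 1, 1)

def trunc_millennium(ts: datetime) -> datetime:
    # millennia start at years ending in 001 (e.g. 1001)
    return datetime(ts.year - (ts.year - 1) % 1000, 1, 1)

print(trunc_century(datetime(1990, 2, 2, 4, 5, 6)))  # 1901-01-01 00:00:00
```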

Input Code:
 SELECT
    DATE_TRUNC('weeks', TIMESTAMP '1990-02-02 04:05:06.789') AS wks,
    DATE_TRUNC('decade', TIMESTAMP '1990-02-02 04:05:06.789') AS dec,    
    DATE_TRUNC('century', TIMESTAMP '1990-02-02 04:05:06.789') AS c,        
    DATE_TRUNC('millennium', TIMESTAMP '1990-02-02 04:05:06.789') AS m;
Copy

WKS

DEC

C

M

1990-01-29 00:00:00.000000

1990-01-01 00:00:00.000000

1901-01-01 00:00:00.000000

1001-01-01 00:00:00.000000

Output Code:
 SELECT
    DATE_TRUNC(week, TIMESTAMP '1990-02-02 04:05:06.789') AS wks,
    DATEADD(year, -(EXTRACT(year FROM TIMESTAMP '1990-02-02 04:05:06.789')) % 10, DATE_TRUNC(year, TIMESTAMP '1990-02-02 04:05:06.789')) AS dec,
    DATEADD(year, -(EXTRACT(year FROM TIMESTAMP '1990-02-02 04:05:06.789') - 1) % 100, DATE_TRUNC(year, TIMESTAMP '1990-02-02 04:05:06.789')) AS c,
    DATEADD(year, -(EXTRACT(year FROM TIMESTAMP '1990-02-02 04:05:06.789') - 1) % 1000, DATE_TRUNC(year, TIMESTAMP '1990-02-02 04:05:06.789')) AS m;
Copy

WKS

DEC

C

M

1990-01-29 00:00:00.000

1990-01-01 00:00:00.000

1901-01-01 00:00:00.000

1001-01-01 00:00:00.000

For more information please refer to the following documentation in Snowflake:

Known Issues

In Amazon Redshift, the default precision for timestamps is 6 digits (microseconds), while in Snowflake, the default precision is 9 digits (nanoseconds). Due to these differences in precision, it’s important to consider your specific needs when working with timestamps. If you require different precision in either platform, you can use the following options.

  1. Use ALTER SESSION:

     --This example is for 2 digits for precision (FF2).
    ALTER SESSION SET TIMESTAMP_OUTPUT_FORMAT = 'YYYY-MM-DD HH24:MI:SS.FF2';
    
Copy

Please note that depending on the data type used to store the value obtained with DATE_TRUNC(), there may be limitations in precision that could result in a loss of accuracy.

Related EWIs

There are no known issues.

DATEADD

Description

Increments a DATE, TIME, TIMETZ, or TIMESTAMP value by a specified interval.

(RedShift SQL Language Reference DATEADD function)

The DATEADD function in Amazon Redshift adds a specified time interval to a date or timestamp, where datepart is the type of interval (like ‘day’ or ‘month’), interval is the number of units to add (positive or negative), and date is the original date.

Valid datepart in Snowflake for DATEADD:

  • microsecond, microseconds

  • millisecond, milliseconds

  • second, seconds

  • minute, minutes

  • hour, hours

  • day, days

  • week

  • month, months

  • quarter, quarters

  • year, years

Unsupported formats in Snowflake for DATEADD:

  • weeks

  • decade, decades

  • century, centuries

  • millennium, millennia

Grammar Syntax

 DATEADD(datepart, interval, date)
Copy

Sample Source Patterns

Supported date parts:

Input Code:
 SELECT dateadd(year, 1, '2024-02-29') AS D1, dateadd(year, 1, '2023-02-28') AS D2,
        date_add('year'::text, 1::bigint, '2024-02-29 00:00:00'::timestamp without time zone) AS D3;
Copy

D1

D2

D3

2025-03-01 00:00:00.000000

2024-02-28 00:00:00.000000

2025-03-01 00:00:00.000000

Output Code:
 SELECT
DATEADD(day, 1, dateadd(year, 1, '2024-02-29')) AS D1,
DATEADD(year, 1, '2023-02-28') AS D2,
DATEADD('year', 1, '2024-02-29 00:00:00':: TIMESTAMP_NTZ) AS D3;
Copy

D1

D2

D3

2025-03-01 00:00:00.000

2024-02-28 00:00:00.000

2025-03-01 00:00:00.000000

Unsupported date parts

The following transformations are performed to emulate Redshift behavior for these date parts:

  • weeks is converted to its equivalent supported date part, week.

The following date parts are transformed to their equivalent in years:

  • decade, decades : converted to year times ten.

  • century, centuries: converted to year times one hundred.

  • millennium, millennia: converted to year times one thousand.
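The rewrite rules above amount to a small lookup table from the unsupported Redshift date part to a supported Snowflake date part plus a multiplier. The following Python sketch illustrates that mapping; the names and the helper function are illustrative assumptions, not SnowConvert internals.

```python
# Hypothetical sketch of the date-part rewrite rules described above.
# Each unsupported Redshift date part maps to a (snowflake_part, factor) pair.
UNSUPPORTED_DATEPARTS = {
    "weeks": ("week", 1),
    "decade": ("year", 10),
    "decades": ("year", 10),
    "century": ("year", 100),
    "centuries": ("year", 100),
    "millennium": ("year", 1000),
    "millennia": ("year", 1000),
}

def rewrite_dateadd(datepart: str, interval: str, date_expr: str) -> str:
    """Rewrite a Redshift DATEADD call into its Snowflake equivalent."""
    part, factor = UNSUPPORTED_DATEPARTS.get(datepart.lower(), (datepart, 1))
    # Multiply the interval when the part was downgraded to years.
    amount = interval if factor == 1 else f"{interval} * {factor}"
    return f"DATEADD({part.upper() if factor > 1 else part}, {amount}, {date_expr})"

print(rewrite_dateadd("millennium", "num_interval", "'2023-02-28'"))
# → DATEADD(YEAR, num_interval * 1000, '2023-02-28')
```

Note that the multiplication is emitted as an expression (`interval * 1000`) rather than pre-computed, so the rewrite also works when the interval is a column reference, as in the second sample above.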

Input Code:
 SELECT DATEADD(weeks, 1, '2023-02-28') AS wks,
        DATEADD(decade, 1, '2023-02-28') AS dec,
        DATEADD(century, 1, '2023-02-28') AS cen,
        DATEADD(millennium, 1, '2023-02-28') AS mill;
        

SELECT 
    DATEADD(millennium, num_interval, '2023-02-28') AS result
FROM (
    SELECT 5 AS num_interval
);
Copy

wks

dec

cen

mill

2023-03-07 00:00:00.000000

2033-02-28 00:00:00.000000

2123-02-28 00:00:00.000000

3023-02-28 00:00:00.000000

Output Code:
 SELECT
    DATEADD(week, 1, '2023-02-28') AS wks,
    DATEADD(YEAR, 1 * 10, '2023-02-28') AS dec,
    DATEADD(YEAR, 1 * 100, '2023-02-28') AS cen,
    DATEADD(YEAR, 1 * 1000, '2023-02-28') AS mill;


SELECT
    DATEADD(YEAR, num_interval * 1000, '2023-02-28') AS result
FROM (
    SELECT 5 AS num_interval
);
Copy

wks

dec

cen

mill

2023-03-07 00:00:00.000000

2033-02-28 00:00:00.000000

2123-02-28 00:00:00.000000

3023-02-28 00:00:00.000000

Notes

In Amazon Redshift, when you use DATEADD to add years to February 29 of a leap year, the result rolls over to March 1 when the target year is not a leap year. Redshift handles date arithmetic by moving to the nearest valid date: since February 29 does not exist in non-leap years, it defaults to March 1. For example, adding one year to February 29, 2020, results in March 1, 2021.
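The rollover behavior described above can be reproduced with a few lines of Python; this is an emulation sketch of the semantics, not code from either engine.

```python
from datetime import date

def add_years(d: date, n: int) -> date:
    """Add n years, rolling Feb 29 over to Mar 1 when the target year
    is not a leap year (emulating the Redshift behavior described above)."""
    try:
        return d.replace(year=d.year + n)
    except ValueError:  # Feb 29 does not exist in the target year
        return date(d.year + n, 3, 1)

print(add_years(date(2020, 2, 29), 1))  # → 2021-03-01
print(add_years(date(2023, 2, 28), 1))  # → 2024-02-28
```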

Related EWIs

There are no known issues.

DATEDIFF

Description

DATEDIFF returns the difference between the date parts of two date or time expressions.

(RedShift SQL Language Reference DATEDIFF function)

Valid datepart in Snowflake for DATEDIFF:

  • microsecond, microseconds

  • millisecond, milliseconds

  • second, seconds

  • minute, minutes

  • hour, hours

  • day, days

  • week

  • month, months

  • quarter, quarters

  • year, years

Unsupported formats in Snowflake for DATEDIFF:

  • decade, decades

  • century, centuries

  • millennium, millennia

Grammar Syntax

 DATEDIFF( datepart, {date|time|timetz|timestamp}, {date|time|timetz|timestamp} )
Copy

This function is fully supported in Snowflake.

Sample Source Patterns

Input Code:

 SELECT DATEDIFF(year,'2009-01-01','2019-12-31') as year,
    DATEDIFF(month,'2009-01-01','2019-12-31') as month,
    DATEDIFF(day,'2009-01-01','2019-12-31') as day;
    

SELECT DATEDIFF(week,'2009-01-01','2019-12-31') as week,
    DATEDIFF(century,'1009-01-01','2009-12-31') as century,
    DATEDIFF(decade,'1009-01-01','2009-12-31') as decade;
Copy

year

month

day

10

131

4016

week

century

decade

574

10

100

Output Code:
 SELECT DATEDIFF(year,'2009-01-01','2019-12-31') as year,
    DATEDIFF(month,'2009-01-01','2019-12-31') as month,
    DATEDIFF(day,'2009-01-01','2019-12-31') as day;


SELECT DATEDIFF(week,'2009-01-01','2019-12-31') as week,
    DATEDIFF(YEAR, '1009-01-01', '2009-12-31') / 100 as century,
    DATEDIFF(YEAR, '1009-01-01', '2009-12-31') / 10 as decade;
Copy
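The century and decade rewrites rely on DATEDIFF(year, ...) counting calendar-year boundaries crossed, i.e. a plain difference of year numbers, which can then be integer-divided. A minimal Python check of that arithmetic (an emulation sketch, not engine code):

```python
from datetime import date

def datediff_years(start: date, end: date) -> int:
    # DATEDIFF(year, ...) counts year boundaries crossed: a plain
    # difference of the year numbers, regardless of month and day.
    return end.year - start.year

start, end = date(1009, 1, 1), date(2009, 12, 31)
print(datediff_years(start, end) // 100)  # century → 10
print(datediff_years(start, end) // 10)   # decade  → 100
```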

year

month

day

10

131

4016

week

century

decade

574

10

100

Known Issues

No issues were found.

Related EWIs

There are no known issues.

GETDATE

Description

GETDATE returns the current date and time in the current session time zone (UTC by default). It returns the start date or time of the current statement, even when it is within a transaction block. (Redshift SQL Language Reference GETDATE function).

In Snowflake this function provides the current date and time with nanosecond precision (up to 9 digits) and is timezone-aware.

This function is fully supported in Snowflake.

For more information about quoted identifiers in functions, click here.

Grammar Syntax

 GETDATE()
Copy

Sample Source Patterns

Input Code:

 CREATE TABLE table1 (
    id INTEGER,
    date_t DATE DEFAULT getdate(),
    time_t TIME DEFAULT  "getdate"(),
    timestamp_t TIMESTAMP DEFAULT "GETDATE"(),
    timestamptz_t TIMESTAMPTZ DEFAULT getdate()
);

INSERT INTO table1(id) VALUES (1);

SELECT * FROM table1;
Copy

ID

DATE_T

TIME_T

TIMESTAMP_T

TIMESTAMPTZ_T

1

2024-11-20

17:51:00

2024-11-20 17:51:00.000000

2024-11-20 17:51:00.000000 +00:00

Output Code:
 CREATE TABLE table1 (
    id INTEGER,
    date_t DATE DEFAULT getdate() :: DATE,
    time_t TIME DEFAULT GETDATE() :: TIME,
    timestamp_t TIMESTAMP DEFAULT GETDATE(),
    timestamptz_t TIMESTAMP_TZ DEFAULT getdate()
)
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": {  "major": 0,  "minor": 0,  "patch": "0" }, "attributes": {  "component": "redshift",  "convertedOn": "11/20/2024",  "domain": "test" }}';

INSERT INTO table1 (id) VALUES (1);

SELECT * FROM
    table1;
Copy

ID

DATE_T

TIME_T

TIMESTAMP_T

TIMESTAMPTZ_T

1

2024-11-20

17:51:00

2024-11-20 17:51:00.000

2024-11-20 17:51:00.000 +0000

Known Issues

In Amazon Redshift, the default precision for timestamps is 6 digits (microseconds), while in Snowflake, the default precision is 9 digits (nanoseconds). Due to these differences in precision, it’s important to consider your specific needs when working with timestamps. If you require different precision in either platform, you can use the following options.

  1. Use ALTER SESSION:

     --This example is for 2 digits for precision (FF2).
    ALTER SESSION SET TIMESTAMP_OUTPUT_FORMAT = 'YYYY-MM-DD HH24:MI:SS.FF2';
    
Copy

Please note that depending on the data type used to store the value obtained with GETDATE(), there may be limitations in precision that could result in a loss of accuracy.

Related EWIs

There are no known issues.

TIMESTAMP

Description

The TIMESTAMP function is a system function that casts a string value into a timestamp.

This function is fully supported in Snowflake.

Grammar Syntax

 select "timestamp"(VARCHAR);
Copy

Sample Source Patterns

Input Code:

 select "timestamp"('2024-03-01 3:22:33');
Copy

“timestamp”

2024-03-01 03:22:33.000000

Output Code:

 select
TO_TIMESTAMP('2024-03-01 3:22:33');
Copy

TO_TIMESTAMP

2024-03-01 03:22:33.000

Known Issues

In Amazon Redshift, the default precision for timestamps is 6 digits (microseconds), while in Snowflake, the default precision is 9 digits (nanoseconds). Due to these differences in precision, it’s important to consider your specific needs when working with timestamps. If you require different precision in either platform, you can use the following options.

  1. Use ALTER SESSION:

     --This example is for 2 digits for precision (FF2).
    ALTER SESSION SET TIMESTAMP_OUTPUT_FORMAT = 'YYYY-MM-DD HH24:MI:SS.FF2';
    
Copy

Please note that depending on the data type used to store the value obtained with TO_TIMESTAMP(), there may be limitations in precision that could result in a loss of accuracy.

Related EWIs

There are no known issues.

TRUNC

Description

Truncates a TIMESTAMP and returns a DATE.

For more information, please refer to TRUNC function.

Grammar Syntax

 TRUNC(timestamp)
Copy

Note

This function is supported by Snowflake. However, in Snowflake, it truncates a DATE, TIME, or TIMESTAMP value to the specified precision.
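Because of that difference, single-argument TRUNC(timestamp) is rewritten as DATE(TRUNC(timestamp, 'day')): truncate to the start of the day, then cast to a DATE. The net effect, sketched in Python (an illustration of the semantics, not engine code), is simply dropping the time portion:

```python
from datetime import datetime, date

def redshift_trunc(ts: datetime) -> date:
    # TRUNC(timestamp) in Redshift returns a DATE; the Snowflake rewrite
    # DATE(TRUNC(ts, 'day')) is equivalent: drop the time part entirely.
    return ts.date()

print(redshift_trunc(datetime(2024, 5, 9, 8, 50, 57)))  # → 2024-05-09
```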

Sample Source Patterns

Input Code:

 CREATE TABLE test_date_trunc (
    mytimestamp TIMESTAMP,
    mydate DATE,
    mytimestamptz TIMESTAMPTZ
 );

INSERT INTO test_date_trunc VALUES (
  '2024-05-09 08:50:57.891 -0700',
  '2024-05-09',
  '2024-05-09 08:50:57.891 -0700');

SELECT TRUNC(mytimestamp) AS date1,
       TRUNC(mydate) AS date2,
       TRUNC(mytimestamptz::TIMESTAMP) AS date3,
       TRUNC('2024-05-09 08:50:57.891 -0700'::TIMESTAMP) AS date4,
       TRUNC('2024-05-09 08:50:57.891 -0700'::DATE) AS date5,
       CAST(TRUNC('2024-05-09 08:50:57.891 -0700'::TIMESTAMP) AS TIMESTAMP) AS date6
  FROM test_date_trunc;
Copy

DATE1

DATE2

DATE3

DATE4

DATE5

DATE6

2024-05-09

2024-05-09

2024-05-09

2024-05-09

2024-05-09

2024-05-09 00:00:00.000

Output Code:

CREATE TABLE test_date_trunc (
    mytimestamp TIMESTAMP,
    mydate DATE,
    mytimestamptz TIMESTAMP_TZ
)
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": {  "major": 0,  "minor": 0,  "patch": "0" }, "attributes": {  "component": "redshift",  "convertedOn": "11/13/2024",  "domain": "test" }}';

INSERT INTO test_date_trunc
VALUES (
  '2024-05-09 08:50:57.891 -0700',
  '2024-05-09',
  '2024-05-09 08:50:57.891 -0700');

SELECT
    DATE( TRUNC(mytimestamp, 'day')) AS date1,
    DATE(
    TRUNC(mydate, 'day')) AS date2,
    DATE(
    TRUNC(mytimestamptz::TIMESTAMP, 'day')) AS date3,
    DATE(
    TRUNC('2024-05-09 08:50:57.891 -0700'::TIMESTAMP, 'day')) AS date4,
    DATE(
    TRUNC('2024-05-09 08:50:57.891 -0700'::DATE, 'day')) AS date5,
    CAST(DATE(TRUNC('2024-05-09 08:50:57.891 -0700'::TIMESTAMP, 'day')) AS TIMESTAMP) AS date6
FROM
    test_date_trunc;
Copy

DATE1

DATE2

DATE3

DATE4

DATE5

DATE6

2024-05-09

2024-05-09

2024-05-09

2024-05-09

2024-05-09

2024-05-09 00:00:00.000

Related EWIs

There are no known issues.

Window Functions

AVG

Description

The AVG window function returns the average (arithmetic mean) of the input expression values.

(Redshift SQL Language Reference AVG function)

This function is fully supported in Snowflake.

For more information about quoted identifiers in functions, click here.

Grammar Syntax

 AVG ( [ALL ] expression ) OVER
(
[ PARTITION BY expr_list ]
[ ORDER BY order_list 
                        frame_clause ]
)
Copy

Sample Source Patterns

Setup data

 CREATE TABLE example_table (
    my_smallint_column SMALLINT,
    my_integer_column INTEGER,
    my_bigint_column BIGINT,
    my_numeric_column NUMERIC,
    my_decimal_column DECIMAL,
    my_real_column REAL,
    my_double_precision_column DOUBLE PRECISION,
    my_super_column SUPER
);

INSERT INTO example_table (
    my_smallint_column,
    my_integer_column,
    my_bigint_column,
    my_numeric_column,
    my_decimal_column,
    my_real_column,
    my_double_precision_column,
    my_super_column
)
VALUES
    (1, 100, 10000000000, 123.45, 678.90, 3.14, 2.71828, 123),
    (2, 200, 20000000000, 234.56, 789.01, 2.71, 3.14159, 456),
    (3, 300, 30000000000, 345.67, 890.12, 1.41, 1.61803, 789),
    (4, 400, 40000000000, 456.78, 901.23, 1.61, 1.41421, 101112),
    (5, 500, 50000000000, 567.89, 123.45, 2.17, 3.14159, 131415);
Copy
 CREATE TABLE example_table (
    my_smallint_column SMALLINT,
    my_integer_column INTEGER,
    my_bigint_column BIGINT,
    my_numeric_column NUMERIC,
    my_decimal_column DECIMAL,
    my_real_column REAL,
    my_double_precision_column DOUBLE PRECISION,
    my_super_column VARIANT
)
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": {  "major": 0,  "minor": 0,  "patch": "0" }, "attributes": {  "component": "redshift",  "convertedOn": "11/27/2024",  "domain": "test" }}';

INSERT INTO example_table (
    my_smallint_column,
    my_integer_column,
    my_bigint_column,
    my_numeric_column,
    my_decimal_column,
    my_real_column,
    my_double_precision_column,
    my_super_column
)
VALUES
    (1, 100, 10000000000, 123.45, 678.90, 3.14, 2.71828, 123),
    (2, 200, 20000000000, 234.56, 789.01, 2.71, 3.14159, 456),
    (3, 300, 30000000000, 345.67, 890.12, 1.41, 1.61803, 789),
    (4, 400, 40000000000, 456.78, 901.23, 1.61, 1.41421, 101112),
    (5, 500, 50000000000, 567.89, 123.45, 2.17, 3.14159, 131415);
Copy
Input Code:
 SELECT
    my_smallint_column,
    AVG(my_integer_column) OVER (PARTITION BY my_smallint_column) AS avg_integer_column,
    AVG(my_numeric_column) OVER () AS avg_numeric_column_all,
    AVG(my_decimal_column) OVER (PARTITION BY my_smallint_column ORDER BY my_integer_column ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS cumulative_avg_decimal_column
FROM
    example_table
ORDER BY my_smallint_column
LIMIT 3;
Copy
my_smallint_column | avg_integer_column | avg_numeric_column_all | cumulative_avg_decimal_column
1 | 100 | 345 | 679
2 | 200 | 345 | 789
3 | 300 | 345 | 890
Output Code:
 SELECT
    my_smallint_column,
    AVG(my_integer_column) OVER (PARTITION BY my_smallint_column) AS avg_integer_column,
    AVG(my_numeric_column) OVER () AS avg_numeric_column_all,
    AVG(my_decimal_column) OVER (PARTITION BY my_smallint_column ORDER BY my_integer_column ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS cumulative_avg_decimal_column
FROM
    example_table
ORDER BY my_smallint_column
LIMIT 3;
Copy
my_smallint_column | avg_integer_column | avg_numeric_column_all | cumulative_avg_decimal_column
1 | 100 | 345 | 679
2 | 200 | 345 | 789
3 | 300 | 345 | 890

Note

AVG, depending on the data type, can behave differently in terms of rounding and formatting, which may result in different precision or decimal places when comparing Redshift with Snowflake.

Related EWIs

There are no known issues.

COUNT

Description

The COUNT window function counts the rows defined by the expression.

(Redshift SQL Language Reference COUNT function)

This function is fully supported in Snowflake.

For more information about quoted identifiers in functions, click here.

Grammar Syntax

 COUNT ( * | [ ALL ] expression) OVER
(
[ PARTITION BY expr_list ]
[ ORDER BY order_list frame_clause ]
)
Copy

Sample Source Patterns

Setup data

 CREATE TABLE sales_data (
    sale_id INT,
    product_id INT,
    sale_date DATE,
    amount DECIMAL(10, 2)
);

INSERT INTO sales_data (sale_id, product_id, sale_date, amount) VALUES
(1, 101, '2024-01-01', 200.00),
(2, 102, '2024-01-02', 150.00),
(3, 101, '2024-01-03', 300.00),
(4, 101, '2024-01-03', 250.00),
(5, 103, '2024-01-04', 450.00),
(6, 102, '2024-01-05', 100.00),
(7, 104, '2024-01-05', 500.00),
(8, 101, '2024-01-06', 350.00);
Copy
Input Code:
 SELECT
    product_id,
    COUNT(ALL amount) OVER (PARTITION BY product_id) AS count_all_amount,
    "count"(*) OVER() AS total_sales,
    "COUNT"(*) OVER (PARTITION BY product_id ORDER BY sale_date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS cumulative_count
FROM
    sales_data
ORDER BY product_id;
Copy
PRODUCT_ID | COUNT_ALL_AMOUNT | TOTAL_SALES | CUMULATIVE_COUNT
101 | 4 | 8 | 1
101 | 4 | 8 | 2
101 | 4 | 8 | 3
101 | 4 | 8 | 4
102 | 2 | 8 | 1
102 | 2 | 8 | 2
103 | 1 | 8 | 1
104 | 1 | 8 | 1
Output Code:
 SELECT
    product_id,
    COUNT(ALL amount) OVER (PARTITION BY product_id) AS count_all_amount,
    COUNT(*) OVER() AS total_sales,
    COUNT(*) OVER (PARTITION BY product_id ORDER BY sale_date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS cumulative_count
FROM
    sales_data
ORDER BY product_id;
Copy
PRODUCT_ID | COUNT_ALL_AMOUNT | TOTAL_SALES | CUMULATIVE_COUNT
101 | 4 | 8 | 1
101 | 4 | 8 | 2
101 | 4 | 8 | 3
101 | 4 | 8 | 4
102 | 2 | 8 | 1
102 | 2 | 8 | 2
103 | 1 | 8 | 1
104 | 1 | 8 | 1

Known Issues

No issues were found.

Related EWIs

There are no known issues.

DENSE_RANK

Description

The DENSE_RANK window function determines the rank of a value in a group of values, based on the ORDER BY expression in the OVER clause. The DENSE_RANK function differs from RANK in one respect: if two or more rows tie, there is no gap in the sequence of ranked values.

For more information, please refer to DENSE_RANK function.
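The "no gap" behavior that distinguishes DENSE_RANK from RANK can be emulated in a few lines of Python over the sample bushel values used below; this is an illustration of the semantics, not engine code.

```python
def rank_and_dense_rank(values):
    """Compute RANK and DENSE_RANK (descending order) for each value.
    RANK leaves gaps after ties; DENSE_RANK does not."""
    ordered = sorted(set(values), reverse=True)
    dense = {v: i + 1 for i, v in enumerate(ordered)}       # no gaps
    desc = sorted(values, reverse=True)
    plain = {v: desc.index(v) + 1 for v in set(values)}     # first slot of the tie group
    return [(v, plain[v], dense[v]) for v in values]

bushels = [100, 110, 120, 130, 110]
for value, rank, dense_rank in rank_and_dense_rank(bushels):
    print(value, rank, dense_rank)
# 100 ranks 5 with RANK (gap after the 110 tie) but 4 with DENSE_RANK
```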

Grammar Syntax

 DENSE_RANK() OVER
(
[ PARTITION BY expr_list ]
[ ORDER BY order_list ]
) 
Copy

Note

This function is supported by Snowflake. However, the ORDER BY clause is mandatory in Snowflake. If it is not present, an ORDER BY 1 is added to ensure full equivalence.

Sample Source Patterns

Input Code:

 CREATE TABLE corn_production 
(
    farmer_ID INTEGER, 
    state varchar, 
    bushels float
);

INSERT INTO corn_production (farmer_ID, state, bushels) VALUES
    (1, 'Iowa', 100),
    (2, 'Iowa', 110),
    (3, 'Kansas', 120),
    (4, 'Kansas', 130),
    (5, 'Kansas', 110);

SELECT DENSE_RANK() OVER (ORDER BY bushels DESC) AS rank1,
DENSE_RANK() OVER (PARTITION BY state ORDER BY bushels DESC) AS rank2,
DENSE_RANK() OVER () AS rank3,
DENSE_RANK() OVER (PARTITION BY state) AS rank4
FROM corn_production;
Copy

rank1

rank2

rank3

rank4

1

1

1

1

2

2

1

1

3

3

1

1

3

1

1

1

4

2

1

1

Output Code:

 CREATE TABLE corn_production
(
    farmer_ID INTEGER,
    state varchar,
    bushels float
)
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": {  "major": 0,  "minor": 0,  "patch": "0" }, "attributes": {  "component": "redshift",  "convertedOn": "09/27/2024" }}';

INSERT INTO corn_production (farmer_ID, state, bushels) VALUES
    (1, 'Iowa', 100),
    (2, 'Iowa', 110),
    (3, 'Kansas', 120),
    (4, 'Kansas', 130),
    (5, 'Kansas', 110);

SELECT DENSE_RANK() OVER (ORDER BY bushels DESC) AS rank1,
DENSE_RANK() OVER (PARTITION BY state ORDER BY bushels DESC) AS rank2,
DENSE_RANK()
    OVER (
    ORDER BY 1) AS rank3,
DENSE_RANK()
    OVER (PARTITION BY state
    ORDER BY 1) AS rank4
FROM
    corn_production;
Copy

rank1

rank2

rank3

rank4

1

1

1

1

2

2

1

1

3

3

1

1

3

1

1

1

4

2

1

1

Related EWIs

There are no known issues.

LEAD

Description

The LEAD window function returns the values for a row at a given offset below (after) the current row in the partition.

(Redshift SQL Language Reference LEAD window function)

Warning

This function is partially supported in Snowflake. In Redshift, the offset can be a constant integer or an expression that evaluates to an integer; in Snowflake, the offset must be a constant.

For more information about quoted identifiers in functions, click here.

Grammar Syntax

 LEAD (value_expr [, offset ])
[ IGNORE NULLS | RESPECT NULLS ]
OVER ( [ PARTITION BY window_partition ] ORDER BY window_ordering )
Copy
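The IGNORE NULLS option changes how the offset is counted: it skips over NULL values entirely rather than landing on them. The following Python sketch emulates LEAD within one ordered partition (an illustration of the semantics, not engine code):

```python
def lead(values, offset=1, ignore_nulls=False):
    """Emulate the LEAD window function within one ordered partition.
    With IGNORE NULLS, the offset counts only non-NULL values after the row."""
    out = []
    for i in range(len(values)):
        if not ignore_nulls:
            out.append(values[i + offset] if i + offset < len(values) else None)
        else:
            remaining = [v for v in values[i + 1:] if v is not None]
            out.append(remaining[offset - 1] if len(remaining) >= offset else None)
    return out

# Customer 301's sale amounts, ordered by sale_date (from the setup data below).
amounts = [150.00, None, 250.00, 350.00]
print(lead(amounts))                        # → [None, 250.0, 350.0, None]
print(lead(amounts, 1, ignore_nulls=True))  # → [250.0, 250.0, 350.0, None]
```

With RESPECT NULLS (the default) the row after 150.00 is the NULL itself; with IGNORE NULLS the NULL is skipped and 250.00 is returned instead.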

Sample Source Patterns

Setup data

 CREATE TABLE sales (
    sale_id INT,
    customer_id INT,
    sale_date DATE,
    sale_amount DECIMAL(10, 2)
);

INSERT INTO sales (sale_id, customer_id, sale_date, sale_amount)
VALUES
    (1, 301, '2024-01-01', 150.00),
    (2, 301, '2024-01-02', NULL),
    (3, 301, '2024-01-03', 250.00),
    (4, 301, '2024-01-04', 350.00),
    (5, 302, '2024-02-01', 100.00),
    (6, 302, '2024-02-02', 200.00),
    (7, 302, '2024-02-03', NULL),
    (8, 302, '2024-02-04', 300.00);
Copy
Input Code:
 SELECT
    LEAD(sale_amount) OVER (PARTITION BY customer_id ORDER BY sale_date) AS lead,
    LEAD(sale_amount, 2) RESPECT NULLS OVER (PARTITION BY customer_id ORDER BY sale_date) AS lead_respect_null,
    LEAD(sale_amount, 1) IGNORE NULLS OVER (PARTITION BY customer_id ORDER BY sale_date) AS lead_ignore_nulls
FROM sales;
Copy

LEAD

LEAD_RESPECT_NULL

LEAD_IGNORE_NULLS

200.00

null

200.00

null

300.00

300.00

300.00

null

300.00

null

null

null

null

250.00

250.00

250.00

350.00

250.00

350.00

null

350.00

null

null

null

Output Code:
 SELECT
    LEAD(sale_amount) OVER (PARTITION BY customer_id ORDER BY sale_date) AS lead,
    LEAD(sale_amount, 2) RESPECT NULLS OVER (PARTITION BY customer_id ORDER BY sale_date) AS lead_respect_null,
    LEAD(sale_amount, 1) IGNORE NULLS OVER (PARTITION BY customer_id ORDER BY sale_date) AS lead_ignore_nulls
FROM sales;
Copy

LEAD

LEAD_RESPECT_NULL

LEAD_IGNORE_NULLS

200.00

null

200.00

null

300.00

300.00

300.00

null

300.00

null

null

null

null

250.00

250.00

250.00

350.00

250.00

350.00

null

350.00

null

null

null

Known Issues

No issues were found.

Related EWIs

There are no known issues.

RANK

Description

The RANK window function determines the rank of a value in a group of values, based on the ORDER BY expression in the OVER clause. If the optional PARTITION BY clause is present, the rankings are reset for each group of rows.

For more information, please refer to RANK function.

Grammar Syntax

 RANK () OVER
(
[ PARTITION BY expr_list ]
[ ORDER BY order_list ]
) 
Copy

Note

This function is supported by Snowflake. However, the ORDER BY clause is mandatory in Snowflake. If it is not present, an ORDER BY 1 is added to ensure full equivalence.

Sample Source Patterns

Input Code:

 CREATE TABLE corn_production 
(
    farmer_ID INTEGER, 
    state varchar, 
    bushels float
);

INSERT INTO corn_production (farmer_ID, state, bushels) VALUES
    (1, 'Iowa', 100),
    (2, 'Iowa', 110),
    (3, 'Kansas', 120),
    (4, 'Kansas', 130),
    (5, 'Kansas', 110);

SELECT RANK() OVER (ORDER BY bushels DESC) AS rank1,
RANK() OVER (PARTITION BY state ORDER BY bushels DESC) AS rank2,
RANK() OVER () AS rank3,
RANK() OVER (PARTITION BY state) AS rank4
FROM corn_production;
Copy

rank1

rank2

rank3

rank4

1

1

1

1

2

2

1

1

3

3

1

1

3

1

1

1

5

2

1

1

Output Code:

 CREATE TABLE corn_production
(
    farmer_ID INTEGER,
    state varchar,
    bushels float
)
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": {  "major": 0,  "minor": 0,  "patch": "0" }, "attributes": {  "component": "redshift",  "convertedOn": "09/27/2024" }}';

INSERT INTO corn_production (farmer_ID, state, bushels) VALUES
    (1, 'Iowa', 100),
    (2, 'Iowa', 110),
    (3, 'Kansas', 120),
    (4, 'Kansas', 130),
    (5, 'Kansas', 110);

SELECT RANK() OVER (ORDER BY bushels DESC) AS rank1,
RANK() OVER (PARTITION BY state ORDER BY bushels DESC) AS rank2,
RANK()
    OVER (
    ORDER BY 1) AS rank3,
RANK()
    OVER (PARTITION BY state
    ORDER BY 1) AS rank4
FROM
    corn_production;
Copy

rank1

rank2

rank3

rank4

1

1

1

1

2

2

1

1

3

3

1

1

3

1

1

1

5

2

1

1

Related EWIs

There are no known issues.

ROW_NUMBER

Description

The ROW_NUMBER window function assigns an ordinal number of the current row within a group of rows, counting from 1, based on the ORDER BY expression in the OVER clause. (RedShift SQL Language Reference ROW_NUMBER window function)

Grammar Syntax

 ROW_NUMBER() OVER(
  [ PARTITION BY expr_list ]
  [ ORDER BY order_list ]
)
Copy

This function is fully supported in Snowflake.

The ORDER BY clause is mandatory in Snowflake. If it is not present, an ORDER BY 1 is added to ensure full equivalence.

For more information about quoted identifiers in functions, click here.

Sample Source Patterns

Setup data

 CREATE TABLE corn_production 
(
    farmer_ID INTEGER, 
    state varchar, 
    bushels float
);

INSERT INTO corn_production (farmer_ID, state, bushels) VALUES
    (1, 'Iowa', 100),
    (2, 'Iowa', 110),
    (3, 'Kansas', 120),
    (4, 'Kansas', 130),
    (5, 'Kansas', 110);
Copy
Input Code:
 SELECT  ROW_NUMBER()  OVER ( ORDER BY bushels DESC) AS row1,
        ROW_NUMBER()  OVER ( PARTITION BY state ORDER BY bushels DESC) AS row2,
        ROW_NUMBER()  OVER () AS row3,
        ROW_NUMBER()  OVER ( PARTITION BY state) AS row4
FROM corn_production;
Copy

ROW1

ROW2

ROW3

ROW4

1

1

1

1

2

2

2

2

4

3

3

3

3

1

4

1

5

2

5

2

Output Code:

 SELECT  ROW_NUMBER()  OVER ( ORDER BY bushels DESC) AS row1,
        ROW_NUMBER()  OVER ( PARTITION BY state ORDER BY bushels DESC) AS row2,
        ROW_NUMBER()
        OVER (
        ORDER BY
      1) AS row3,
        ROW_NUMBER()
        OVER ( PARTITION BY state
        ORDER BY
      1) AS row4
FROM
        corn_production;
Copy

ROW1

ROW2

ROW3

ROW4

1

1

1

1

2

2

2

2

4

3

3

3

3

1

4

1

5

2

5

2

Known Issues

No issues were found.

Related EWIs

There are no known issues.