Project definition files¶
A project definition file called snowflake.yml declares a directory as a Snowflake Native App project. It is a version-controlled file that resides at the root of a Snowflake Native App project directory and can be created either manually or by Snowflake CLI as part of project initialization. Even if you choose to use your own independent project structure, as long as you provide this structured file at the root of the directory, Snowflake CLI can discover the relevant files and carry out its functionality as usual.
For Native Apps, your snowflake.yml would look similar to the following:
definition_version: 2
entities:
  pkg:
    type: application package
    identifier: <name_of_app_pkg>
    stage: app_src.stage
    manifest: app/manifest.yml
    artifacts:
      - src: app/*
        dest: ./
      - src: src/module-add/target/add-1.0-SNAPSHOT.jar
        dest: module-add/add-1.0-SNAPSHOT.jar
      - src: src/module-ui/src/*
        dest: streamlit/
    meta:
      role: <your_app_pkg_owner_role>
      warehouse: <your_app_pkg_warehouse>
      post_deploy:
        - sql_script: scripts/any-provider-setup.sql
        - sql_script: scripts/shared-content.sql
  app:
    type: application
    identifier: <name_of_app>
    from:
      target: pkg
    debug: <true|false>
    meta:
      role: <your_app_owner_role>
      warehouse: <your_app_warehouse>
Common entity properties¶
The following list describes common properties available for project definition entities for Native Apps. See Specify entities for more information on project definition entities.

- type (required, string): The type of entity to manage. For Snowflake Native App, valid values are: application package, application.

- identifier (optional, string): Optional Snowflake identifier for the entity; both unquoted and quoted identifiers are supported. To use quoted identifiers, include the surrounding quotes in the YAML value (for example, '"My Package"'), as shown in the sketch after this list. If not specified, the entity ID in the project definition is used as the identifier.

- meta.warehouse (optional, string): Warehouse used to run the scripts provided as part of meta.post_deploy. Default: the warehouse specified for the connection in the Snowflake CLI configuration file.

- meta.role (optional, string): Role to use when creating the entity and provider-side objects. Note: If you do not specify a role, Snowflake CLI attempts to use the default role assigned to your user in your Snowflake account. Typically, you specify this value in the snowflake.local.yml overrides file. Default: the role specified in the Snowflake CLI connection.

- meta.post_deploy (optional, sequence): List of SQL scripts to execute after the entity has been created. The following example shows how to define these scripts in the project definition file:

      definition_version: 2
      entities:
        myapp_pkg:
          type: application package
          ...
          meta:
            post_deploy:
              - sql_script: scripts/post_deploy1.sql
              - sql_script: scripts/post_deploy2.sql

  These scripts are invoked by commands that create or update an entity, such as snow app run. You can also use templates in the post-deploy SQL scripts, as shown in the following sample script content:

      GRANT reference_usage on database provider_data to share in entity <% fn.str_to_id(ctx.entities.myapp_pkg.identifier) %>

- meta.use_mixins (optional, sequence): Names of mixins to apply to this entity. See Project mixins for more information.
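For example, a minimal sketch of an application package entity that uses a quoted identifier together with the common meta properties (the entity, role, and warehouse names here are hypothetical placeholders) might look like the following:

# Sketch only: names are placeholders, not required values.
entities:
  pkg:
    type: application package
    identifier: '"My Application Package"'   # surrounding quotes kept inside the YAML string
    manifest: app/manifest.yml
    artifacts:
      - src: app/*
        dest: ./
    meta:
      role: pkg_owner_role
      warehouse: pkg_warehouse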
Application package entity properties¶
The following list describes properties available for application package entities for Native Apps. See Specify entities for more information on project definition entities.

- type (required, string): Must be application package.

- manifest (required, string): The location of the Snowflake Native App manifest.yml file.

- deploy_root (optional, string): Subdirectory at the root of your project where the build step copies the artifacts. Once copied to this location, you can deploy them to a Snowflake stage. Default: output/deploy/

- generated_root (optional, string): Subdirectory of the deploy root where Snowflake CLI writes generated files. Default: __generated/

- stage (optional, string): Identifier of the stage that stores the application artifacts. The value uses the form schema_name.stage_name. Default: app_src.stage

- artifacts (required, sequence): List of file source and destination pairs to add to the deploy root, as well as an optional Snowpark annotation processor. You can use the following artifact properties: src, dest, and processors. If dest is not specified, the source path is used as the destination. You can also pass in a string for each item instead of a src/dest pair.

  Example without a processor:

      pkg:
        artifacts:
          - src: app/*
            dest: ./
          - src: streamlit/*
            dest: streamlit/
          - src: src/resources/images/snowflake.png
            dest: streamlit/

  Example with a processor:

      pkg:
        artifacts:
          - src: app/*
            dest: ./
            processors:
              - name: snowpark
                properties:
                  env:
                    type: conda
                    name: <conda_name>

- distribution (optional, string): Distribution of the application package created by the Snowflake CLI. When running snow app run, Snowflake CLI warns you if this value does not match the distribution of the existing application package. Default: internal

- scratch_stage (optional, string): Identifier of the stage that stores temporary scratch data used by Snowflake CLI. The value uses the form schema_name.stage_name. Default: app_src.stage_snowflake_cli_scratch
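As an illustrative sketch (the values shown are examples, not required defaults), an application package entity that sets these optional properties explicitly might look like the following:

# Sketch only: identifiers and stage names are placeholders.
entities:
  pkg:
    type: application package
    identifier: myapp_pkg
    manifest: app/manifest.yml
    stage: app_src.stage
    deploy_root: output/deploy/
    generated_root: __generated/
    distribution: internal
    scratch_stage: app_src.cli_scratch   # hypothetical scratch stage name
    artifacts:
      - src: app/*
        dest: ./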
Application entity properties¶
The following list describes properties available for application entities for Native Apps. See Specify entities for more information on project definition entities.

- type (required, string): Must be application.

- from.target (required, string): Application package entity from which to create this application entity. In the following example, the application is created from the application package entity named my_pkg:

      from:
        target: my_pkg

- debug (optional, boolean): Whether to enable debug mode when using a named stage to create an application. Default: not set.
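For illustration, a minimal sketch of an application entity (the entity IDs and role name here are hypothetical) could look like the following:

# Sketch only: pkg refers to an application package entity defined elsewhere
# in the same snowflake.yml.
entities:
  app:
    type: application
    identifier: myapp
    from:
      target: pkg
    debug: false
    meta:
      role: app_owner_role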
More information about artifacts processors¶
If you include the artifacts.processors field in the project definition file, the snow app bundle command invokes custom processing for Python code files in the src directory or file. The following sections describe the supported processors.
Snowpark processor¶
One of the processors supported by Snowflake CLI is snowpark, which applies Snowpark annotation processing to Python files. The following shows the basic structure and syntax for different processing environments:
- To execute code in a conda environment, use the following:

      pkg:
        artifacts:
          - src: <some_src>
            dest: <some_dest>
            processors:
              - name: snowpark
                properties:
                  env:
                    type: conda
                    name: <conda_name>

  where <conda_name> is the name of the conda environment containing the Python interpreter and the Snowpark library you want to use for Snowpark annotation processing.

- To execute code in a Python virtual environment, use the following:

      pkg:
        artifacts:
          - src: <some_src>
            dest: <some_dest>
            processors:
              - name: snowpark
                properties:
                  env:
                    type: venv
                    path: <venv_path>

  where <venv_path> is the path of the Python virtual environment containing the Python interpreter and the Snowpark library you want to use for Snowpark annotation processing. The path can be absolute or relative to the project directory.

- To execute code in the currently active environment, use any of the following equivalent definitions:

      pkg:
        artifacts:
          - src: <some_src>
            dest: <some_dest>
            processors:
              - name: snowpark
                properties:
                  env:
                    type: current

  or

      pkg:
        artifacts:
          - src: <some_src>
            dest: <some_dest>
            processors:
              - name: snowpark

  or

      pkg:
        artifacts:
          - src: <some_src>
            dest: <some_dest>
            processors:
              - snowpark
For more information about custom processing, see Automatic SQL code generation and the snow app bundle command.
Templates processor¶
Snowflake Native App projects support templates in arbitrary files, which lets you expand templates in all files in an artifact’s src directory.

You can enable this feature by including a templates processor in the desired artifacts definition, as shown in the following example:
definition_version: 2
entities:
  pkg:
    type: application package
    identifier: myapp_pkg
    artifacts:
      - src: app/*
        dest: ./
        processors:
          - templates
    manifest: app/manifest.yml
  app:
    type: application
    identifier: myapp_<% fn.get_username() %>
    from:
      target: pkg
When Snowflake CLI uploads the files to a stage, it automatically expands the templates before uploading them. For example, suppose your application contained an app/README.md file with the following content that includes the <% ctx.entities.pkg.identifier %> template:

This is a README file for application package <% ctx.entities.pkg.identifier %>.
The template is then expanded to the following before uploading the file to a stage:
This is a README file for application package myapp_pkg.
Project definition overrides¶
Though your project directory must have a snowflake.yml file, you can customize the behavior of Snowflake CLI by providing local overrides to snowflake.yml, such as a new role to test out your own application package. These overrides must be placed in a snowflake.local.yml file that lives beside the base project definition. Snowflake suggests that you add this file to your .gitignore file so it won’t be version-controlled by git. All templates provided by Snowflake already include it in the .gitignore file.

This overrides file must live in the same location as your snowflake.yml file.
The snowflake.local.yml file shares the same schema as snowflake.yml, except that every value that was required is now optional, in addition to the already optional ones. The following shows a sample snowflake.local.yml file:
entities:
  pkg:
    meta:
      role: <your_app_pkg_owner_role>
      name: <name_of_app_pkg>
      warehouse: <your_app_pkg_warehouse>
  app:
    debug: <true|false>
    meta:
      role: <your_app_owner_role>
      name: <name_of_app>
      warehouse: <your_app_warehouse>
Every snow app command prioritizes the parameters in this file over those set in the base snowflake.yml configuration file. Sensible defaults already provide isolation between developers using the same Snowflake account to develop the same application project, so if you are just getting started, we suggest not including an overrides file.
The final definition schema obtained after overriding snowflake.yml with snowflake.local.yml is called the resolved project definition.
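For illustration only, the following sketch (entity and role names are hypothetical) shows how resolution works: the base file sets a role for the pkg entity, the local file overrides it, and the resolved project definition uses the local value while keeping everything else from the base file.

# snowflake.yml (base project definition)
entities:
  pkg:
    type: application package
    manifest: app/manifest.yml
    artifacts:
      - src: app/*
        dest: ./
    meta:
      role: shared_pkg_role

# snowflake.local.yml (local overrides)
entities:
  pkg:
    meta:
      role: my_dev_role

# Resolved project definition: pkg.meta.role is my_dev_role; all other values
# come from the base snowflake.yml.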
Limitations¶
Currently, Snowflake CLI does not support:

- Multiple override files.

- A blank override file. Only create this file if you want to override a value from snowflake.yml.