snowflake.ml.jobs.submit_file

snowflake.ml.jobs.submit_file(file_path: str, compute_pool: str, *, stage_name: str, args: Optional[list[str]] = None, env_vars: Optional[dict[str, str]] = None, pip_requirements: Optional[list[str]] = None, external_access_integrations: Optional[list[str]] = None, query_warehouse: Optional[str] = None, spec_overrides: Optional[dict[str, Any]] = None, num_instances: Optional[int] = None, enable_metrics: bool = False, database: Optional[str] = None, schema: Optional[str] = None, session: Optional[Session] = None) → MLJob[None]

Submit a Python file as a job to the compute pool.

Parameters:
  • file_path – The path to the file containing the source code for the job.

  • compute_pool – The compute pool to use for the job.

  • stage_name – The name of the stage where the job payload will be uploaded.

  • args – A list of arguments to pass to the job.

  • env_vars – Environment variables to set in the container.

  • pip_requirements – A list of pip requirements for the job.

  • external_access_integrations – A list of external access integrations.

  • query_warehouse – The query warehouse to use. Defaults to the session warehouse.

  • spec_overrides – Custom service specification overrides to apply.

  • num_instances – The number of instances to use for the job. If not specified, a single-node job is created.

  • enable_metrics – Whether to enable metrics publishing for the job.

  • database – The database to use.

  • schema – The schema to use.

  • session – The Snowpark session to use. If not specified, the active session is used.

Returns:

An MLJob object representing the submitted job.
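
A minimal usage sketch follows, assuming an existing compute pool and stage. The pool, stage, and file names are placeholders, and the wait() and get_logs() calls on the returned MLJob are assumptions not documented in this section.

from snowflake.ml.jobs import submit_file

# Submit train.py to run on an existing compute pool; the job payload is
# uploaded to the given stage. Names below are placeholders.
job = submit_file(
    "./train.py",
    "MY_COMPUTE_POOL",
    stage_name="@ML_JOB_STAGE",
    args=["--epochs", "10"],
    env_vars={"LOG_LEVEL": "INFO"},
    pip_requirements=["xgboost==2.0.3"],
    num_instances=1,
)

# The returned MLJob represents the submitted job; wait() and get_logs()
# are assumed MLJob methods for blocking on completion and fetching output.
job.wait()
print(job.get_logs())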