Hugging Face pipeline¶
The Snowflake Model Registry supports any Hugging Face model defined as a transformer that can be loaded with the `transformers.pipeline` method.
Use one of the following methods to log a Hugging Face model to the Model Registry:
Import and deploy a model from Hugging Face using Snowsight. See Import and deploy models from an external service for instructions.
Create a `snowflake.ml.model.models.huggingface.TransformersPipeline` instance and call `log_model()`.

Important

- If you don’t specify a `compute_pool_for_log` argument, the model is logged using the default CPU compute pool.
- If you specify a `compute_pool_for_log` argument, the model is logged using the specified compute pool.
- If you set the `compute_pool_for_log` argument to `None`, the model files are downloaded locally and then uploaded to the model registry. This requires `huggingface-hub` to be installed.
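The second method can be sketched as follows. The `TransformersPipeline` import path, `log_model()`, and `compute_pool_for_log` come from this section; the constructor arguments, the `Registry` usage, and names such as `bloom_560m` and `my_cpu_pool` are assumptions that should be checked against your `snowflake-ml-python` version.

```python
# Sketch: log a Hugging Face pipeline model to the Model Registry.
# Assumes an active Snowpark `session`; argument names other than
# compute_pool_for_log are illustrative, not authoritative.
from snowflake.ml.model.models.huggingface import TransformersPipeline
from snowflake.ml.registry import Registry

registry = Registry(session=session)

pipeline_model = TransformersPipeline(
    model="bigscience/bloom-560m",  # Hugging Face repo ID (assumed argument name)
    task="text-generation",
)

model_version = registry.log_model(
    pipeline_model,
    model_name="bloom_560m",
    version_name="v1",
    compute_pool_for_log="my_cpu_pool",  # omit to use the default CPU compute pool
)
```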
Load the model from Hugging Face in memory and log it to Model Registry:
If you are using Snowflake Notebooks, in order to download the weights of the model, you need to have an external access integration attached to your notebook. This integration is required to allow egress to the following hosts:
- huggingface.co
- hub-ci.huggingface.co
- cdn-lfs-us-1.hf.co
- cdn-lfs-eu-1.hf.co
- cdn-lfs.hf.co
- transfer.xethub.hf.co
- cas-server.xethub.hf.co
- cas-bridge.xethub.hf.co
Note
This list includes only the hosts required for accessing Hugging Face and may change at any time. Your model may require artifacts from other sources; those hosts should be added to the network rule as allowed for egress.
The following example creates a new external access integration huggingface_network_rule for use with a Notebook:
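A sketch of that setup from Python, assuming an active Snowpark `session`. The network rule name `huggingface_network_rule` comes from this section; the integration name `huggingface_access_integration` and port 443 are assumptions.

```python
# Build the SQL for a network rule and an external access integration that
# allow egress to the Hugging Face hosts listed above.
hf_hosts = [
    "huggingface.co",
    "hub-ci.huggingface.co",
    "cdn-lfs-us-1.hf.co",
    "cdn-lfs-eu-1.hf.co",
    "cdn-lfs.hf.co",
    "transfer.xethub.hf.co",
    "cas-server.xethub.hf.co",
    "cas-bridge.xethub.hf.co",
]

value_list = ", ".join(f"'{host}:443'" for host in hf_hosts)

create_rule_sql = f"""
CREATE OR REPLACE NETWORK RULE huggingface_network_rule
  MODE = EGRESS
  TYPE = HOST_PORT
  VALUE_LIST = ({value_list})
"""

create_integration_sql = """
CREATE OR REPLACE EXTERNAL ACCESS INTEGRATION huggingface_access_integration
  ALLOWED_NETWORK_RULES = (huggingface_network_rule)
  ENABLED = TRUE
"""

# To apply these, run them with an active Snowpark session:
# session.sql(create_rule_sql).collect()
# session.sql(create_integration_sql).collect()
```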
See Creating and using an external access integration for more information.
Once your external access integration is created, attach it to your notebook to gain access to the Hugging Face model repository so that the model’s weights and configuration can be downloaded. See Set up external access for Snowflake Notebooks for more information.
Model Registry API¶
When calling log_model(), the options dictionary supports the following keys:
| Option key | Description | Type |
|---|---|---|
| `target_methods` | A list of methods available on the model object. Hugging Face models use the object’s `__call__` method. | List of strings |
| `cuda_version` | The version of the CUDA runtime to be used when deploying to a platform with a GPU. If set to `None`, the model cannot be deployed to a platform with a GPU. | String |
The model registry infers the signatures argument if the pipeline contains a task from the following list:
question-answering (single output, multiple outputs)
text-classification (single output, multiple outputs)
sentiment-analysis (single output, multiple outputs)
translation_xx_to_yy, where `xx` and `yy` are two-letter language codes defined in ISO 639-1
Note
Task names are case-sensitive.
The sample_input_data argument to log_model is ignored for Hugging Face models. Specify the signatures argument when logging a Hugging Face model whose task is not in the preceding list so that the registry knows the signatures of the target methods.
To see the inferred signature, call the show_functions() method. This signature gives you the required types and column names for model function input, as well as the format of its output. The following example shows the signature for the model bigscience/bloom-560m with a task of text-generation:
The following example shows how to invoke a model using the previous signature:
Usage notes¶
Many Hugging Face models are large and don’t fit in a standard warehouse. Use a Snowpark-optimized warehouse or choose a smaller version of the model. For example, an alternative to the `Llama-2-70b-chat-hf` model is `Llama-2-7b-chat-hf`.

Snowflake warehouses do not have GPUs. Use only CPU-optimized Hugging Face models.
Some Hugging Face transformers return an array of dictionaries per input row. The model registry converts this array of dictionaries to a string containing a JSON representation of the array. For example, multi-output Question Answering output looks similar to this:
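Such a string can be decoded back into Python objects with the standard library; the payload below is illustrative:

```python
import json

# Illustrative multi-output question-answering result as returned in a
# single output cell: a JSON string encoding a list of dictionaries.
raw_output = (
    '[{"score": 0.61, "start": 11, "end": 18, "answer": "Chicago"},'
    ' {"score": 0.22, "start": 0, "end": 5, "answer": "Sears"}]'
)

candidates = json.loads(raw_output)       # back to a list of dicts
best = max(candidates, key=lambda c: c["score"])
print(best["answer"])                     # -> Chicago
```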
Example¶
Result:
Inferred signatures for Hugging Face pipelines¶
This section describes the inferred signatures for supported Hugging Face pipelines, including a description and example of the required inputs and expected outputs. All inputs and outputs are Snowpark DataFrames.
Fill-mask pipeline¶
A pipeline whose task is “fill-mask” has the following inputs and outputs.
Inputs¶
inputs: A string where there is a mask to fill.
Example:
Outputs¶
- `outputs`: A string that contains a JSON representation of a list of objects, each of which may contain keys such as `score`, `token`, `token_str`, or `sequence`. For details, see FillMaskPipeline.
Example:
Code Example¶
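A minimal, locally runnable sketch of the input and output shapes described above; the payload values are illustrative, not real model output.

```python
import json

# Input row for the fill-mask pipeline: one string column named "inputs".
input_rows = [{"inputs": "The capital of France is [MASK]."}]

# Illustrative value of the "outputs" column for that row: a JSON string
# encoding a list of candidate fills.
raw_output = json.dumps([
    {"score": 0.92, "token": 3000, "token_str": "paris",
     "sequence": "the capital of france is paris."},
    {"score": 0.03, "token": 3007, "token_str": "lyon",
     "sequence": "the capital of france is lyon."},
])

fills = json.loads(raw_output)
top = fills[0]                 # candidates arrive best-first
print(top["token_str"])        # -> paris
```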
Token classification¶
A pipeline whose task is “ner” or “token-classification” has the following inputs and outputs.
Inputs¶
inputs: A string that contains the tokens to be classified.
Example:
Outputs¶
- `outputs`: A string that contains a JSON representation of a list of result objects, each of which may contain keys such as `entity`, `score`, `index`, `word`, `name`, `start`, or `end`. For details, see TokenClassificationPipeline.
Example:
Code Example¶
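A minimal, locally runnable sketch of the shapes described above; the tagged entities are illustrative, not real model output.

```python
import json

# Input row: one string column named "inputs" with the text to tag.
input_rows = [{"inputs": "My name is Izumi and I live in Tokyo, Japan."}]

# Illustrative "outputs" value: a JSON string encoding one result object
# per recognized token span.
raw_output = json.dumps([
    {"entity": "I-PER", "score": 0.997, "index": 4, "word": "Izumi",
     "start": 11, "end": 16},
    {"entity": "I-LOC", "score": 0.999, "index": 9, "word": "Tokyo",
     "start": 31, "end": 36},
])

entities = json.loads(raw_output)
locations = [e["word"] for e in entities if e["entity"].endswith("LOC")]
print(locations)  # -> ['Tokyo']
```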
Question answering (single output)¶
A pipeline whose task is “question-answering”,
where top_k is either unset or set to 1, has the following inputs and outputs.
Inputs¶
- `question`: A string that contains the question to answer.
- `context`: A string that may contain the answer.
Example:
Outputs¶
- `score`: A floating-point confidence score from 0.0 to 1.0.
- `start`: The integer index of the first token of the answer in the context.
- `end`: The integer index of the last token of the answer in the original context.
- `answer`: A string that contains the found answer.
Example:
Code Example¶
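A minimal sketch of the single-output shapes described above; because `top_k` is unset or 1, the output arrives as flat columns rather than a JSON string. All values are illustrative.

```python
# Input row for single-output question answering: "question" and "context".
input_rows = [{
    "question": "Which city hosted the event?",
    "context": "The annual developer event was hosted in Austin in 2019.",
}]

# Illustrative output row: flat score/start/end/answer columns.
output_row = {"score": 0.61, "start": 41, "end": 47, "answer": "Austin"}

span = (output_row["start"], output_row["end"])
print(output_row["answer"], span)  # -> Austin (41, 47)
```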
Question answering (multiple outputs)¶
A pipeline whose task is “question-answering”,
where top_k is set and is larger than 1, has the following inputs and outputs.
Inputs¶
- `question`: A string that contains the question to answer.
- `context`: A string that may contain the answer.
Example:
Outputs¶
- `outputs`: A string that contains a JSON representation of a list of result objects, each of which may contain keys such as `score`, `start`, `end`, or `answer`.
Example:
Code Example¶
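A minimal, locally runnable sketch of the multiple-output shapes described above; the candidate answers are illustrative.

```python
import json

# Input row: same "question"/"context" columns as the single-output case.
input_rows = [{
    "question": "Which city hosted the event?",
    "context": "The event moved from Boston to Austin in 2019.",
}]

# Illustrative "outputs" value when top_k > 1: a JSON string encoding one
# result object per candidate answer.
raw_output = json.dumps([
    {"score": 0.55, "start": 31, "end": 37, "answer": "Austin"},
    {"score": 0.40, "start": 21, "end": 27, "answer": "Boston"},
])

ranked = sorted(json.loads(raw_output), key=lambda r: r["score"], reverse=True)
print([r["answer"] for r in ranked])  # -> ['Austin', 'Boston']
```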
Summarization¶
A pipeline whose task is “summarization”,
where return_tensors is False or unset, has the following inputs and outputs.
Inputs¶
documents: A string that contains text to summarize.
Example:
Outputs¶
- `summary_text`: A string that contains the generated summary or, if `num_return_sequences` is greater than 1, a string that contains a JSON representation of a list of results, each of which is a dictionary that contains fields including `summary_text`.
Example:
Code Example¶
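Because `summary_text` can hold either a plain string or a JSON list depending on `num_return_sequences`, a small helper that handles both cases can be useful; this is a sketch, and the sample payloads are illustrative.

```python
import json

def parse_summary(summary_text: str):
    """Return a list of summaries whether the cell holds a plain string
    (num_return_sequences == 1) or a JSON list of result dictionaries."""
    try:
        results = json.loads(summary_text)
    except json.JSONDecodeError:
        return [summary_text]          # plain string: single summary
    if isinstance(results, list):
        return [r["summary_text"] for r in results]
    return [summary_text]

print(parse_summary("A short summary."))
print(parse_summary('[{"summary_text": "One."}, {"summary_text": "Two."}]'))
```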
Table question answering¶
A pipeline whose task is “table-question-answering” has the following inputs and outputs.
Inputs¶
- `query`: A string that contains the question to be answered.
- `table`: A string that contains a JSON-serialized dictionary in the form `{column -> [values]}`, representing the table that may contain an answer.
Example:
Outputs¶
- `answer`: A string that contains a possible answer.
- `coordinates`: A list of integers that represent the coordinates of the cells where the answer was located.
- `cells`: A list of strings that contain the content of the cells where the answer was located.
- `aggregator`: A string that contains the name of the aggregator used.
Example:
Code Example¶
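A minimal, locally runnable sketch showing how the `table` input column is serialized; the output row is illustrative, not real model output.

```python
import json

# The "table" input column is a JSON-serialized {column -> [values]} dict.
table = {
    "City": ["Paris", "London", "Berlin"],
    "Population": ["2100000", "8900000", "3700000"],
}

input_rows = [{
    "query": "Which city has the largest population?",
    "table": json.dumps(table),
}]

# Illustrative output columns for that row:
output_row = {
    "answer": "London",
    "coordinates": [1, 0],
    "cells": ["London"],
    "aggregator": "NONE",
}

assert json.loads(input_rows[0]["table"]) == table  # round-trips cleanly
print(output_row["answer"])  # -> London
```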
Text classification (single output)¶
A pipeline whose task is
“text-classification” or “sentiment-analysis”,
where top_k is not set or is None,
has the following inputs and outputs.
Inputs¶
- `text`: A string to classify.
- `text_pair`: A string to classify along with `text`, used with models that compute text similarity. Leave empty if the model does not use it.
Example:
Outputs¶
- `label`: A string that represents the classification label of the text.
- `score`: A floating-point confidence score from 0.0 to 1.0.
Example:
Code Example¶
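A minimal sketch of the single-output shapes described above; the classification result is illustrative.

```python
# Input rows: "text" plus an optional "text_pair" column. Leave text_pair
# empty ("") for models that do not compute text similarity.
input_rows = [
    {"text": "I liked this film a lot.", "text_pair": ""},
]

# Illustrative output columns when top_k is None: flat label/score.
output_row = {"label": "POSITIVE", "score": 0.98}

assert 0.0 <= output_row["score"] <= 1.0
print(output_row["label"])  # -> POSITIVE
```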
Text classification (multiple output)¶
A pipeline whose task is
“text-classification” or “sentiment-analysis”,
where top_k is set to a number,
has the following inputs and outputs.
Note
A text classification task is considered multiple-output if top_k is set to any number, even if that number is 1.
To get a single output, use a top_k value of None.
Inputs¶
- `text`: A string to classify.
- `text_pair`: A string to classify along with `text`, used with models that compute text similarity. Leave empty if the model does not use it.
Example:
Outputs¶
- `outputs`: A string that contains a JSON representation of a list of results, each of which contains fields that include `label` and `score`.
Example:
Code Example¶
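A minimal, locally runnable sketch of the multiple-output shapes described above; the class scores are illustrative.

```python
import json

input_rows = [{"text": "An uneven but charming movie.", "text_pair": ""}]

# Illustrative "outputs" value when top_k is set: a JSON string with one
# {label, score} object per class.
raw_output = json.dumps([
    {"label": "POSITIVE", "score": 0.64},
    {"label": "NEGATIVE", "score": 0.36},
])

scores = {r["label"]: r["score"] for r in json.loads(raw_output)}
print(scores["POSITIVE"] > scores["NEGATIVE"])  # -> True
```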
Text-to-text generation¶
A pipeline whose task is
“text2text-generation”,
where return_tensors is False or unset,
has the following inputs and outputs.
Inputs¶
inputs: A string that contains a prompt.
Example:
Outputs¶
- `generated_text`: A string that contains the generated text if `num_return_sequences` is 1 or, if `num_return_sequences` is greater than 1, a string representation of a JSON list of result dictionaries that contain fields including `generated_text`.
Example:
Code Example¶
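As with summarization, the `generated_text` column can hold either a plain string or a JSON list depending on `num_return_sequences`; this helper is a sketch, and the sample payloads are illustrative.

```python
import json

input_rows = [{"inputs": "Translate to French: How are you?"}]

def parse_generated(generated_text: str):
    """Handle both shapes of the generated_text column: a plain string
    when num_return_sequences == 1, else a JSON list of dictionaries."""
    try:
        results = json.loads(generated_text)
    except json.JSONDecodeError:
        return [generated_text]
    if isinstance(results, list):
        return [r["generated_text"] for r in results]
    return [generated_text]

print(parse_generated("Comment allez-vous ?"))
print(parse_generated('[{"generated_text": "Comment vas-tu ?"}]'))
```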
Note
Text-to-text generation pipelines where return_tensors is True are not supported.
Translation generation¶
A pipeline whose task is
“translation”,
where return_tensors is False or unset,
has the following inputs and outputs.
Note
Translation generation pipelines where return_tensors is True are not supported.
Inputs¶
inputs: A string that contains text to translate.
Example:
Outputs¶
- `translation_text`: A string that contains the generated translation if `num_return_sequences` is 1, or a string representation of a JSON list of result dictionaries, each containing fields that include `translation_text`.
Example:
Code Example¶
Zero-shot classification¶
A pipeline whose task is “zero-shot-classification” has the following inputs and outputs.
Inputs¶
- `sequences`: A string that contains the text to be classified.
- `candidate_labels`: A list of strings that contain the labels to be applied to the text.
Example:
Outputs¶
- `sequence`: The input string.
- `labels`: A list of strings that represent the labels that were applied.
- `scores`: A list of floating-point confidence scores for each label.
Example:
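A minimal sketch of the zero-shot input and output shapes described above; all values are illustrative.

```python
# Input row: a "sequences" string plus a list of candidate labels.
input_rows = [{
    "sequences": "The quarterly report shows revenue grew 12%.",
    "candidate_labels": ["finance", "sports", "weather"],
}]

# Illustrative output columns: labels sorted by descending score.
output_row = {
    "sequence": input_rows[0]["sequences"],
    "labels": ["finance", "weather", "sports"],
    "scores": [0.93, 0.04, 0.03],
}

best_label = output_row["labels"][0]
print(best_label)  # -> finance
```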
Text generation¶
A pipeline whose task is
“text-generation”,
where return_tensors is False or unset,
has the following inputs and outputs.
Note
Text generation pipelines where return_tensors is True are not supported.
Inputs¶
inputs: A string that contains a prompt.
Example:
Outputs¶
- `outputs`: A string that contains a JSON representation of a list of result objects, each of which contains fields that include `generated_text`.
Example:
Code Example¶
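A minimal, locally runnable sketch of the shapes described above; the generated text is illustrative.

```python
import json

input_rows = [{"inputs": "Once upon a time"}]

# Illustrative "outputs" value: a JSON string with one result object per
# returned sequence, each including generated_text.
raw_output = json.dumps([
    {"generated_text": "Once upon a time, there was a small village."},
])

texts = [r["generated_text"] for r in json.loads(raw_output)]
print(texts[0].startswith("Once upon a time"))  # -> True
```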
Text generation (OpenAI-compatible)¶
A pipeline whose task is
“text-generation”,
where return_tensors is False or unset,
has the following inputs and outputs.
By providing the `snowflake.ml.model.openai_signatures.OPENAI_CHAT_SIGNATURE` signature when logging the model, the model becomes compatible with the OpenAI API. This allows users to pass `openai.client.ChatCompletion`-style requests to the model.
Note
Text generation pipelines where return_tensors is True are not supported.
Inputs¶
- `messages`: A list of dictionaries that contain the messages to be sent to the model.
- `max_completion_tokens`: The maximum number of tokens to generate.
- `temperature`: The temperature to use for the generation.
- `stop`: The stop sequence to use for the generation.
- `n`: The number of generations to produce.
- `stream`: Whether to stream the generation.
- `top_p`: The top p value to use for the generation.
- `frequency_penalty`: The frequency penalty to use for the generation.
- `presence_penalty`: The presence penalty to use for the generation.
Example:
Outputs¶
- `outputs`: A string that contains a JSON representation of a list of result objects, each of which contains fields that include `generated_text`.
Example:
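A minimal sketch of an OpenAI ChatCompletion-style request row matching the input columns listed above; all values are illustrative.

```python
import json

# Request row with the input fields listed in this section.
request_row = {
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Name one planet."},
    ],
    "max_completion_tokens": 32,
    "temperature": 0.7,
    "stop": None,
    "n": 1,
    "stream": False,
    "top_p": 1.0,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
}

# Requests are typically serialized for transport; check it round-trips.
assert json.loads(json.dumps(request_row))["messages"][1]["content"] == "Name one planet."
print(request_row["messages"][1]["role"])  # -> user
```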