Hugging Face pipeline

The model registry supports Hugging Face model classes that derive from transformers.Pipeline. For example:

lm_hf_model = transformers.pipeline(
    task="text-generation",
    model="bigscience/bloom-560m",
    token="...",  # Put your HuggingFace token here.
    return_full_text=False,
    max_new_tokens=100,
)

lmv = reg.log_model(lm_hf_model, model_name='bloom', version_name='v560m')

The following additional options can be used in the options dictionary when you call log_model:

  • target_methods: A list of the names of the methods available on the model object. Hugging Face models have the following target methods by default, assuming the method exists: __call__.

  • cuda_version: The version of the CUDA runtime to be used when deploying to a platform with a GPU; defaults to 11.8. If manually set to None, the model cannot be deployed to a platform having a GPU.
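For example, these options can be combined into a dictionary and passed to log_model. The following is a sketch only; the call is commented out and assumes the reg and lm_hf_model objects from the example above:

```python
# A sketch of passing these options when logging; `reg` and `lm_hf_model`
# are the hypothetical registry and pipeline objects from the example above.
options = {
    "target_methods": ["__call__"],  # default target method for pipelines
    "cuda_version": None,            # disallow GPU deployment for this model
}

# lmv = reg.log_model(lm_hf_model, model_name="bloom",
#                     version_name="v560m", options=options)
```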

Important

A model based on huggingface_pipeline.HuggingFacePipelineModel contains only configuration data; the model weights are downloaded from the Hugging Face Hub each time you use the model.

Currently, the model registry supports only self-contained models that are ready to run without external network access. The best practice is instead to use transformers.Pipeline as shown in the example above. This downloads the model weights to your local system, and log_model then uploads a self-contained model object that does not need internet access.

The registry infers the signatures argument only if the pipeline's task is one of the following:

  • conversational

  • fill-mask

  • question-answering

  • summarization

  • table-question-answering

  • text2text-generation

  • text-classification (also called sentiment-analysis)

  • text-generation

  • token-classification (also called ner)

  • translation

  • translation_xx_to_yy

  • zero-shot-classification

The sample_input_data argument to log_model is ignored for Hugging Face models. If you log a Hugging Face model whose task is not in the list above, specify the signatures argument so that the registry knows the signatures of the target methods.

To see the inferred signature, use the show_functions method. The following output, for example, is the result of lmv.show_functions(), where lmv is the model logged above:

[{'name': '__CALL__',
  'target_method': '__call__',
  'signature': ModelSignature(
                      inputs=[
                          FeatureSpec(dtype=DataType.STRING, name='inputs')
                      ],
                      outputs=[
                          FeatureSpec(dtype=DataType.STRING, name='outputs')
                      ]
                  )}]

Use the following code to call the lmv model:

import pandas as pd
remote_prediction = lmv.run(pd.DataFrame(["Hello, how are you?"], columns=["inputs"]))

Usage notes

  • Many Hugging Face models are large and do not fit in a standard warehouse. Use a Snowpark-optimized warehouse or choose a smaller version of the model. For example, instead of using the Llama-2-70b-chat-hf model, try Llama-2-7b-chat-hf.

  • Snowflake warehouses do not have GPUs. Use only CPU-optimized Hugging Face models.

  • Some Hugging Face transformers return an array of dictionaries per input row. The registry converts the array of dictionaries to a string containing a JSON representation of the array. For example, multi-output Question Answering output looks similar to this:

    '[{"score": 0.61094731092453, "start": 139, "end": 178, "answer": "learn more about the world of athletics"},
    {"score": 0.17750297486782074, "start": 139, "end": 180, "answer": "learn more about the world of athletics.\""}]'
    
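Since such results arrive as a JSON string rather than a Python list, json.loads recovers the underlying dictionaries. A minimal sketch using the sample output above:

```python
import json

# Multi-output question-answering result, as a JSON string.
raw = ('[{"score": 0.61094731092453, "start": 139, "end": 178,'
       ' "answer": "learn more about the world of athletics"},'
       ' {"score": 0.17750297486782074, "start": 139, "end": 180,'
       ' "answer": "learn more about the world of athletics.\\""}]')

results = json.loads(raw)                      # list of dicts again
best = max(results, key=lambda r: r["score"])  # highest-confidence answer
print(best["answer"])                          # -> learn more about the world of athletics
```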

Example

# Prepare model
import transformers
import pandas as pd

finbert_model = transformers.pipeline(
    task="text-classification",
    model="ProsusAI/finbert",
    top_k=2,
)

# Log the model
mv = registry.log_model(
    finbert_model,
    model_name="finbert",
    version_name="v1",
)

# Use the model
mv.run(pd.DataFrame(
        [
            ["I have a problem with my Snowflake that needs to be resolved asap!!", ""],
            ["I would like to have udon for today's dinner.", ""],
        ]
    )
)

Result:

0  [{"label": "negative", "score": 0.8106237053871155}, {"label": "neutral", "score": 0.16587384045124054}]
1  [{"label": "neutral", "score": 0.9263970851898193}, {"label": "positive", "score": 0.05286872014403343}]

Inferred signatures for Hugging Face pipelines

The Snowflake Model Registry automatically infers the signatures of Hugging Face pipelines that contain a single task from the following list:

  • conversational

  • fill-mask

  • question-answering

  • summarization

  • table-question-answering

  • text2text-generation

  • text-classification (alias sentiment-analysis)

  • text-generation

  • token-classification (alias ner)

  • translation

  • translation_xx_to_yy

  • zero-shot-classification

This section describes the signatures of these types of Hugging Face pipelines, including a description and example of the required inputs and expected outputs. All inputs and outputs are Snowpark DataFrames.

Conversational pipeline

A pipeline whose task is conversational has the following inputs and outputs.

Inputs

  • user_inputs: A list of strings that represent the user's previous and current inputs. The last one in the list is the current input.

  • generated_responses: A list of strings that represent the model's previous responses.

Example:

---------------------------------------------------------------------------
|"user_inputs"                                    |"generated_responses"  |
---------------------------------------------------------------------------
|[                                                |[                      |
|  "Do you speak French?",                        |  "Yes I do."          |
|  "Do you know how to say Snowflake in French?"  |]                      |
|]                                                |                       |
---------------------------------------------------------------------------

Outputs

  • generated_responses: A list of strings that represent the model's previous and current responses. The last one in the list is the current response.

Example:

-------------------------
|"generated_responses"  |
-------------------------
|[                      |
|  "Yes I do.",         |
|  "I speak French."    |
|]                      |
-------------------------

Fill-mask pipeline

A pipeline whose task is "fill-mask" has the following inputs and outputs.

Inputs

  • inputs: A string where there is a mask to fill.

Example:

--------------------------------------------------
|"inputs"                                        |
--------------------------------------------------
|LynYuu is the [MASK] of the Grand Duchy of Yu.  |
--------------------------------------------------

Outputs

  • outputs: A string that contains a JSON representation of a list of objects, each of which may contain keys such as score, token, token_str, or sequence. For details, see FillMaskPipeline.

Example:

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|"outputs"                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|[{"score": 0.9066258072853088, "token": 3007, "token_str": "capital", "sequence": "lynyuu is the capital of the grand duchy of yu."}, {"score": 0.08162177354097366, "token": 2835, "token_str": "seat", "sequence": "lynyuu is the seat of the grand duchy of yu."}, {"score": 0.0012052370002493262, "token": 4075, "token_str": "headquarters", "sequence": "lynyuu is the headquarters of the grand duchy of yu."}, {"score": 0.0006560495239682496, "token": 2171, "token_str": "name", "sequence": "lynyuu is the name of the grand duchy of yu."}, {"score": 0.0005427763098850846, "token": 3200, "token_str"...  |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Token classification

A pipeline whose task is "ner" or "token-classification" has the following inputs and outputs.

Inputs

  • inputs: A string that contains the tokens to be classified.

Example:

------------------------------------------------
|"inputs"                                      |
------------------------------------------------
|My name is Izumi and I live in Tokyo, Japan.  |
------------------------------------------------

Outputs

  • outputs: A string that contains a JSON representation of a list of result objects, each of which may contain keys such as entity, score, index, word, name, start, or end. For details, see TokenClassificationPipeline.

Example:

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|"outputs"                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|[{"entity": "PRON", "score": 0.9994392991065979, "index": 1, "word": "my", "start": 0, "end": 2}, {"entity": "NOUN", "score": 0.9968984127044678, "index": 2, "word": "name", "start": 3, "end": 7}, {"entity": "AUX", "score": 0.9937735199928284, "index": 3, "word": "is", "start": 8, "end": 10}, {"entity": "PROPN", "score": 0.9928083419799805, "index": 4, "word": "i", "start": 11, "end": 12}, {"entity": "PROPN", "score": 0.997334361076355, "index": 5, "word": "##zumi", "start": 12, "end": 16}, {"entity": "CCONJ", "score": 0.999173104763031, "index": 6, "word": "and", "start": 17, "end": 20}, {...  |

Question answering (single output)

A pipeline whose task is "question-answering", where top_k is either unset or set to 1, has the following inputs and outputs.

Inputs

  • question: A string that contains the question to answer.

  • context: A string that may contain the answer.

Example:

-----------------------------------------------------------------------------------
|"question"                  |"context"                                           |
-----------------------------------------------------------------------------------
|What did Doris want to do?  |Doris is a cheerful mermaid from the ocean dept...  |
-----------------------------------------------------------------------------------

Outputs

  • score: Floating-point confidence score from 0.0 to 1.0.

  • start: Integer index of the first token of the answer in the context.

  • end: Integer index of the last token of the answer in the context.

  • answer: A string that contains the found answer.

Example:

--------------------------------------------------------------------------------
|"score"           |"start"  |"end"  |"answer"                                 |
--------------------------------------------------------------------------------
|0.61094731092453  |139      |178    |learn more about the world of athletics  |
--------------------------------------------------------------------------------
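Building the input for this signature is a matter of constructing a two-column pandas DataFrame. A sketch with hypothetical question and context values; the run call is commented out and assumes a logged question-answering model:

```python
import pandas as pd

# Hypothetical question/context pair; column names must match the signature.
input_df = pd.DataFrame(
    [["What did Doris want to do?",
      "Doris is a cheerful mermaid from the ocean depths."]],
    columns=["question", "context"],
)

# result_df = lmv.run(input_df)  # lmv: a logged question-answering model
```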

Question answering (multiple outputs)

A pipeline whose task is "question-answering", where top_k is set and is larger than 1, has the following inputs and outputs.

Inputs

  • question: A string that contains the question to answer.

  • context: A string that may contain the answer.

Example:

-----------------------------------------------------------------------------------
|"question"                  |"context"                                           |
-----------------------------------------------------------------------------------
|What did Doris want to do?  |Doris is a cheerful mermaid from the ocean dept...  |
-----------------------------------------------------------------------------------

Outputs

  • outputs: A string that contains a JSON representation of a list of result objects, each of which may contain keys such as score, start, end, or answer.

Example:

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|"outputs"                                                                                                                                                                                                                                                                                                                                        |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|[{"score": 0.61094731092453, "start": 139, "end": 178, "answer": "learn more about the world of athletics"}, {"score": 0.17750297486782074, "start": 139, "end": 180, "answer": "learn more about the world of athletics.\""}, {"score": 0.06438097357749939, "start": 138, "end": 178, "answer": "\"learn more about the world of athletics"}]  |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Summarization

A pipeline whose task is "summarization", where return_tensors is False or unset, has the following inputs and outputs.

Inputs

  • documents: A string that contains text to summarize.

Example:

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|"documents"                                                                                                                                                                                               |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|Neuro-sama is a chatbot styled after a female VTuber that hosts live streams on the Twitch channel "vedal987". Her speech and personality are generated by an artificial intelligence (AI) system  wh...  |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Outputs

  • summary_text: A string that contains the generated summary, or, if num_return_sequences is greater than 1, a string that contains a JSON representation of a list of results, each of which is a dictionary that contains fields, including summary_text.

Example:

---------------------------------------------------------------------------------
|"summary_text"                                                                 |
---------------------------------------------------------------------------------
| Neuro-sama is a chatbot styled after a female VTuber that hosts live streams  |
---------------------------------------------------------------------------------

Table question answering

A pipeline whose task is "table-question-answering" has the following inputs and outputs.

Inputs

  • query: A string that contains the question to be answered.

  • table: A string that contains a JSON-serialized dictionary in the form {column -> [values]} representing the table that may contain an answer.

Example:

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|"query"                                  |"table"                                                                                                                                                                                                                                                   |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|Which channel has the most subscribers?  |{"Channel": ["A.I.Channel", "Kaguya Luna", "Mirai Akari", "Siro"], "Subscribers": ["3,020,000", "872,000", "694,000", "660,000"], "Videos": ["1,200", "113", "639", "1,300"], "Created At": ["Jun 30 2016", "Dec 4 2017", "Feb 28 2014", "Jun 23 2017"]}  |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
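Because the table cell must hold a JSON-serialized dictionary, json.dumps is a convenient way to build it. A sketch using a subset of the data above:

```python
import json
import pandas as pd

# Column-oriented table in {column -> [values]} form.
table = {
    "Channel": ["A.I.Channel", "Kaguya Luna", "Mirai Akari", "Siro"],
    "Subscribers": ["3,020,000", "872,000", "694,000", "660,000"],
}

input_df = pd.DataFrame(
    [["Which channel has the most subscribers?", json.dumps(table)]],
    columns=["query", "table"],
)

# The serialized cell round-trips back to the original dictionary.
assert json.loads(input_df.loc[0, "table"]) == table
```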

Outputs

  • answer: A string that contains a possible answer.

  • coordinates: A list of integers that represent the coordinates of the cells where the answer was located.

  • cells: A list of strings that contain the content of the cells where the answer was located.

  • aggregator: A string that contains the name of the aggregator used.

Example:

----------------------------------------------------------------
|"answer"     |"coordinates"  |"cells"          |"aggregator"  |
----------------------------------------------------------------
|A.I.Channel  |[              |[                |NONE          |
|             |  [            |  "A.I.Channel"  |              |
|             |    0,         |]                |              |
|             |    0          |                 |              |
|             |  ]            |                 |              |
|             |]              |                 |              |
----------------------------------------------------------------

Text classification (single output)

A pipeline whose task is "text-classification" or "sentiment-analysis", where top_k is not set or is None, has the following inputs and outputs.

Inputs

  • text: A string to classify.

  • text_pair: A string to classify along with text, and which is used with models that compute text similarity. Leave empty if the model does not use it.

Example:

----------------------------------
|"text"       |"text_pair"       |
----------------------------------
|I like you.  |I love you, too.  |
----------------------------------

Outputs

  • label: A string that represents the classification label of the text.

  • score: A floating-point confidence score from 0.0 to 1.0.

Example:

--------------------------------
|"label"  |"score"             |
--------------------------------
|LABEL_0  |0.9760091304779053  |
--------------------------------

Text classification (multiple outputs)

A pipeline whose task is "text-classification" or "sentiment-analysis", where top_k is set to a number, has the following inputs and outputs.

Note

A text classification task is considered multiple-output if top_k is set to any number, even if that number is 1. To get a single output, use a top_k value of None.

Inputs

  • text: A string to classify.

  • text_pair: A string to classify along with text, which is used with models that compute text similarity. Leave empty if the model does not use it.

Example:

--------------------------------------------------------------------
|"text"                                              |"text_pair"  |
--------------------------------------------------------------------
|I am wondering if I should have udon or rice fo...  |             |
--------------------------------------------------------------------

Outputs

  • outputs: A string that contains a JSON representation of a list of results, each of which contains fields that include label and score.

Example:

--------------------------------------------------------
|"outputs"                                             |
--------------------------------------------------------
|[{"label": "NEGATIVE", "score": 0.9987024068832397}]  |
--------------------------------------------------------
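To post-process a whole column of such JSON strings, json.loads can be mapped over the result frame. A sketch with a hypothetical result frame and made-up scores:

```python
import json
import pandas as pd

# Hypothetical result frame: one JSON string of label/score dicts per row.
result_df = pd.DataFrame({
    "outputs": [
        '[{"label": "negative", "score": 0.81}, {"label": "neutral", "score": 0.17}]',
        '[{"label": "neutral", "score": 0.93}, {"label": "positive", "score": 0.05}]',
    ]
})

# Keep only the highest-scoring label from each row's list.
result_df["top_label"] = result_df["outputs"].map(
    lambda s: max(json.loads(s), key=lambda r: r["score"])["label"]
)
print(result_df["top_label"].tolist())  # -> ['negative', 'neutral']
```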

Text generation

A pipeline whose task is "text-generation", where return_tensors is False or unset, has the following inputs and outputs.

Note

Text generation pipelines where return_tensors is True are not supported.

Inputs

  • inputs: A string that contains a prompt.

Example:

--------------------------------------------------------------------------------
|"inputs"                                                                      |
--------------------------------------------------------------------------------
|A descendant of the Lost City of Atlantis, who swam to Earth while saying, "  |
--------------------------------------------------------------------------------

Outputs

  • outputs: A string that contains a JSON representation of a list of result objects, each of which contains fields that include generated_text.

Example:

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|"outputs"                                                                                                                                                                                                 |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|[{"generated_text": "A descendant of the Lost City of Atlantis, who swam to Earth while saying, \"For my life, I don't know if I'm gonna land upon Earth.\"\n\nIn \"The Misfits\", in a flashback, wh...  |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Text-to-text generation

A pipeline whose task is "text2text-generation", where return_tensors is False or unset, has the following inputs and outputs.

Note

Text-to-text generation pipelines where return_tensors is True are not supported.

Inputs

  • inputs: A string that contains a prompt.

Example:

--------------------------------------------------------------------------------
|"inputs"                                                                      |
--------------------------------------------------------------------------------
|A descendant of the Lost City of Atlantis, who swam to Earth while saying, "  |
--------------------------------------------------------------------------------

Outputs

  • generated_text: A string that contains the generated text if num_return_sequences is 1, or, if num_return_sequences is greater than 1, a string that contains a JSON representation of a list of result dictionaries, each of which contains fields that include generated_text.

Example:

----------------------------------------------------------------
|"generated_text"                                              |
----------------------------------------------------------------
|, said that he was a descendant of the Lost City of Atlantis  |
----------------------------------------------------------------

Translation generation

A pipeline whose task is "translation", where return_tensors is False or unset, has the following inputs and outputs.

Note

Translation generation pipelines where return_tensors is True are not supported.

Inputs

  • inputs: A string that contains text to translate.

Example:

------------------------------------------------------------------------------------------------------
|"inputs"                                                                                            |
------------------------------------------------------------------------------------------------------
|Snowflake's Data Cloud is powered by an advanced data platform provided as a self-managed service.  |
------------------------------------------------------------------------------------------------------

Outputs

  • translation_text: A string that contains the generated translation if num_return_sequences is 1, or a string that contains a JSON representation of a list of result dictionaries, each of which contains fields that include translation_text.

Example:

---------------------------------------------------------------------------------------------------------------------------------
|"translation_text"                                                                                                             |
---------------------------------------------------------------------------------------------------------------------------------
|Le Cloud de données de Snowflake est alimenté par une plate-forme de données avancée fournie sous forme de service autogérés.        |
---------------------------------------------------------------------------------------------------------------------------------

Zero-shot classification

A pipeline whose task is "zero-shot-classification" has the following inputs and outputs.

Inputs

  • sequences: A string that contains the text to be classified.

  • candidate_labels: A list of strings that contain the labels to be applied to the text.

Example:

-----------------------------------------------------------------------------------------
|"sequences"                                                       |"candidate_labels"  |
-----------------------------------------------------------------------------------------
|I have a problem with Snowflake that needs to be resolved asap!!  |[                   |
|                                                                  |  "urgent",         |
|                                                                  |  "not urgent"      |
|                                                                  |]                   |
|I have a problem with Snowflake that needs to be resolved asap!!  |[                   |
|                                                                  |  "English",        |
|                                                                  |  "Japanese"        |
|                                                                  |]                   |
-----------------------------------------------------------------------------------------
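In pandas, the candidate_labels column holds a Python list per row, so each row can carry its own label set. A sketch mirroring the example above; the run call is commented out and assumes a logged zero-shot-classification model:

```python
import pandas as pd

# Each row pairs a sequence with its own list of candidate labels.
input_df = pd.DataFrame(
    [
        ["I have a problem with Snowflake that needs to be resolved asap!!",
         ["urgent", "not urgent"]],
        ["I have a problem with Snowflake that needs to be resolved asap!!",
         ["English", "Japanese"]],
    ],
    columns=["sequences", "candidate_labels"],
)

# result_df = mv.run(input_df)  # mv: a logged zero-shot-classification model
```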

Outputs

  • sequence: The input string.

  • labels: A list of strings that represent the labels that were applied.

  • scores: A list of floating-point confidence scores for each label.

Example:

--------------------------------------------------------------------------------------------------------------
|"sequence"                                                        |"labels"        |"scores"                |
--------------------------------------------------------------------------------------------------------------
|I have a problem with Snowflake that needs to be resolved asap!!  |[               |[                       |
|                                                                  |  "urgent",     |  0.9952737092971802,   |
|                                                                  |  "not urgent"  |  0.004726255778223276  |
|                                                                  |]               |]                       |
|I have a problem with Snowflake that needs to be resolved asap!!  |[               |[                       |
|                                                                  |  "Japanese",   |  0.5790848135948181,   |
|                                                                  |  "English"     |  0.42091524600982666   |
|                                                                  |]               |]                       |
--------------------------------------------------------------------------------------------------------------