Workflow examples¶
This page provides example workflows for deploying machine learning models for real-time inference using Snowpark Container Services (SPCS). Each example demonstrates the full lifecycle, from registering a model through deployment and inference.
These include:
How to create services, run predictions, and access models through HTTP endpoints.
How to use different model architectures (XGBoost, Hugging Face transformers, PyTorch) and compute options (CPU and GPU).
Deploy an XGBoost model for CPU-powered inference¶
The following code does the following:
Deploys an XGBoost model to SPCS for inference.
Uses the deployed model for inference.
from snowflake.ml.registry import registry
from snowflake.ml.utils.connection_params import SnowflakeLoginOptions
from snowflake.snowpark import Session
from xgboost import XGBRegressor

# Your model training code goes here; its output is a trained xgb_model

# Open a session and the model registry
session = Session.builder.configs(SnowflakeLoginOptions("connection_name")).create()
reg = registry.Registry(session=session, database_name='my_registry_db', schema_name='my_registry_schema')

# Log the model in the Snowflake Model Registry
model_ref = reg.log_model(
    model_name="my_xgb_forecasting_model",
    version_name="v1",
    model=xgb_model,
    conda_dependencies=["scikit-learn", "xgboost"],
    sample_input_data=pandas_test_df,
    comment="XGBoost model for forecasting customer demand",
)

# Deploy the model to SPCS
model_ref.create_service(
    service_name="forecast_model_service",
    service_compute_pool="my_cpu_pool",
    ingress_enabled=True,
)

# See all services running the model
model_ref.list_services()

# Run inference on SPCS
model_ref.run(pandas_test_df, function_name="predict", service_name="forecast_model_service")

# Delete the service
model_ref.delete_service("forecast_model_service")
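The code above assumes `xgb_model` and `pandas_test_df` come from your own training code. As a minimal sketch (column names here are hypothetical), the sample frame passed as `sample_input_data` for signature inference might look like:

```python
import pandas as pd

# Hypothetical stand-in for pandas_test_df: sample_input_data is used to
# infer the model signature, so a small frame with the training columns
# and dtypes is enough.
pandas_test_df = pd.DataFrame({
    "FEATURE1": [1.0, 2.0, 3.0],
    "FEATURE2": [10.0, 20.0, 30.0],
})
print(pandas_test_df.shape)  # (3, 2)
```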
Call over HTTP (external applications)¶
Because the model was deployed with ingress enabled (ingress_enabled=True), you can call its public HTTP endpoint. The following example authenticates to the public Snowflake endpoint using a programmatic access token (PAT) stored in the environment variable :code:`PAT_TOKEN`.
import os
import json
import numpy as np
from pprint import pprint
import requests

def get_headers(pat_token):
    headers = {'Authorization': f'Snowflake Token="{pat_token}"'}
    return headers

headers = get_headers(os.getenv("PAT_TOKEN"))

# Put the endpoint URL with the method name `predict`.
# The endpoint URL can be found with `SHOW ENDPOINTS IN SERVICE <service_name>`.
URL = 'https://<random_str>-<organization>-<account>.snowflakecomputing.app/predict'

# Prepare data to be sent
data = {"data": np.column_stack([range(pandas_test_df.shape[0]), pandas_test_df.values]).tolist()}

# Send over HTTP
def send_request(data: dict):
    output = requests.post(URL, json=data, headers=headers)
    assert output.status_code == 200, f"Failed to get response from the service. Status code: {output.status_code}"
    return output.content

# Test
results = send_request(data=data)
print(json.loads(results))
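The request body above uses Snowflake's external-function row format: each row is prefixed with its index, and the response echoes the indices back. A minimal sketch (the two-row frame, the mocked response, and the `parse_predictions` helper are illustrative, not part of the service API):

```python
import json
import numpy as np
import pandas as pd

# Hypothetical two-row stand-in for pandas_test_df
pandas_test_df = pd.DataFrame({"FEATURE1": [1.0, 2.0], "FEATURE2": [3.0, 4.0]})

# Request rows look like [row_index, feature1, feature2, ...]
rows = np.column_stack([range(pandas_test_df.shape[0]), pandas_test_df.values]).tolist()
print(rows)  # [[0.0, 1.0, 3.0], [1.0, 2.0, 4.0]]

# The response mirrors the format: {"data": [[row_index, prediction], ...]}.
# parse_predictions is an illustrative helper, not a Snowflake API.
def parse_predictions(response_bytes: bytes) -> list:
    payload = json.loads(response_bytes)
    return [row[1] for row in sorted(payload["data"], key=lambda r: r[0])]

mock_response = json.dumps({"data": [[1, 0.75], [0, 0.25]]}).encode()
print(parse_predictions(mock_response))  # [0.25, 0.75]
```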
Deploy a Hugging Face sentence transformer for GPU-powered inference¶
The following code logs and deploys a pretrained Hugging Face sentence transformer, including an HTTP endpoint.
This example requires the sentence-transformers package, a GPU compute pool, and an image repository.
from snowflake.ml.registry import registry
from snowflake.ml.utils.connection_params import SnowflakeLoginOptions
from snowflake.snowpark import Session
from sentence_transformers import SentenceTransformer

session = Session.builder.configs(SnowflakeLoginOptions("connection_name")).create()
reg = registry.Registry(session=session, database_name='my_registry_db', schema_name='my_registry_schema')

# Take an example sentence transformer from HF
embed_model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')

# Have some sample input data
input_data = [
    "This is the first sentence.",
    "Here's another sentence for testing.",
    "The quick brown fox jumps over the lazy dog.",
    "I love coding and programming.",
    "Machine learning is an exciting field.",
    "Python is a popular programming language.",
    "I enjoy working with data.",
    "Deep learning models are powerful.",
    "Natural language processing is fascinating.",
    "I want to improve my NLP skills.",
]

# Log the model with pip dependencies
pip_model = reg.log_model(
    embed_model,
    model_name="sentence_transformer_minilm",
    version_name="pip",
    sample_input_data=input_data,  # Needed for determining the signature of the model
    pip_requirements=["sentence-transformers", "torch", "transformers"],  # To run this model in a warehouse, use conda_dependencies instead
)

# Force Snowflake to skip the warehouse compatibility check
conda_forge_model = reg.log_model(
    embed_model,
    model_name="sentence_transformer_minilm",
    version_name="conda_forge_force",
    sample_input_data=input_data,
    # Requesting any package from conda-forge is sufficient to signal that the model can't run in a warehouse
    conda_dependencies=["sentence-transformers", "conda-forge::pytorch", "transformers"],
)

# Deploy the model to SPCS
pip_model.create_service(
    service_name="my_minilm_service",
    service_compute_pool="my_gpu_pool",  # Using GPU_NV_S, the smallest GPU node that can run the model
    ingress_enabled=True,
    gpu_requests="1",  # The model fits in GPU memory; only needed for GPU pools
    max_instances=4,  # 4 instances were able to run 10M inferences from an XS warehouse
)

# See all services running the model
pip_model.list_services()

# Run on SPCS
pip_model.run(input_data, function_name="encode", service_name="my_minilm_service")

# Delete the service
pip_model.delete_service("my_minilm_service")
You can call the service function from SQL as follows:
SELECT my_minilm_service!encode('This is a test sentence.');
Similarly, you can call the HTTP endpoint as follows:
import json
from pprint import pprint
import requests

# Put the endpoint URL with the method name `encode`
URL = 'https://<random_str>-<account>.snowflakecomputing.app/encode'

# Prepare data to be sent
data = {
    'data': []
}
for idx, x in enumerate(input_data):
    data['data'].append([idx, x])

# Send over HTTP (reuses the `headers` built with get_headers in the earlier example)
def send_request(data: dict):
    output = requests.post(URL, json=data, headers=headers)
    assert output.status_code == 200, f"Failed to get response from the service. Status code: {output.status_code}"
    return output.content

# Test
results = send_request(data=data)
pprint(json.loads(results))
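The encode response follows the same row format, but here each result is an embedding vector. As a sketch (the mocked two-dimensional payload is illustrative; real all-MiniLM-L6-v2 embeddings are 384-dimensional), you can reassemble the rows into a NumPy matrix:

```python
import json
import numpy as np

# Mocked response body in the row format {"data": [[row_index, embedding], ...]}
mock = json.dumps({"data": [[0, [0.1, 0.2]], [1, [0.3, 0.4]]]}).encode()

payload = json.loads(mock)
# Sort by row index, then stack the embedding vectors into one matrix
embeddings = np.array([row[1] for row in sorted(payload["data"], key=lambda r: r[0])])
print(embeddings.shape)  # (2, 2)
```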
Deploy a PyTorch model for GPU-powered inference¶
See this `quickstart <https://quickstarts.snowflake.com/guide/snowpark-container-services-model-serving-guide/>`_ for an example of training a PyTorch deep learning recommendation model (DLRM) and deploying it to SPCS for GPU inference.
Deploy Snowpark ML modeling models¶
Models developed using Snowpark ML modeling classes cannot be deployed to environments with GPUs. As a workaround, you can extract the native model and deploy it. For example:
# Train a model using Snowpark ML
from snowflake.ml.modeling.xgboost import XGBRegressor

regressor = XGBRegressor(...)
regressor.fit(training_df)

# Extract the native model
xgb_model = regressor.to_xgboost()

# Test the model with a pandas DataFrame
pandas_test_df = test_df.select(['FEATURE1', 'FEATURE2', ...]).to_pandas()
xgb_model.predict(pandas_test_df)

# Log the model in the Snowflake Model Registry
mv = reg.log_model(
    xgb_model,
    model_name="my_native_xgb_model",
    sample_input_data=pandas_test_df,
    comment="A native XGBoost model trained with the Snowpark ML modeling API",
)

# Now we should be able to deploy to a GPU compute pool on SPCS
mv.create_service(
    service_name="my_service_gpu",
    service_compute_pool="my_gpu_pool",
    image_repo="my_repo",
    max_instances=1,
    gpu_requests="1",
)