PromptVertexAI 2025.10.9.21

Bundle

com.snowflake.openflow.runtime | runtime-vertexai-nar

Description

Sends a prompt to VertexAI, writing the response either as a FlowFile attribute or to the contents of the incoming FlowFile. The prompt may consist of pure text interaction or may include multimedia.

Tags

ai, chat, cloud, gcp, google, image, openflow, pdf, prompt, text, video

Input Requirement

Supports Sensitive Dynamic Properties

false

Properties

GCP Credentials Service: The Controller Service used to obtain Google Cloud Platform credentials.
GCP Location: The location to configure the Vertex client with.
GCP Project ID: The project ID to configure the Vertex client with.
Max File Size: The maximum size of a FlowFile that can be sent to Vertex as an image. If the FlowFile is larger than this, it is routed to 'failure'.
Max Tokens: The maximum number of tokens to generate.
Media MIME Type: The MIME type of the media in the FlowFile content. Supported media types are listed at https://firebase.google.com/docs/vertex-ai/input-file-requirements
Model Name: The name of the Vertex model.
Output Strategy: Determines whether the response is written to a FlowFile attribute or to the FlowFile content.
Prompt Type: The type of prompt to send to Vertex. Use Text to send a simple text prompt; use Media to send multimedia content first, followed by a text prompt.
Response Format: The format of the response from Vertex.
Results Attribute: The name of the attribute to write the response to.
Stop Sequences: A comma-delimited list of strings that act as stop sequences. The model halts generation after encountering any of the stop sequences.
System Message: The system message to send to Vertex. FlowFile attributes may be referenced via Expression Language, and the contents of the FlowFile may be referenced via the flowfile_content variable, e.g., ${flowfile_content}.
Temperature: The temperature to use for generating the response. Ranges from 0.0 to 1.0; defaults to 1.0. Use a temperature closer to 0.0 for analytical or multiple-choice tasks, and closer to 1.0 for creative and generative tasks.
Top K: The top K value to use for generating the response. Only the K most probable options are sampled for each subsequent token. Recommended for advanced use cases only; you usually only need to adjust temperature.
Top P: The top P value to use for generating the response. Top P enables nucleus sampling: the cumulative probability distribution over the options for each subsequent token is computed in decreasing probability order and cut off once it reaches the probability specified by top_p. Recommended for advanced use cases only; you usually only need to adjust temperature.
User Message: The user message to send to Vertex. FlowFile attributes may be referenced via Expression Language, and the contents of the FlowFile may be referenced via the flowfile_content variable, e.g., ${flowfile_content}. The user message is added first unless an image is present.
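The Temperature, Top K, and Top P properties above all shape how the model samples each token. A minimal sketch of how the three interact (a hypothetical helper for illustration, not the processor's actual implementation):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=None, top_p=None):
    """Pick the next token id from raw logits, showing how
    temperature, top-K, and top-P (nucleus) sampling combine."""
    # Temperature scales the logits before softmax: values near 0.0
    # sharpen the distribution (analytical), near 1.0 keep it broad (creative).
    scaled = [l / max(temperature, 1e-6) for l in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [(token, e / total) for token, e in enumerate(exps)]
    # Top K: keep only the K most probable options.
    probs.sort(key=lambda pair: pair[1], reverse=True)
    if top_k is not None:
        probs = probs[:top_k]
    # Top P (nucleus sampling): walk options in decreasing probability order
    # and cut off once the cumulative mass reaches top_p.
    if top_p is not None:
        kept, cumulative = [], 0.0
        for token, p in probs:
            kept.append((token, p))
            cumulative += p
            if cumulative >= top_p:
                break
        probs = kept
    # Renormalize and sample from the surviving options.
    remaining = sum(p for _, p in probs)
    r = random.random() * remaining
    for token, p in probs:
        r -= p
        if r <= 0:
            return token
    return probs[-1][0]
```

With a very low temperature, a top_k of 1, or a tiny top_p, sampling collapses to always picking the most probable token, which is why the descriptions above recommend adjusting temperature alone for most use cases.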

Relationships

failure: If a valid response cannot be obtained from Vertex, the original FlowFile is routed to this relationship.
success: The response from Vertex is routed to this relationship.

Writes attributes

vertex.usage.inputTokens: The number of input tokens read in the request.
vertex.usage.outputTokens: The number of output tokens generated in the response.
vertex.chat.completion.id: A unique ID assigned to the conversation.
mime.type: The MIME type of the response.
filename: An updated filename for the response.
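Downstream processors can read these attributes with the same Expression Language used by the System and User Message properties. For example, a hypothetical expression summing total token usage for a request (the attribute names are as listed above; the expression itself is illustrative):

```
${vertex.usage.inputTokens:plus(${vertex.usage.outputTokens})}
```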