Executes a deployed Prompt and streams back the results.
A list of the Prompt Deployment’s input variables and their values.
The ID of the Prompt Deployment. Must provide either this or prompt_deployment_name.
The unique name of the Prompt Deployment. Must provide either this or prompt_deployment_id.
Optionally specify a release tag if you want to pin to a specific release of the Prompt Deployment.
Optionally include a unique identifier for tracking purposes. Must be unique within a given Prompt Deployment.
An optional configuration for opting in to additional metadata about this prompt execution. Corresponding values will be returned under the meta key of the API response.
Overrides for the raw API request sent to the model host. Combined with expand_raw, it can be used to access new features from models.
A list of keys whose values you’d like returned directly from the model provider’s JSON response. Useful if you need lower-level information from model providers that Vellum would otherwise omit. Corresponding key/value pairs will be returned under the raw key of the API response.
Arbitrary JSON metadata associated with this request. Can be used to capture additional monitoring data, such as a user ID or session ID, for future analysis.
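To make the parameters above concrete, here is a minimal sketch of a request body that exercises all of them. The field names mirror the parameter descriptions; the shape of each input entry, the expand_meta keys, and the raw_overrides payload are illustrative assumptions rather than confirmed details of the API.

```python
# Illustrative request body; the shape of each input entry, the expand_meta
# keys, and the raw_overrides contents are assumptions for this example.
request_body = {
    "inputs": [
        {"name": "question", "type": "STRING", "value": "What is streaming?"},
    ],
    # Identify the Prompt Deployment by exactly one of these two fields.
    "prompt_deployment_name": "customer-support-prompt",
    # "prompt_deployment_id": "<deployment-id>",
    "release_tag": "production",        # optional: pin to a specific release
    "external_id": "request-0001",      # optional: unique within the deployment
    "expand_meta": {"usage": True},     # opt in to extra metadata under "meta"
    "raw_overrides": {"temperature": 0.2},  # forwarded to the model host
    "expand_raw": ["choices"],          # provider keys surfaced under "raw"
    "metadata": {"user_id": "user-42", "session_id": "sess-7"},
}
```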
The initial data returned, indicating that the model has begun streaming its response.
The data returned for each delta during the prompt execution stream.
The final data event returned, indicating that the stream has ended and containing all final resolved values from the model.
The final data returned, indicating that an error occurred during the stream.
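As an illustration of how a client might consume these four events, here is a sketch of a streaming loop. The state values shown (INITIATED, STREAMING, FULFILLED, REJECTED) map onto the four events described above; the endpoint URL, the newline-delimited JSON framing, and payload field names such as output, outputs, meta, raw, and error are all assumptions made for this sketch.

```python
import json

import httpx

# Placeholder URL and auth header; the real endpoint path, event framing
# (assumed here to be newline-delimited JSON with a "state" field), and
# payload field names are assumptions, not confirmed details of the API.
URL = "https://api.example.com/v1/execute-prompt-stream"
HEADERS = {"X-API-KEY": "<your-api-key>"}

body = {
    "inputs": [{"name": "question", "type": "STRING", "value": "Hello"}],
    "prompt_deployment_name": "customer-support-prompt",
    "expand_meta": {"usage": True},
    "expand_raw": ["choices"],
}

with httpx.stream("POST", URL, headers=HEADERS, json=body, timeout=None) as response:
    for line in response.iter_lines():
        if not line:
            continue  # skip keep-alive blank lines
        event = json.loads(line)
        state = event.get("state")
        if state == "INITIATED":
            # Initial event: the model has begun streaming its response.
            print("stream started")
        elif state == "STREAMING":
            # One delta of output produced during the prompt execution.
            print(event.get("output"), end="", flush=True)
        elif state == "FULFILLED":
            # Final event: the stream has ended; resolved values, plus any
            # "meta" and "raw" payloads requested above, are available here.
            print("\noutputs:", event.get("outputs"))
            print("meta:", event.get("meta"), "raw:", event.get("raw"))
        elif state == "REJECTED":
            # Final event: an error occurred during the stream.
            print("\nerror:", event.get("error"))
```

A production client would additionally check the HTTP status before iterating and handle transport errors around the stream.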