Execute Prompt Stream

POST
https://predict.vellum.ai/v1/execute-prompt-stream
Executes a deployed Prompt and streams back the results.

Request

This endpoint expects an object.
inputs
list of unions
The list of inputs defined in the Prompt's deployment with their corresponding values.
prompt_deployment_id
optional string
The ID of the Prompt Deployment. Must provide either this or prompt_deployment_name.
prompt_deployment_name
optional string
The name of the Prompt Deployment. Must provide either this or prompt_deployment_id.
release_tag
optional string
Optionally specify a release tag if you want to pin to a specific release of the Prompt Deployment.
external_id
optional string
Optionally include a unique identifier for tracking purposes. Must be unique for a given prompt deployment.
expand_meta
optional object
An optionally specified configuration used to opt in to including additional metadata about this prompt execution in the API response.
raw_overrides
optional object
Overrides for the raw API request sent to the model host. Combined with expand_raw, it can be used to access new features from models.
expand_raw
optional list of strings
Returns the raw API response data sent from the model host. Combined with raw_overrides, it can be used to access new features from models.
metadata
optional map from strings to any
Arbitrary JSON metadata associated with this request.
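The request body above can be assembled programmatically. A minimal sketch in Python (the helper name `build_payload` and its validation logic are illustrative, not part of the API), enforcing the documented rule that exactly one of `prompt_deployment_id` or `prompt_deployment_name` must be provided:

```python
def build_payload(
    inputs,
    prompt_deployment_id=None,
    prompt_deployment_name=None,
    release_tag=None,
    external_id=None,
    metadata=None,
):
    """Assemble the JSON body for POST /v1/execute-prompt-stream.

    Raises ValueError unless exactly one of prompt_deployment_id or
    prompt_deployment_name is supplied, per the parameter docs above.
    """
    if (prompt_deployment_id is None) == (prompt_deployment_name is None):
        raise ValueError(
            "Provide exactly one of prompt_deployment_id or prompt_deployment_name"
        )
    payload = {"inputs": inputs}
    optional = {
        "prompt_deployment_id": prompt_deployment_id,
        "prompt_deployment_name": prompt_deployment_name,
        "release_tag": release_tag,
        "external_id": external_id,
        "metadata": metadata,
    }
    # Only include optional fields that were actually supplied.
    payload.update({k: v for k, v in optional.items() if v is not None})
    return payload


body = build_payload(
    inputs=[{"type": "STRING", "name": "string", "value": "string"}],
    prompt_deployment_name="my-deployment",
)
```

The resulting `body` can be serialized with `json.dumps` and sent as the POST payload shown in the curl example below.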
Response

This endpoint returns any.
POST
/v1/execute-prompt-stream
curl -X POST https://predict.vellum.ai/v1/execute-prompt-stream \
  -H "X_API_KEY: <apiKey>" \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": [
      {
        "type": "STRING",
        "name": "string",
        "value": "string"
      }
    ]
  }'
Response
{
  "state": "INITIATED",
  "meta": {
    "model_name": "string",
    "latency": 0,
    "deployment_release_tag": "string",
    "prompt_version_id": "string"
  },
  "execution_id": "string"
}