Execute Prompt

POST
https://predict.vellum.ai/v1/execute-prompt
Executes a deployed Prompt and returns the result.

Request

This endpoint expects an object.
inputs
list of unions
The list of inputs defined in the Prompt's deployment with their corresponding values.
prompt_deployment_id
optional string
The ID of the Prompt Deployment. Must provide either this or prompt_deployment_name.
prompt_deployment_name
optional string
The name of the Prompt Deployment. Must provide either this or prompt_deployment_id.
release_tag
optional string
Optionally specify a release tag if you want to pin to a specific release of the Prompt Deployment.
external_id
optional string
Optionally include a unique identifier for tracking purposes. Must be unique for a given prompt deployment.
expand_meta
optional object
Optionally specify which additional execution metadata fields to include in the response's meta object.
raw_overrides
optional object
Optionally override attributes of the raw API request sent to the model host.
expand_raw
optional list of strings

Optionally specify that the raw API response data sent from the model host be included in the response. Combined with raw_overrides, it can be used to access new features from models.

metadata
optional map from strings to any
Arbitrary JSON metadata to associate with this execution.
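The request fields above can be assembled into a JSON body before sending. The sketch below builds such a payload in Python; the deployment name "my-prompt" and the input variable "query" are hypothetical placeholders, not values from this reference.

```python
# Sketch: constructing the request body for POST /v1/execute-prompt.
# "my-prompt" and "query" are hypothetical placeholders.
import json

payload = {
    # Provide either prompt_deployment_name or prompt_deployment_id.
    "prompt_deployment_name": "my-prompt",
    "inputs": [
        {"type": "STRING", "name": "query", "value": "Hello, world"},
    ],
}

headers = {
    "X_API_KEY": "<apiKey>",  # your Vellum API key
    "Content-Type": "application/json",
}

# Serialize the body; this string would be sent as the POST data.
body = json.dumps(payload)
print(body)
```

The serialized body matches the `-d` argument in the curl example below.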

Response

This endpoint returns a union.
FULFILLED
OR
REJECTED
POST
/v1/execute-prompt
curl -X POST https://predict.vellum.ai/v1/execute-prompt \
  -H "X_API_KEY: <apiKey>" \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": [
      {
        "type": "STRING",
        "name": "string",
        "value": "string"
      }
    ]
  }'
Response
{
  "state": "FULFILLED",
  "execution_id": "string",
  "outputs": [
    {
      "type": "STRING",
      "value": "string"
    }
  ],
  "meta": {
    "model_name": "string",
    "latency": 0,
    "deployment_release_tag": "string",
    "prompt_version_id": "string",
    "finish_reason": "LENGTH"
  },
  "raw": {
    "string": {}
  }
}
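Because the response is a union, callers should branch on the state field before reading outputs. A minimal sketch, assuming a response already parsed into a dict shaped like the example above (the sample values are illustrative, not real identifiers):

```python
# Sketch: dispatching on the FULFILLED / REJECTED union returned
# by /v1/execute-prompt. Sample values are illustrative only.

def first_output_value(response: dict):
    """Return the first output's value on success, or None if rejected."""
    if response.get("state") == "FULFILLED":
        return response["outputs"][0]["value"]
    # REJECTED (or unknown) state: no outputs to read.
    return None

sample = {
    "state": "FULFILLED",
    "execution_id": "exec-123",  # hypothetical ID
    "outputs": [{"type": "STRING", "value": "hello"}],
}

print(first_output_value(sample))  # hello
```

Checking state first avoids a KeyError when a rejected response carries no outputs list.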