Execute Prompt

POST https://predict.vellum.ai/v1/execute-prompt
Executes a deployed Prompt and returns the result.

Request

This endpoint expects an object.
inputs
list of unions (Required)
The list of inputs defined in the Prompt's deployment with their corresponding values.
prompt_deployment_id
string (Optional)

The ID of the Prompt Deployment. Must provide either this or prompt_deployment_name.

prompt_deployment_name
string (Optional)

The name of the Prompt Deployment. Must provide either this or prompt_deployment_id.

release_tag
string (Optional)
Optionally specify a release tag if you want to pin to a specific release of the Prompt Deployment.
external_id
string (Optional)
Optionally include a unique identifier for tracking purposes. Must be unique for a given prompt deployment.
expand_meta
object (Optional)

Optionally opt in to additional metadata about this prompt execution; the corresponding values are returned under the response's meta key.

raw_overrides
object (Optional)

Optionally override the raw parameters sent to the model host. Combined with expand_raw, it can be used to access new features from models.
expand_raw
list of strings (Optional)

Returns the raw API response data sent from the model host. Combined with raw_overrides, it can be used to access new features from models.

metadata
map from strings to any (Optional)

Arbitrary JSON metadata to associate with this request.
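Putting the request fields above together, here is a minimal sketch in Python of how a request body could be assembled before sending it to the endpoint. The deployment name "my-deployment" and input name "user_query" are hypothetical placeholders, not real values:

```python
import json

def build_execute_prompt_request(deployment_name, inputs,
                                 release_tag=None, external_id=None):
    """Assemble an Execute Prompt request body.

    `inputs` is a mapping of input name -> string value; each entry becomes
    a STRING input object. Identify the deployment by name here (an ID via
    prompt_deployment_id would work equally well, per the docs).
    """
    body = {
        "prompt_deployment_name": deployment_name,
        "inputs": [
            {"type": "STRING", "name": name, "value": value}
            for name, value in inputs.items()
        ],
    }
    if release_tag is not None:
        # Pin to a specific release of the Prompt Deployment.
        body["release_tag"] = release_tag
    if external_id is not None:
        # Must be unique for a given prompt deployment.
        body["external_id"] = external_id
    return body

# Hypothetical example values:
body = build_execute_prompt_request(
    "my-deployment", {"user_query": "Hello"}, release_tag="production"
)
print(json.dumps(body, indent=2))
```

The body can then be POSTed with any HTTP client, with the API key supplied in the request headers as shown in the curl example below.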

Response

This endpoint returns a union of two possible results:
Fulfilled
OR
Rejected
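Because the response is a union, callers typically branch on the state discriminator shown in the 200 example. A minimal sketch, assuming a REJECTED response carries an error field (an assumption; consult the Rejected schema for the exact shape):

```python
def handle_response(response: dict):
    """Branch on the union's "state" discriminator.

    FULFILLED responses carry an "outputs" list (see the 200 example);
    the "error" field on REJECTED responses is an assumption.
    """
    state = response.get("state")
    if state == "FULFILLED":
        return [output["value"] for output in response["outputs"]]
    if state == "REJECTED":
        raise RuntimeError(f"Prompt execution rejected: {response.get('error')}")
    raise ValueError(f"Unexpected state: {state!r}")
```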
curl -X POST https://predict.vellum.ai/v1/execute-prompt \
  -H "X-API-KEY: <apiKey>" \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": [
      {
        "type": "STRING",
        "name": "string",
        "value": "string"
      }
    ]
  }'
200
Successful
{
  "state": "FULFILLED",
  "execution_id": "string",
  "outputs": [
    {
      "type": "STRING",
      "value": "string"
    }
  ],
  "meta": {
    "usage": {
      "output_token_count": 0,
      "input_token_count": 0,
      "input_char_count": 0,
      "output_char_count": 0,
      "compute_nanos": 0
    },
    "model_name": "string",
    "latency": 0,
    "deployment_release_tag": "string",
    "prompt_version_id": "string",
    "finish_reason": "LENGTH"
  },
  "raw": {
    "string": {}
  }
}
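Reading a fulfilled payload like the one above, token usage and model details live under the meta key. A short sketch of pulling them out for logging (field names taken from the 200 example):

```python
def summarize_meta(fulfilled: dict) -> str:
    """Format the usage and model details from a FULFILLED response's meta
    object into a one-line summary, using the fields shown in the 200 example."""
    meta = fulfilled["meta"]
    usage = meta["usage"]
    return (
        f"model={meta['model_name']} "
        f"tokens_in={usage['input_token_count']} "
        f"tokens_out={usage['output_token_count']} "
        f"finish={meta['finish_reason']}"
    )
```

Note that meta is only populated to the extent the request opted in via expand_meta.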