Registered Prompts

Register Prompt

Beta
POST
https://api.vellum.ai/v1/registered-prompts/register
Registers a prompt within Vellum and creates associated Vellum entities. Intended to be used by integration partners, not directly by Vellum users. Under the hood, this endpoint creates a new sandbox, a new model version, and a new deployment.

Request

This endpoint expects an object.
label
string
A human-friendly label for corresponding entities created in Vellum.
name
string
A uniquely-identifying name for corresponding entities created in Vellum.
prompt
object
Information about how to execute the prompt template.
model
string
The initial model to use for this prompt.
parameters
object
The initial model parameters to use for this prompt.
provider
optional enum

The initial LLM provider to use for this prompt.

  • ANTHROPIC - Anthropic
  • AWS_BEDROCK - AWS Bedrock
  • AZURE_OPENAI - Azure OpenAI
  • COHERE - Cohere
  • GOOGLE - Google
  • HOSTED - Hosted
  • MOSAICML - MosaicML
  • OPENAI - OpenAI
  • FIREWORKS_AI - Fireworks AI
  • HUGGINGFACE - HuggingFace
  • MYSTIC - Mystic
  • PYQ - Pyq
  • REPLICATE - Replicate
meta
optional map from strings to any
Optionally include additional metadata to store along with the prompt.
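
For illustration only (not an official SDK snippet), the request body documented above could be assembled as a Python dict along these lines. Every value shown is a placeholder assumption, including the optional provider and meta fields.

# Illustrative request body built from the fields documented above.
# All values are placeholders, not API defaults.
payload = {
    "label": "My Prompt",
    "name": "my-prompt",
    "prompt": {
        "prompt_block_data": {
            "version": 1,
            "blocks": [
                {"id": "block-1", "block_type": "CHAT_MESSAGE", "properties": {}}
            ],
        },
        "input_variables": [{"key": "user_question"}],
    },
    "model": "gpt-3.5-turbo",
    "parameters": {"temperature": 0.7, "max_tokens": 256},
    "provider": "OPENAI",                  # optional; one of the enum values above
    "meta": {"source": "my-integration"},  # optional free-form metadata
}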

Response

This endpoint returns an object.
prompt
object
Information about the generated prompt.
sandbox_snapshot
object
Information about the generated sandbox snapshot.
sandbox
object
Information about the generated sandbox.
model_version
object
Information about the generated model version.
prompt_version_id
string
The ID of the generated prompt version.
deployment
object
Information about the generated deployment.
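
A minimal sketch of making the call and reading back the generated IDs, assuming the payload dict from the request sketch above and the Python requests library; field access mirrors the response shape documented above.

import requests

# Sketch only, not official client code: POST the payload assembled above
# and pull the IDs of the entities Vellum created.
api_key = "<apiKey>"  # your Vellum API key

response = requests.post(
    "https://api.vellum.ai/v1/registered-prompts/register",
    headers={"X_API_KEY": api_key, "Content-Type": "application/json"},
    json=payload,  # request body from the sketch above
)
response.raise_for_status()
data = response.json()

deployment_id = data["deployment"]["id"]      # ID of the generated deployment
prompt_version_id = data["prompt_version_id"]
sandbox_id = data["sandbox"]["id"]
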
POST
/v1/registered-prompts/register
curl -X POST https://api.vellum.ai/v1/registered-prompts/register \
  -H "X_API_KEY: <apiKey>" \
  -H "Content-Type: application/json" \
  -d '{
  "label": "label",
  "name": "name",
  "prompt": {
    "prompt_block_data": {
      "version": 1,
      "blocks": [
        {
          "id": "id",
          "block_type": "CHAT_MESSAGE",
          "properties": {}
        }
      ]
    },
    "input_variables": [
      {
        "key": "key"
      }
    ]
  },
  "model": "model",
  "parameters": {
    "temperature": 1.1,
    "max_tokens": 1,
    "top_p": 1.1,
    "frequency_penalty": 1.1,
    "presence_penalty": 1.1
  }
}'
Response
{
  "prompt": {
    "id": "id",
    "label": "label"
  },
  "sandbox_snapshot": {
    "id": "id"
  },
  "sandbox": {
    "id": "id",
    "label": "label"
  },
  "model_version": {
    "id": "id",
    "label": "label"
  },
  "prompt_version_id": "prompt_version_id",
  "deployment": {
    "id": "id",
    "name": "name",
    "label": "label"
  }
}