Retrieve Provider Payload

POST (Beta)

Given a set of input variable values, compiles the exact payload that Vellum would send to the configured model provider for execution if the execute-prompt endpoint were invoked. Note that this endpoint does not actually execute the prompt or make an API call to the model provider.

This endpoint is useful if you don’t want to proxy LLM provider requests through Vellum and prefer to send them directly to the provider yourself. Note that no guarantees are made about the format of this API’s response schema, other than that it will be a valid payload for the configured model provider. You should not try to parse or derive meaning from the response body; instead, pass it directly to the model provider as-is.
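As a sketch of the intended workflow, the following Python example compiles a payload and forwards it to the provider verbatim. The endpoint path, base URL, auth header, and input schema shown are assumptions for illustration (confirm them against the API reference), as is the choice of OpenAI as the configured provider:

```python
import os

import requests

# Step 1: ask Vellum to compile the provider payload (no model call is made).
# NOTE: the endpoint path, base URL, auth header, and input shape below are
# assumptions for illustration -- verify them against the API reference.
resp = requests.post(
    "https://api.vellum.ai/v1/deployments/provider-payload",
    headers={"X-API-KEY": os.environ["VELLUM_API_KEY"]},
    json={
        "deployment_name": "my-prompt-deployment",  # hypothetical deployment
        "inputs": [
            {"name": "question", "type": "STRING", "value": "What is RAG?"},
        ],
    },
)
resp.raise_for_status()
payload = resp.json()["payload"]

# Step 2: forward the compiled payload to the provider as-is, without parsing
# it. This assumes the deployment targets OpenAI's chat completions API and
# that the payload came back as a JSON object; substitute your provider's URL
# and credentials as appropriate.
provider_resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json=payload,
)
print(provider_resp.json())
```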

We encourage you to seek advice from Vellum Support before integrating with this API for production use.

Request

This endpoint expects an object.
inputs (list of objects, Required)

The list of inputs defined in the Prompt’s deployment with their corresponding values.

deployment_id (string, Optional)

The ID of the deployment. Must provide either this or deployment_name.

deployment_name (string, Optional)

The name of the deployment. Must provide either this or deployment_id.

release_tag (string, Optional)

Optionally specify a release tag if you want to pin to a specific release of the Prompt Deployment.

expand_meta (object, Optional)

Optionally specify which subsets of the metadata tracked by Vellum you’d like included in the response’s meta field.
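Taken together, a hypothetical request body might look like the sketch below; every identifier and tag shown is a placeholder:

```python
# A hypothetical request body, identifying the deployment by ID this time
# and pinning to a specific release of the Prompt Deployment.
request_body = {
    "deployment_id": "d-1234-abcd",  # or "deployment_name"; provide exactly one
    "release_tag": "production",     # optional pin to a specific release
    "inputs": [
        {"name": "question", "type": "STRING", "value": "What is RAG?"},
    ],
    # "expand_meta": {...},  # optionally opt into extra response metadata
}
```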

Response

This endpoint returns an object.
payload (map from strings to any, or string)

The compiled payload, exactly as Vellum would send it to the configured model provider.

meta (object, Optional)

The subset of the metadata tracked by Vellum during Prompt Deployment compilation that the request opted into with expand_meta.
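Because payload may come back as either a JSON object or a raw string, client code forwarding it should handle both cases without inspecting the contents. A minimal sketch, with placeholder provider URL and headers:

```python
import requests


def forward_to_provider(payload, provider_url: str, headers: dict):
    """Send the compiled payload to the provider without inspecting it.

    Per this endpoint's response schema, `payload` may be a dict (a JSON
    body) or a raw string; pass either through unmodified.
    """
    if isinstance(payload, str):
        # Raw string payload: send the body unmodified.
        return requests.post(provider_url, headers=headers, data=payload)
    # JSON object payload: send it as-is.
    return requests.post(provider_url, headers=headers, json=payload)
```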