Changelog | May 2025
Vertex AI Claude 4 Models via Vellum
May 27th, 2025
We now support both of Anthropic’s newest Claude 4 Models hosted on Vertex AI.
AWS Bedrock Claude 4 Models via Vellum
May 27th, 2025
We now support both of Anthropic’s newest Claude 4 Models hosted on AWS Bedrock.
Global Search Navigation
May 27th, 2025
You can now navigate Vellum using global search and keyboard shortcuts. Open global search by clicking the search bar or using cmd-K, then press / to display available navigation options and quickly jump to different sections of the platform without using your mouse.
The global search supports:
- Search functionality: Find any type of entity — Prompt, Workflow, Test Suite, or Document Index
- Quick navigation: Jump to specific pages by typing "/" followed by the page name
- Keyboard shortcuts: Navigate entirely with your keyboard for improved efficiency
Model Icons
May 22nd, 2025
We now show the logo of the LLM Provider alongside models throughout Vellum, making it easier to distinguish one model from another.
Anthropic’s Newest Claude 4 Models via Vellum
May 22nd, 2025
We now support both of Anthropic’s newest Claude 4 Models released on May 22nd, 2025.
- Claude Opus 4
- Claude Sonnet 4
Simplified Side Navigation
May 20th, 2025
We’ve simplified Vellum’s side navigation to remove a level of nesting and bring what matters most front and center. Now, when working on a Prompt or Workflow, you can navigate between its Sandbox, Evaluation, and Deployments all from within the main page, rather than relying on nested navigation within the side nav.
With this update, we’ve also made sweeping changes to page layouts overall, making all pages consistent in their breadcrumbs, menus, and action buttons for a more intuitive user experience.
AI-Powered Prompt Improver (Beta)
May 19th, 2025
You can now use AI to generate improved prompts from within Prompt Sandboxes. This feature uses Anthropic under the hood and generates new prompts that follow prompting best practices.
The Prompt Improver works particularly well for Anthropic models, but should apply to models from other providers, too. To use this feature:
- Click the “Use AI to improve this prompt” button in your Prompt Sandbox
- Wait for the improved prompt to be generated (up to 5 minutes)
- Review the diff showing changes between your original prompt and the improved version
- Click “Apply” to replace your existing prompt with the improved version
This feature is in beta and we welcome feedback. Note that this feature must be enabled by an organization admin and you must be comfortable with using Anthropic as a data subprocessor.
Expression Inputs on Final Output Nodes
May 13th, 2025
Previously, Final Output Nodes only allowed you to reference values like Node Outputs, Inputs, etc. Now, you can do more with these values and write an Expression to define the Final Output you want. For example, you can use the accessor operator to reference an attribute in a JSON output from a Prompt Node.
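For illustration, here is a minimal Python sketch of the idea behind the accessor operator: pulling a single attribute out of a Prompt Node’s JSON output before exposing it as the final output. The names (`prompt_output`, `final_output`) and the JSON shape are hypothetical, and this is plain Python, not Vellum’s actual expression syntax.

```python
import json

# Hypothetical JSON string returned by a Prompt Node
prompt_output = json.loads('{"answer": "42", "confidence": 0.9}')

# The accessor operator lets a Final Output expose just one attribute
# of the JSON, rather than the whole object
final_output = prompt_output["answer"]

print(final_output)  # -> 42
```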
Smart Labels for Node Input Variables
May 9th, 2025
Previously, Node input labels remained static regardless of their content. Now, input labels automatically update to reflect the value or expression they contain, making your Workflows more intuitive and self-documenting. You can set an explicit label at any time, at which point it will no longer auto-update.
Configurable Data Retention Policies
May 7th, 2025
Enterprise customers can now configure data retention policies for their organization. This new feature allows you to:
- Set whether monitoring data is retained indefinitely (default) or for a specific time period
- Choose from predefined retention periods (30, 60, 90, or 365 days)
Data retention settings can be configured from the Organization Settings page under Advanced Settings.
DeepSeek V3 via Azure AI Foundry
May 2nd, 2025
We now support the self-hosted DeepSeek V3 model via Azure AI Foundry in Vellum.
GPT-4.1 and GPT-4.5 Models via Azure OpenAI
May 2nd, 2025
We now support the self-hosted GPT-4.1 and GPT-4.5 models via Azure OpenAI in Vellum.
OpenAI Base64 PDF Files
May 2nd, 2025
We now support OpenAI models’ newly added capability to accept Base64-encoded documents within API requests.
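As a rough sketch of what this enables, the example below Base64-encodes a local PDF and embeds it directly in a chat message payload instead of uploading the file separately. The file path is hypothetical, and the exact content-part field names shown are our assumption about OpenAI’s file format; check OpenAI’s documentation for the authoritative shape.

```python
import base64

# Read a local PDF and encode it as Base64 (path is hypothetical)
with open("report.pdf", "rb") as f:
    pdf_b64 = base64.b64encode(f.read()).decode("utf-8")

# Assumed shape of a file content part in a chat message: the encoded
# document is embedded as a data URI alongside the text prompt
message = {
    "role": "user",
    "content": [
        {
            "type": "file",
            "file": {
                "filename": "report.pdf",
                "file_data": f"data:application/pdf;base64,{pdf_b64}",
            },
        },
        {"type": "text", "text": "Summarize this document."},
    ],
}
```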
Structured Outputs and JSON Mode Support for xAI Models
May 2nd, 2025
Grok models hosted on xAI can now return responses in JSON format, either via schema-agnostic JSON Mode or via schema-enforced Structured Outputs.
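For context, here is a minimal sketch of what JSON Mode looks like when calling Grok directly, assuming xAI’s OpenAI-compatible Chat Completions API. The base URL, API key, and model name are assumptions for illustration; within Vellum you would configure this on the Prompt itself.

```python
from openai import OpenAI

# Assumes xAI exposes an OpenAI-compatible endpoint; supply a real key in practice
client = OpenAI(base_url="https://api.x.ai/v1", api_key="YOUR_XAI_API_KEY")

response = client.chat.completions.create(
    model="grok-3",  # hypothetical model name
    messages=[{"role": "user", "content": "List three colors as JSON."}],
    # JSON Mode: constrains the model to emit valid JSON without enforcing
    # a particular schema (use Structured Outputs for schema enforcement)
    response_format={"type": "json_object"},
)

print(response.choices[0].message.content)
```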
Microsoft OmniParser V2 via Azure AI Foundry
May 2nd, 2025
We’ve added support for Microsoft’s OmniParser V2 via Azure AI Foundry to Vellum.
Gemini Vertex AI Models Are Now Region Specific
May 2nd, 2025
Previously, Gemini Vertex AI Models were added to your account in a region-agnostic way: you had to specify which region to use in the model’s configuration, limiting you to a single instance of the model.
We’ve pivoted to creating region-specific instances for you to select. This lets you enable a model in multiple regions and set your Workflows up for success with fallbacks across regions.