Changelog | March, 2025

File Hosting for Images and PDFs

March 11th, 2025

Until now, when you provided an image to a Prompt in Vellum, it had to be either a public URL or a base64-encoded representation of the image. With this update, we now support secure file hosting: when you drag and drop an image (and now PDFs too!) into a Prompt, Vellum securely hosts the file on your behalf.

The end result is that you can now provide much larger images and PDFs to LLMs within Vellum without worrying about degraded performance or slower page loads.

PDFs as a Prompt Input

March 11th, 2025

Recently, certain model providers like Anthropic have begun supporting PDFs as native LLM inputs using a special content type called document (check out their docs for details here).

This is similar to how you might provide a multi-modal model with an image as an input, but now you can provide a PDF as well.

Vellum now also supports passing PDFs as inputs to a Prompt for models that support it. You can do this by dragging and dropping a PDF file into a Chat History variable in a Prompt. The mechanics are very similar to how you might work with images (see details here).

This is particularly useful for data extraction tasks, where you might want to extract structured data from a PDF and then use that data to power some downstream process.
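For reference, here's roughly what the document content type looks like when calling Anthropic's Messages API directly: a base64-encoded PDF wrapped in a `document` content block, paired with a text instruction. The content-block shape follows Anthropic's published schema; the file path, helper names, and prompt text below are illustrative.

```python
import base64


def pdf_content_block(path: str) -> dict:
    """Build an Anthropic-style `document` content block from a local PDF."""
    with open(path, "rb") as f:
        data = base64.standard_b64encode(f.read()).decode("utf-8")
    return {
        "type": "document",
        "source": {
            "type": "base64",
            "media_type": "application/pdf",
            "data": data,
        },
    }


def extraction_message(pdf_path: str) -> dict:
    """A user message pairing a PDF with a data-extraction instruction."""
    return {
        "role": "user",
        "content": [
            pdf_content_block(pdf_path),
            {"type": "text", "text": "Extract the invoice number and total as JSON."},
        ],
    }
```

When you drop a PDF into a Chat History variable, Vellum handles this encoding and message construction for you.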

Support for Qwen QwQ Models via Groq

March 11th, 2025

We’ve added support for several of Qwen’s newest models, including QwQ 32B, via Groq’s preview models.

We’ve added the following models:

  • QwQ 32B
  • Qwen 2.5 Coder 32B
  • Qwen 2.5 32B

Support for Qwen QwQ 32B via Fireworks AI

March 11th, 2025

We’ve added support for Qwen’s newest QwQ 32B model via Fireworks AI.

Webhooks

March 10th, 2025

It’s now possible to receive real-time updates about actions taking place in Vellum using Webhooks.

Webhooks

From the organization settings page, you can configure a webhook endpoint with a custom list of events you care about. You can further customize it with your own auth configurations.

This is useful if you’d like to store Vellum monitoring data in your own external data stores. For example, you might save events to a Data Warehouse to power a custom health dashboard.
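On the receiving end, a webhook consumer can be a small HTTP handler that checks the auth you configured and routes on the event type before forwarding to your data store. The sketch below is a minimal illustration: the bearer-token scheme, header name, and event payload fields are assumptions for the example, not Vellum's documented schema.

```python
import json

# Assumed to match the auth configuration set up in Vellum's organization settings.
EXPECTED_TOKEN = "my-shared-secret"


def handle_webhook(headers: dict, raw_body: bytes) -> tuple:
    """Authenticate, parse, and route an incoming webhook event.

    Returns an (http_status, message) pair. The header name and payload
    fields here are illustrative, not Vellum's actual schema.
    """
    if headers.get("Authorization") != f"Bearer {EXPECTED_TOKEN}":
        return 401, "unauthorized"
    event = json.loads(raw_body)
    event_type = event.get("type", "unknown")
    # In a real consumer you'd enqueue the event or write it to your
    # Data Warehouse here.
    return 200, f"accepted {event_type}"
```

Rejecting unauthenticated requests before parsing the body keeps the endpoint safe to expose publicly.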

Keep an eye out, as more event types will be added soon!

Workflow Deployment Executions - Cost Column

March 7th, 2025

You can now see the total cost per Workflow Execution for a given Workflow Deployment in its Executions table. The column can be shown or hidden via the “columns” menu.

Cost Column

This is useful for getting a sense of how much a given AI use-case costs to support. This column will be populated for new Workflow Executions going forward and sums the costs associated with all Prompt invocations within the Workflow’s execution (including nested invocations within Subworkflow Nodes, Map Nodes, etc.).
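Conceptually, the total is a recursive sum over the execution tree. A toy sketch of that aggregation, where the dict shape is invented purely for illustration:

```python
def total_cost(execution: dict) -> float:
    """Sum Prompt invocation costs across an execution tree, including
    nested Subworkflow Node and Map Node executions.

    The node shape ({"prompt_cost": ..., "children": [...]}) is
    illustrative, not Vellum's API.
    """
    own = execution.get("prompt_cost", 0.0)
    nested = sum(total_cost(child) for child in execution.get("children", []))
    return own + nested
```

A Map Node that fans out to several Prompt invocations simply contributes one child subtree per iteration.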

We’ll be exposing more cost metrics throughout Vellum in the coming weeks. Stay tuned!

Prompt Sandbox Pagination

March 5th, 2025

We’ve added pagination to the Prompt Sandbox page. Now, when you have a large number of Scenarios in a Prompt Sandbox, they’ll be split across multiple pages. You can navigate between pages using the pagination controls at the bottom of the page.

Prompt Sandbox Pagination

This should result in performance improvements for those with large Prompt Sandboxes.

Global Search

March 3rd, 2025

We’ve added a long-awaited feature to Vellum: Global Search 🎉 You can now search across all your Prompts, Workflows, Document Indexes, and more using the new Search bar in the Vellum side nav.

Global Search Side Nav

Doing so will pull up a search bar where you can search for any resource in your Workspace and navigate directly to it from the search results.

Global Search Omnibox

You can also access Global Search from any page through the keyboard shortcut Cmd/Ctrl + K. Give it a try and let us know what you think!