Changelog | January 2025
Prompt Node Cost and Model Name in Workflows
You can now see the token cost and model name in Prompt Node results
when invoking a Workflow Deployment via API. To opt in, pass `true` for
the `expand_meta.cost` or `expand_meta.model_name`
parameter on either Execute Workflow endpoint.
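As a rough sketch, the request body might look like the following. The endpoint URL, field names, and deployment name here are illustrative assumptions, not the exact schema; consult the API reference for the authoritative shape.

```python
import json

# Hypothetical Execute Workflow request payload opting in to the new
# cost and model-name metadata on Prompt Node results.
payload = {
    "workflow_deployment_name": "my-workflow",  # hypothetical deployment
    "inputs": [
        {"type": "STRING", "name": "query", "value": "Hello"},
    ],
    # Request the expanded metadata fields.
    "expand_meta": {
        "cost": True,
        "model_name": True,
    },
}

# Sending the request (not executed here) might look like:
# import requests
# resp = requests.post(
#     "https://predict.vellum.ai/v1/execute-workflow",
#     headers={"X-API-KEY": "<your-api-key>"},
#     json=payload,
# )

print(json.dumps(payload["expand_meta"]))
```

With these flags set, each Prompt Node result in the response should include its token cost and the name of the model that produced it.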
Support for Gemini 2.0 Flash Thinking Mode
January 15th, 2025
We’ve added support for the Gemini 2.0 Flash Thinking Mode model.
Workflow Outputs Panel
January 10th, 2025
You’ll now see a new “Outputs” button that opens the new “Workflow Outputs” panel in a Workflow Sandbox.
From here, you can see all the outputs that your Workflow produces, and easily navigate to the Nodes that produce them. In the coming weeks, you’ll also be able to directly edit your Workflow’s outputs from this panel.
Newly Added Support for DeepSeek Models
January 2nd, 2025
We’ve added support for DeepSeek AI models.
The integration launches with support for DeepSeek V3 Chat.
Function Call Inputs in Chat Messages
January 2nd, 2025
There is now first-class support for Function Call inputs to Chat Messages. This allows you to simulate the behavior of a Function Call output from a model in Vellum as part of a message in Chat History.
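To make the idea concrete, here is a hypothetical sketch of a Chat History that includes a simulated Function Call output from the model. The role names, field names, and the `get_weather` function are illustrative assumptions rather than the exact Vellum schema.

```python
# A Chat History where the assistant turn carries a Function Call value,
# simulating the model having requested a tool invocation.
chat_history = [
    {
        "role": "USER",
        "content": {"type": "STRING", "value": "What's the weather in Paris?"},
    },
    {
        "role": "ASSISTANT",
        "content": {
            "type": "FUNCTION_CALL",
            "value": {
                "name": "get_weather",            # hypothetical function
                "arguments": {"city": "Paris"},   # arguments the model "chose"
            },
        },
    },
]

# Downstream messages (e.g. a function result) can then respond to this
# simulated call as if the model had emitted it.
print(chat_history[1]["content"]["type"])
```

Seeding Chat History this way is useful for testing prompts that must handle tool calls without waiting for a live model to emit one.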
To see this feature in action, check out the video below: