Streamline AI App Development with Vellum’s Workflows
About Workflows
Workflows help you quickly prototype, deploy, and manage complex chains of LLM calls and business logic. We solve the “whack-a-mole” problem encountered by companies that use popular open source frameworks to build AI applications, but are hesitant to make changes for fear of introducing regressions in production.
The Workflows UI consists of a graphical app builder where you can string together nodes and test different input values against the resulting system. Each prompt can also be tested extensively through Playground & Test Suites. When implemented effectively, Workflows can help you build advanced LLM applications.
Connecting Workflow Nodes and Defining Variables
Workflow nodes are connected by linking the output of one node to the input of another. For any node, its input variables can be populated either by the results of an upstream node or by the values of global variables.
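To make the wiring concrete, here is a minimal Python sketch of the idea. The function and variable names are hypothetical stand-ins for nodes, not Vellum SDK calls; the point is simply that each node’s inputs are filled either from an upstream node’s output or from a globally defined input variable.

```python
# Hypothetical stand-ins for two Prompt Nodes; not Vellum SDK code.

def summarize_node(support_ticket: str) -> str:
    """Prompt Node 1: condenses a raw support ticket into a short summary."""
    return f"Summary: {support_ticket[:60]}"

def draft_reply_node(summary: str, tone: str) -> str:
    """Prompt Node 2: 'summary' is wired from the upstream node's output,
    while 'tone' is populated from a global input variable."""
    return f"({tone}) Reply based on -> {summary}"

# Globally defined input variables (what you'd set in the Input Variables dropdown).
global_inputs = {
    "support_ticket": "My order arrived damaged and I'd like a replacement.",
    "tone": "friendly",
}

# Linking the output of one node to the input of the next.
summary = summarize_node(global_inputs["support_ticket"])
reply = draft_reply_node(summary=summary, tone=global_inputs["tone"])
print(reply)
```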
When two nodes are successfully connected, a solid purple line appears between them and the connection points turn blue. Here’s an example of a workflow that’s connected successfully:
Running a Workflow
Each variable in a node can either take the value of an upstream node or be defined globally. To define variables globally, populate them in the Input Variables dropdown before running a workflow. You can define as many scenarios as you want; each scenario is a unique set of input values that will be sent to the workflow.
Variables can be added one by one using the Add button or automatically using Auto-Add, which looks at all the variables in the workflow and adds them to the scenario.
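For intuition, each scenario can be thought of as one complete set of values for the Workflow’s input variables. The sketch below is a hedged illustration; the variable names and the run_workflow stub are hypothetical, not Vellum’s API.

```python
# Hypothetical illustration: each scenario is one complete set of input values.
scenarios = [
    {"support_ticket": "My order arrived damaged.", "tone": "friendly"},
    {"support_ticket": "I was charged twice this month.", "tone": "formal"},
    {"support_ticket": "How do I reset my password?", "tone": "concise"},
]

def run_workflow(inputs: dict) -> str:
    """Stand-in for executing the Workflow graph once with a given scenario."""
    return f"Drafted a {inputs['tone']} reply to: {inputs['support_ticket']}"

# Running the Workflow executes the same graph once per scenario.
for i, inputs in enumerate(scenarios, start=1):
    print(f"Scenario {i}: {run_workflow(inputs)}")
```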
Once all the variables are selected for each prompt (either as values of upstream nodes or defined globally), you’re ready to Run your Workflow!
When you Run the Workflow (the purple button in the top-right corner), you will see the execution path of the Workflow in green and the intermediate results at each step. If the results at the end of the Workflow look surprising, it may be a good idea to check what the responses look like at each step.
Here’s an example of a workflow that’s executed successfully:
Node Mocking
Workflow development is best done iteratively. However, this can become prohibitively expensive in both token consumption and runtime if there are Prompt Nodes early in the Workflow that you have to frequently re-run just to get to the part of the Workflow you actually want to test. To help speed up Workflow development, you can mock out the execution of a given node: instead of running the node itself, the Workflow skips it and returns the hard-coded output(s) you define.
Once defined, you can easily toggle the mock on and off to go back and forth between mocking the node and actually executing the Prompt to see your Workflow work end-to-end. This also lets you save your mocks without needing to delete them when you’d like to actually execute the node. During a workflow run, mocked nodes are outlined in yellow to differentiate them from nodes that are actually executed.
These mocks are only defined within the context of Workflow Sandboxes, and are defined per Scenario. They do not get deployed with your Workflow Deployments and do not affect behavior when invoking Workflow Deployment APIs.
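As a rough illustration of what mocking does, here is a hedged sketch in plain Python; the function names and mock fields are hypothetical, not Vellum’s SDK. When a mock is toggled on, the node’s real execution is skipped and the hard-coded output is returned instead.

```python
# Hypothetical illustration of node mocking; not Vellum SDK code.

def expensive_prompt_node(ticket: str) -> str:
    """Stand-in for a Prompt Node that costs tokens and time to run."""
    # Imagine an LLM call happening here.
    return f"LLM-generated summary of: {ticket}"

def run_node(ticket: str, mock_enabled: bool, mock_output: str) -> str:
    """If the mock is toggled on, skip execution and return the hard-coded output."""
    if mock_enabled:
        return mock_output                 # shown outlined in yellow in the UI
    return expensive_prompt_node(ticket)   # normal execution

# While iterating on downstream nodes, keep the mock on to save tokens and time.
summary = run_node(
    ticket="My order arrived damaged.",
    mock_enabled=True,
    mock_output="Customer reports a damaged order and wants a replacement.",
)
print(summary)
```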
The following nodes support mocking:
- Prompt Nodes
- Subworkflow Nodes
Check out the video below for a full demo of Workflow Node Mocking.