Workflows

Building Common LLM Architectures with Vellum Workflows

With a large number of supported node types (full details here: Experimenting with Workflows) and few limits on how they can be connected to each other, the range of architectures and applications you can create with Workflows is vast.

The list of architectures below is not exhaustive; we’re continuing to build it out. If you come up with an interesting architecture that you think the community might benefit from, please reach out so we can add it here.

1. Create a Retrieval Augmented Generation System

LLM applications often require specific context from a vector DB to be added to the prompt. Forget signing up for multiple systems and getting stuck on micro-decisions: with Vellum you can prototype a RAG system in minutes. The steps below walk through it, and a code sketch of the same flow follows.

  • Step 1: Create a Document Index and upload your documents (follow this article for tips: Uploading Documents)
  • Step 2: Add a Search Node in your Workflow
  • Step 3: Add a Prompt Node that takes the results of your Search Node as an input variable
  • Step 4: Link to a Final Output or other downstream node (e.g., branch execution with a Conditional Node when the Prompt Node’s result matches a certain value)
  • Step 5: Set up variables and hit Run!

RAG Workflow
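If it helps to see the shape of the logic outside the builder, here is a minimal Python sketch of the Search Node to Prompt Node chain. search_index and call_llm are hypothetical stand-ins for the Document Index search and the model call (they are not Vellum SDK functions), with canned return values so the snippet runs as-is.

    def search_index(query: str, k: int = 3) -> list[str]:
        """Stand-in for the Search Node: return the top-k chunks for a query."""
        return ["chunk one about the topic", "chunk two with more detail"][:k]

    def call_llm(prompt: str) -> str:
        """Stand-in for the Prompt Node: return the model's completion."""
        return "answer grounded in the provided context"

    def rag_answer(question: str) -> str:
        # Step 2: retrieve relevant chunks from the Document Index
        chunks = search_index(question)
        # Step 3: pass the search results into the prompt as an input variable
        context = "\n---\n".join(chunks)
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
        # Step 4: the completion flows to a Final Output (or another downstream node)
        return call_llm(prompt)

    print(rag_answer("What does the document say?"))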

2. Route Relevant Messages Dynamically to a Human

If you’re building an agent that answers questions from users (e.g., a support chatbot), you may want to set up rules so that whenever an incoming message is sensitive (e.g., the user is angry or in a dangerous situation), it’s automatically escalated to a human rather than answered by the LLM. With Workflows you can build that out quickly; a sketch of the routing logic follows the steps below.

  • Step 1: Create a classification prompt (Prompt Node) that labels each incoming message as needing escalation or not
  • Step 2: Create a downstream prompt (Prompt Node) for the LLM to respond to messages that don’t need to be escalated
  • Step 3: Route the classification prompt’s output so that escalations and LLM responses reach two separate Final Output Nodes
  • Step 4: Set up variables and hit Run!

Conditional Routing Workflow
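As with the RAG example, the routing logic can be sketched in a few lines of Python. classify_message and draft_reply are hypothetical stand-ins for the two Prompt Nodes (the keyword check merely simulates what the classification prompt would decide), and the if statement plays the Conditional Node's role.

    def classify_message(message: str) -> str:
        """Stand-in for the classification Prompt Node (Step 1)."""
        sensitive = any(w in message.lower() for w in ("angry", "furious", "unsafe"))
        return "escalate" if sensitive else "respond"

    def draft_reply(message: str) -> str:
        """Stand-in for the downstream response Prompt Node (Step 2)."""
        return f"Thanks for reaching out! Here's help with: {message}"

    def route(message: str) -> dict:
        # Step 3: branch on the classification result
        if classify_message(message) == "escalate":
            return {"handler": "human", "payload": message}          # Final Output A
        return {"handler": "llm", "payload": draft_reply(message)}   # Final Output B

    print(route("I am furious, this is the third outage this week!"))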

3. Calling a Prompt for Each Item in an Array

Workflows support looping: simply draw an edge from a later node back to a node that preceded it. We recommend always including a Conditional Node when introducing a loop to specify when that loop will exit. Here’s an example of a loop that calls a Prompt Node for each item in an array, then exits with the accumulated output; a Python sketch of the same control flow follows the steps.

Workflow Looping Example

Here are the steps involved in this example:

  • Step 1: We define an input_items JSON input containing the list of items we want to iterate over.
  • Step 2: We define a Templating Node that outputs a JSON object with the current item (item) and the accumulated output array (output):
{% if index == 0 %}
  {"item": "{{ input_items[index] }}", "output": [] }
{% elif index < input_items | length %}
  {"item": "{{ input_items[index] }}", "output": {{ json.dumps(output_items['output'] + [prompt_output]) }} }
{% else %}
  {"item": null, "output": {{ json.dumps(output_items['output'] + [prompt_output]) }} }
{% endif %}
  • Step 3: The Templating Node above takes as inputs:
    1. The input_items array, as a JSON input
    2. The current index, read from the node’s own execution counter
    3. The current output_items, using a fallback mapped to a JSON input variable containing an empty array
    4. The current prompt_output, using a fallback mapped to a Text input variable containing an empty string
  • Step 4: The Templating Node feeds into a Conditional Node that routes to the Prompt Node while the prompt’s execution counter is less than the length of the array, and to a Final Output Node in the else branch.
  • Step 5: The Prompt Node receives the edge from the Conditional Node’s IF branch and feeds its output back to the Templating Node.
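For example, with input_items = ["a", "b", "c"], the first pass (index == 0) renders {"item": "a", "output": []}; the next two passes emit the current item alongside the accumulated prompt outputs; and once index reaches 3, item becomes null and the Conditional Node exits the loop.

To make the control flow concrete, here is a minimal Python sketch of the same loop running outside Vellum. run_prompt is a hypothetical stand-in for the Prompt Node, and the while condition plays the Conditional Node's role; this mirrors the Workflow's behavior rather than using any Vellum API.

    def run_prompt(item: str) -> str:
        """Hypothetical stand-in for the Prompt Node invoked once per item."""
        return f"processed:{item}"

    def map_items(input_items: list[str]) -> list[str]:
        output: list[str] = []  # accumulated like output_items['output']
        index = 0               # the Templating Node reads this from its execution counter
        # Conditional Node: keep looping while items remain
        while index < len(input_items):
            item = input_items[index]         # Templating Node selects the current item
            output.append(run_prompt(item))   # Prompt Node runs; its result is accumulated
            index += 1                        # the counter increments on the next execution
        # else branch: item would be null; exit to the Final Output Node with `output`
        return output

    print(map_items(["a", "b", "c"]))  # ['processed:a', 'processed:b', 'processed:c']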

4. Retrying a Prompt Node in Case of Non-Deterministic Failure

Prompt Nodes support two selectable outputs: one from the model in the case of a valid response, and an Error output in the case of a non-deterministic failure. Model hosts fail for all sorts of reasons, including timeouts, rate limits, and server overload. You can make your production-grade LLM features resilient to these failures by adding retry logic to your Workflows!

Workflow Retry Example

Here are the steps involved in this example:

  • Step 1: Define a standard Prompt Node
  • Step 2: Define a Conditional Node (Error Check) that reads the Error output from the Prompt Node and checks whether it is non-null.
  • Step 3: Define another Conditional Node (Retry Check) that reads the Prompt Node’s Execution Counter and checks whether the node has been invoked more than your desired limit (e.g., 3).
  • Step 4: Loop back to the Prompt Node if it’s under the limit, or exit with an error message if it’s over. If the error is null, exit with the Prompt Node’s response. (A sketch of this pattern follows below.)
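Under those assumptions, the retry logic looks roughly like the Python sketch below. run_prompt simulates a flaky model call returning the Prompt Node's two selectable outputs (response, error); it illustrates the pattern and is not the Vellum SDK.

    import random

    MAX_ATTEMPTS = 3  # the Retry Check's limit

    def run_prompt() -> tuple[str | None, str | None]:
        """Return (response, error), mirroring a Prompt Node's two outputs."""
        if random.random() < 0.3:  # simulate a timeout, rate limit, or overload
            return None, "upstream provider error"
        return "model response", None

    def run_with_retries() -> str:
        for attempt in range(1, MAX_ATTEMPTS + 1):  # the Execution Counter
            response, error = run_prompt()
            if error is None:  # Error Check: error is null, exit with the response
                return response
            # Retry Check passed: under the limit, loop back and call the prompt again
        # over the limit: exit with an error message
        return f"Prompt failed after {MAX_ATTEMPTS} attempts"

    print(run_with_retries())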