Workflows

Building Common LLM architectures with Vellum Workflows

With a large number of supported node types (full details here: Experimenting with Workflows) and few limits on how they can be connected to each other, the range of architectures and applications you can build with Workflows is very wide.

The list of architectures below is not exhaustive; we’re continuing to build it out. If you come up with an interesting architecture that you think the community might benefit from, please reach out so we can add it here.

1. Create a Retrieval Augmented Generation (RAG) system

LLM applications often require specific context from a Vector DB to be added to the prompt. Forget signing up for multiple systems and getting stuck on micro-decisions: with Vellum you can prototype a RAG system in minutes. A minimal code sketch of the same flow follows the steps below.

  • Step 1: Create a Document Index and upload your documents (follow this article for tips: Uploading Documents)
  • Step 2: Add a Search Node in your Workflow
  • Step 3: Add a Prompt Node that takes the results of your Search Node as an input variable
  • Step 4: Link to a Final Output or other downstream node (e.g., use a Conditional Node to branch execution if the Prompt Node’s result is a certain value)
  • Step 5: Set up variables and hit Run!
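
If it helps to see the pattern in code, here’s a minimal Python sketch of the same search → prompt flow. The search_index and call_llm helpers are hypothetical stand-ins for the Search Node and Prompt Node (not Vellum APIs); swap in your own retrieval and model calls.

```python
# Minimal RAG sketch: retrieve chunks, add them to a prompt, call the model.
# `search_index` and `call_llm` are hypothetical stand-ins for the Search Node
# and Prompt Node; swap in your actual retrieval and model-calling code.
from typing import List


def search_index(query: str, k: int = 3) -> List[str]:
    # Stand-in for the Search Node: return the top-k chunks for the query.
    return ["<chunk 1>", "<chunk 2>", "<chunk 3>"][:k]


def call_llm(prompt: str) -> str:
    # Stand-in for the Prompt Node: send the prompt to your model host.
    return "<model answer>"


def answer_with_rag(question: str) -> str:
    chunks = search_index(question)
    context = "\n\n".join(chunks)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return call_llm(prompt)  # feeds the Final Output


print(answer_with_rag("What is our refund policy?"))
```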

2. Route relevant messages dynamically to a Human

If you’re building an agent that answers questions from users (e.g., a support chatbot), you may want rules so that any sensitive incoming message (e.g., the user is angry or in a dangerous situation) is automatically escalated to a human. With Workflows you can build that out quickly; a code sketch of the routing logic follows the steps below.

  • Step 1: Create a classification prompt (Prompt Node) to flag incoming messages that need to be escalated
  • Step 2: Create a downstream prompt (Prompt Node) for the LLM to respond to messages that don’t need to be escalated
  • Step 3: Link outputs of the classification prompt to two separate Final Output Nodes
  • Step 4: Set up variables and hit Run!
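
As a rough sketch, the routing boils down to a classify-then-branch pattern. classify_message and generate_reply below are hypothetical stand-ins for the two Prompt Nodes, and the returned dict plays the role of the two Final Outputs.

```python
# Classify-then-branch sketch: sensitive messages go to a human queue,
# everything else gets an LLM-generated reply. `classify_message` and
# `generate_reply` are hypothetical stand-ins for the two Prompt Nodes.


def classify_message(message: str) -> str:
    # Stand-in for the classification Prompt Node. A real prompt would
    # return a label such as "ESCALATE" or "RESPOND".
    return "ESCALATE" if "angry" in message.lower() else "RESPOND"


def generate_reply(message: str) -> str:
    # Stand-in for the downstream Prompt Node that answers routine messages.
    return "<LLM reply>"


def route(message: str) -> dict:
    label = classify_message(message)
    if label == "ESCALATE":
        # Final Output 1: hand off to a human agent.
        return {"route": "human", "message": message}
    # Final Output 2: answer automatically.
    return {"route": "llm", "reply": generate_reply(message)}


print(route("I am really angry about this bill"))
print(route("What are your opening hours?"))
```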

3. Calling a Prompt for each item in an array

Workflows support looping: you simply need to draw an edge from a later node back to a node that preceded it. We recommend always including a Conditional Node when introducing a loop to specify when that loop will exit. Here’s an example of a loop that calls a Prompt Node for each item in an array, then exits with all of the outputs; a plain-Python version of the same loop follows the steps.

Here are the steps involved in this example:

  • Step 1: We define an items JSON input that holds the list of items we want to iterate over.
  • Step 2: We define a Templating Node with an empty array of outputs. This is the variable to which we’ll append each prompt output.
  • Step 3: We add a Code Execution Node that outputs the current item and counter for each iteration of the loop. A Templating Node (Counter) extracts the counter.
  • Step 4: We use a Conditional Node to dictate when to exit the loop.
  • Step 5: We feed the "item" key from the iterator into the Prompt Node we care about executing on each iteration.
  • Step 6: We use a Code Execution Node to append an output on each iteration.
  • Step 7: Finally, the Templating Node returns the output data produced by the previous step’s Code Execution Node.
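
Expressed as plain Python, the loop looks roughly like the sketch below; the hypothetical run_prompt stands in for the Prompt Node, and the counter and exit check mirror the Templating, Code Execution, and Conditional Nodes.

```python
# Loop sketch: call a prompt once per item, collect outputs, exit when done.
# `run_prompt` is a hypothetical stand-in for the Prompt Node.
import json

items_json = '["apple", "banana", "cherry"]'  # the `items` JSON input
items = json.loads(items_json)

outputs = []   # Templating Node: starts as an empty array of outputs
counter = 0    # tracked via the Code Execution / Templating (Counter) Nodes


def run_prompt(item: str) -> str:
    # Stand-in for the Prompt Node executed on each iteration.
    return f"<summary of {item}>"


while counter < len(items):           # Conditional Node: when to exit the loop
    item = items[counter]             # the "item" key from the iterator
    outputs.append(run_prompt(item))  # Code Execution Node: append an output
    counter += 1                      # advance the counter

print(outputs)  # the final Templating Node returns the collected outputs
```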

4. Retrying a Prompt Node in case of non-deterministic failure

Prompt Nodes support two selectable outputs: one from the model in the case of a valid response, and an Error output in the case of a non-deterministic failure. Model hosts fail for all sorts of reasons, including timeouts, rate limits, and server overload. You can make your production-grade LLM features resilient to these failures by adding retry logic to your Workflows! A code sketch of the retry pattern follows the steps below.

Here are the steps involved in this example:

  • Step 1: Define a standard Prompt Node
  • Step 2: Define a Conditional Node (Error Check) that reads the Error output from the Prompt Node and checks whether it’s non-null.
  • Step 3: Define another Conditional Node (Count Check) that reads the Prompt Node’s Execution Counter and checks whether it’s been invoked more than your desired limit (3).
  • Step 4: Loop back to the Prompt Node if it’s under the limit, or exit with an error message if it’s over the limit. If the error is null, exit with the Prompt Node’s response.
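
In code, the retry pattern looks roughly like this. The hypothetical run_prompt returns an (output, error) pair mirroring the Prompt Node’s two selectable outputs, and the attempt counter plays the role of the Execution Counter.

```python
# Retry sketch: re-run the prompt on transient errors, up to a fixed limit.
# `run_prompt` is a hypothetical stand-in for the Prompt Node; it returns
# (output, error), mirroring the node's two selectable outputs.
import random
from typing import Optional, Tuple

MAX_ATTEMPTS = 3  # Count Check: your desired retry limit


def run_prompt(message: str) -> Tuple[Optional[str], Optional[str]]:
    # Stand-in for the Prompt Node. Occasionally simulate a provider error.
    if random.random() < 0.3:
        return None, "rate_limited"
    return "<model response>", None


def run_with_retries(message: str) -> str:
    error = None
    for attempt in range(1, MAX_ATTEMPTS + 1):   # Execution Counter
        output, error = run_prompt(message)
        if error is None:                        # Error Check: error is null
            return output                        # exit with the response
        # Error is non-null: loop back and try again if under the limit.
    return f"Failed after {MAX_ATTEMPTS} attempts: {error}"  # exit with error


print(run_with_retries("Summarize this ticket"))
```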

5. Summarizing the contents of a PDF file

Vellum Document Indexes are typically used to power RAG systems via Search Nodes. However, they can also be used to operate on the entirety of a single file’s contents. In this example, we use a Vellum Document Index not for search, but to leverage the OCR it performs and operate on the raw text extracted from a PDF file. A rough code sketch of the equivalent API calls follows the steps below.

Prerequisites: You need to have …

  • Created a Document Index. Note: it doesn’t matter what embedding model or chunking strategy you choose, since we’re only leveraging the OCR capabilities of the Document Index.
  • Uploaded a PDF file to the Document Index and noted down its ID.
  • Generated a Vellum API Token and saved its value as a Workspace Secret.

Here are the steps involved in this example:

  • Step 1: The input to the Workflow is the ID of a Document that was previously uploaded to a Document Index.
  • Step 2: Use a Templating Node (Document API URL) to construct the URL of the Vellum API endpoint we want to hit.
  • Step 3: Use an API Node (Document API) that hits the Vellum API to retrieve metadata about the Document.
  • Step 4: Use a Templating Node (Processed Document URL) to extract the URL of the processed document from the API response.
  • Step 5: Use an API Node (Processed Document Contents) to retrieve the text contents of the Document.
  • Step 6: Pass those contents to a Prompt Node that summarizes the text.
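
For reference, the same flow outside of Workflows looks roughly like the sketch below. The endpoint path, auth header, and response keys are assumptions made for illustration (check Vellum’s API reference and mirror whatever your Templating and API Nodes actually use), and summarize is a hypothetical stand-in for the Prompt Node.

```python
# Sketch of the document-summarization flow. The endpoint path, auth header,
# and response keys below are illustrative assumptions; mirror the URLs and
# keys your Templating/API Nodes actually use.
import os

import requests

VELLUM_API_KEY = os.environ["VELLUM_API_KEY"]  # your Workspace Secret value
document_id = "<your-document-id>"             # the Workflow input

# Steps 2-3: construct the Vellum API URL and fetch the Document's metadata.
metadata_url = f"https://api.vellum.ai/v1/documents/{document_id}"  # assumed path
metadata = requests.get(
    metadata_url,
    headers={"X_API_KEY": VELLUM_API_KEY},     # assumed auth header name
).json()

# Steps 4-5: extract the processed-document URL and download the OCR'd text.
processed_url = metadata["processed_file"]["download_url"]  # assumed key path
text = requests.get(processed_url).text


def summarize(text: str) -> str:
    # Stand-in for the Prompt Node that summarizes the extracted text.
    return "<summary>"


print(summarize(text))
```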