Building Common LLM Architectures with Vellum Workflows
With a large number of supported node types (full details here: Experimenting with Workflows) and few limits on how they can be connected to each other, the range of architectures and applications you can create using Workflows is vast.
The list of architectures below is not exhaustive; we’re continuing to build it out. If you come up with an interesting architecture that you think the community might benefit from, please reach out so we can add it here.
Create a Retrieval Augmented Generation (RAG) system
LLM applications often require specific context from a Vector DB to be added to the prompt. Forget signing up for multiple systems and getting stuck on micro-decisions; with Vellum, you can prototype a RAG system in minutes.
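If it helps to see the shape of the pattern in code first, here’s a minimal Python sketch of the same retrieve-then-generate loop against Vellum’s API. The endpoint paths, payload fields, and the index/prompt deployment names are illustrative assumptions rather than a verified contract; check the Vellum API reference for specifics.

```python
# Minimal RAG sketch: search a Document Index, then feed the results into a
# prompt. Endpoint paths and payload fields are assumptions for illustration.
import requests

VELLUM_API_KEY = "YOUR_VELLUM_API_KEY"  # in practice, load from an env var or secret store
HEADERS = {"X_API_KEY": VELLUM_API_KEY, "Content-Type": "application/json"}

def retrieve_context(query: str, index_name: str = "my-document-index") -> str:
    """Search Node equivalent: pull relevant chunks from a Document Index."""
    resp = requests.post(
        "https://predict.vellum.ai/v1/search",  # assumed search endpoint
        headers=HEADERS,
        json={"index_name": index_name, "query": query},
    )
    resp.raise_for_status()
    return "\n---\n".join(r["text"] for r in resp.json()["results"])

def answer(query: str) -> str:
    """Prompt Node equivalent: inject the retrieved context into the prompt."""
    resp = requests.post(
        "https://predict.vellum.ai/v1/execute-prompt",  # assumed prompt-execution endpoint
        headers=HEADERS,
        json={
            "prompt_deployment_name": "rag-answer-prompt",  # hypothetical deployment
            "inputs": [
                {"type": "STRING", "name": "context", "value": retrieve_context(query)},
                {"type": "STRING", "name": "question", "value": query},
            ],
        },
    )
    resp.raise_for_status()
    return resp.json()["outputs"][0]["value"]

print(answer("What is our refund policy?"))
```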
Walkthrough
Workflow
Route messages to a Human
If you’re building an agent that answers questions from users (e.g., a support chatbot), you may want to set up rules so that anytime an incoming message is sensitive (e.g., the user is angry or in a dangerous situation), the LLM automatically escalates it to a human. With Workflows, you can build that out quickly.
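In code, the pattern is a classification step followed by a conditional branch. Here’s a hedged Python sketch; `execute_prompt` and `notify_support_team` are hypothetical stand-ins for a Prompt Node and your escalation hook, and the label scheme is an assumption.

```python
# Escalation pattern: classify the message, then branch on the label.

def execute_prompt(deployment_name: str, inputs: dict) -> str:
    """Stand-in for a Vellum Prompt Node (or an execute-prompt API call)."""
    raise NotImplementedError("wire this to your prompt deployment")

def notify_support_team(message: str) -> None:
    """Stand-in for your human-escalation hook (Slack, Zendesk, etc.)."""
    raise NotImplementedError

def handle_incoming_message(message: str) -> str:
    # Prompt Node #1: ask the LLM to label the message, e.g. SENSITIVE or ROUTINE.
    label = execute_prompt("sensitivity-classifier", {"message": message}).strip().upper()
    if label == "SENSITIVE":
        # Conditional Node, branch 1: hand off to a human.
        notify_support_team(message)
        return "A member of our team will be with you shortly."
    # Conditional Node, branch 2: the downstream Prompt Node answers directly
    # (this is the "Add a downstream prompt" step below).
    return execute_prompt("support-answer", {"message": message})
```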
Walkthrough
Add a downstream prompt
Use another Prompt Node for the LLM to respond to messages that don’t need to be escalated.
Workflow
Retrying a Prompt Node in case of non-deterministic failure
Prompt Nodes support two selectable outputs: one from the model in the case of a valid response, and one in the case of a non-deterministic error. Model hosts fail for all sorts of reasons, including timeouts, rate limits, and server overload. You can make your production-grade LLM features resilient to these failures by adding retry logic to your Workflows!
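Outside the visual builder, the same pattern is a bounded retry loop with backoff. Below is a minimal Python sketch; the `PromptResult` shape simply mirrors the two outputs described above (a value on success, an error otherwise) and is an assumption, not Vellum’s actual type.

```python
import random
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class PromptResult:
    """Mirrors the Prompt Node's two selectable outputs: a value OR an error."""
    value: Optional[str] = None
    error: Optional[str] = None

def run_prompt() -> PromptResult:
    """Stand-in for the Prompt Node; fails ~half the time to simulate a flaky model host."""
    if random.random() < 0.5:
        return PromptResult(error="503: model host overloaded")
    return PromptResult(value="Hello from the model!")

def run_with_retries(max_attempts: int = 3, base_delay: float = 1.0) -> str:
    for attempt in range(1, max_attempts + 1):
        result = run_prompt()
        # The Conditional Node's check: is the Error output non-null?
        if result.error is None:
            return result.value
        if attempt < max_attempts:
            # Exponential backoff before re-invoking the Prompt Node.
            time.sleep(base_delay * 2 ** (attempt - 1))
    raise RuntimeError(f"Prompt failed after {max_attempts} attempts: {result.error}")

print(run_with_retries())
```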
Walkthrough
Add a Conditional Node (Error Check)
This node reads the new Error output from the Prompt Node and checks whether it’s non-null.
Workflow
Summarizing the contents of a PDF file
Vellum Document Indexes are typically used to power RAG systems via Search Nodes. However, they can also be used to operate on the entirety of a single file’s contents. In this example, we use a Vellum Document Index not for search, but to leverage the OCR it performs and operate on the raw text extracted from a PDF file.
Prerequisites: You need to have …
- Created a Document Index. Note: it doesn’t matter what embedding model or chunking strategy you choose, since we’re only leveraging the OCR capabilities of the Document Index.
- Uploaded a PDF file to the Document Index and noted down its ID (see the upload sketch after this list).
- Generated a Vellum API Token and saved its value as a Workspace Secret.
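If you’d rather script the upload step, a PDF can be pushed to a Document Index through Vellum’s document upload API. The sketch below is a best-effort illustration: the endpoint URL, form-field names, and response shape are assumptions, so confirm them against the Vellum API reference.

```python
# Upload a PDF so the Document Index can OCR it. The endpoint and form-field
# names here are assumptions for illustration -- verify in the API reference.
import requests

VELLUM_API_KEY = "YOUR_VELLUM_API_KEY"

with open("report.pdf", "rb") as f:
    resp = requests.post(
        "https://documents.vellum.ai/v1/upload-document",  # assumed upload endpoint
        headers={"X_API_KEY": VELLUM_API_KEY},
        data={
            "label": "Q3 Report",
            "add_to_index_names": '["my-document-index"]',  # hypothetical index name
        },
        files={"contents": ("report.pdf", f, "application/pdf")},
    )
resp.raise_for_status()
document_id = resp.json()["document_id"]  # note this down -- it becomes the Workflow input
print(document_id)
```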
Walkthrough
Set the input to the workflow
This will be the ID of a Document that was previously uploaded to a Document Index.
Add an API Node (Document API)
This will ping the Vellum API and retrieve metadata about the Document.
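For reference, the API Node’s request looks roughly like this in Python. The retrieval path and response fields are assumptions based on Vellum’s REST conventions; verify them against the API reference.

```python
# Fetch a Document's metadata by ID. The path and response fields are
# assumptions for illustration.
import requests

VELLUM_API_KEY = "YOUR_VELLUM_API_KEY"  # stored as a Workspace Secret in the Workflow
DOCUMENT_ID = "your-document-id"        # the Workflow input from the previous step

resp = requests.get(
    f"https://api.vellum.ai/v1/documents/{DOCUMENT_ID}",  # assumed retrieve endpoint
    headers={"X_API_KEY": VELLUM_API_KEY},
)
resp.raise_for_status()
document = resp.json()
print(document["label"], document["status"])
```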
Workflow