LLM applications often need specific context retrieved from a vector database and added to the prompt. Instead of signing up for multiple systems and getting stuck on micro-decisions, Vellum lets you prototype a RAG system in minutes.
1. Create a Document Index and upload your documents
Follow this article for tips: Uploading Documents
2. Add a Search Node in your Workflow
Place this anywhere and connect it to the “entrypoint”
3. Add a Prompt Node
The Prompt Node should take the results of your Search Node as an input variable
4. Link to a Final Output or other downstream node
For example, if the Prompt Node result is a certain value, branch execution with a Conditional Node
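The steps above can be sketched in plain Python to show what the Workflow is doing under the hood. This is a minimal illustration, not Vellum SDK code: the `search` and `build_prompt` helpers and the in-memory "index" are all hypothetical stand-ins for the Document Index, Search Node, and Prompt Node.

```python
# Hypothetical sketch of the RAG workflow above; no Vellum SDK calls,
# all names here are illustrative stand-ins.

def search(index: dict[str, str], query: str, k: int = 2) -> list[str]:
    """Stand-in for the Search Node: rank documents by naive keyword
    overlap with the query and return the top-k chunks."""
    terms = set(query.lower().split())
    scored = sorted(
        index.items(),
        key=lambda kv: len(terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(context_chunks: list[str], question: str) -> str:
    """Stand-in for the Prompt Node: inject the search results into the
    prompt as an input variable."""
    context = "\n".join(f"- {c}" for c in context_chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Step 1 analog: a "Document Index" with a couple of uploaded documents.
index = {
    "refunds.md": "Refunds are processed within 5 business days",
    "shipping.md": "Standard shipping takes 3 to 7 days",
}

# Step 2 analog: the Search Node retrieves relevant chunks for the query.
question = "How long do refunds take?"
chunks = search(index, question)

# Step 3 analog: the Prompt Node receives the results as a variable.
prompt = build_prompt(chunks, question)

# Step 4 analog: a Conditional Node branches on the result value
# (here stubbed by inspecting the top retrieved chunk).
result = "refund" if "Refunds" in chunks[0] else "other"
print(result)
```

In a real Workflow the branching would inspect the Prompt Node's model output rather than the retrieved text, but the data flow (index → search → prompt → conditional → output) is the same.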