Prompt Node

A core part of any LLM application, this node represents a call to a Large Language Model (LLM). As with Vellum Prompts, you can use models from any of the major providers or from the open-source community, including OpenAI, Anthropic, Meta, Cohere, Google, and MosaicML, as well as open-source models like Falcon-40B.

Upon creating a Prompt Node, you’ll be asked to import a prompt from an existing Deployment or Sandbox, or to create one from scratch. Prompts are defined by their variables, prompt template, model provider, and parameters. Refer to this help center article to learn more about our prompt syntax (Vellum Prompt Template Syntax).
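To show how variable substitution in a prompt template works, here is a minimal sketch that renders a Jinja-style `{{ variable }}` template locally with the `jinja2` library. The template text and variable names (`tone`, `ticket_body`) are hypothetical examples, not part of Vellum's API; see the linked syntax article for the exact rules Vellum applies.

```python
# A minimal sketch of variable substitution in a prompt template,
# using Jinja-style {{ variable }} placeholders rendered via jinja2.
from jinja2 import Template

# Hypothetical template and variable names, for illustration only.
template = Template(
    "Summarize the following support ticket in a {{ tone }} tone:\n\n{{ ticket_body }}"
)

# At run time, upstream values fill the placeholders to produce the final prompt.
prompt = template.render(
    tone="friendly",
    ticket_body="My order arrived damaged and I need a replacement.",
)
print(prompt)
```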

Prompt Node Interface

The Prompt Node provides a simple interface for configuring your LLM call:

Prompt Node

When you open the Prompt Node, you’ll see a detailed configuration interface where you can set up your prompt, select models, and configure parameters:

Prompt Node Configuration
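To make the parameter settings concrete, the sketch below shows the kinds of values you might configure. The field names mirror common LLM provider parameters and the values are illustrative assumptions; the exact options exposed in the Prompt Node depend on the model you select.

```python
# Illustrative parameter settings for a Prompt Node. These names and values
# are assumptions based on common provider parameters, not Vellum's API.
prompt_node_parameters = {
    "temperature": 0.7,        # higher = more varied output; lower = more deterministic
    "max_tokens": 512,         # cap on the length of the generated completion
    "top_p": 1.0,              # nucleus sampling cutoff
    "frequency_penalty": 0.0,  # discourages repeating the same tokens
}
```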

Key Features

  • Variable Integration: Easily incorporate variables from upstream nodes
  • Model Selection: Choose from a wide range of LLM providers and models
  • Parameter Configuration: Fine-tune model behavior with parameters like Temperature, Max Output Tokens, and more
  • Prompt Editing: Create and edit prompts directly within the workflow
  • Function Calling: Support for structured outputs via function calling with compatible models (see the sketch after this list)
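For the function-calling feature, the sketch below shows what a function definition for structured output typically looks like, in the JSON Schema style used by providers such as OpenAI. The function name and fields are hypothetical; with compatible models, you attach a definition like this so the model returns arguments matching the schema instead of free-form text.

```python
# A hypothetical function definition for structured output via function calling,
# written in the JSON Schema style used by providers such as OpenAI.
extract_order = {
    "name": "extract_order",
    "description": "Extract structured order details from a support ticket.",
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {"type": "string", "description": "The order number."},
            "issue": {"type": "string", "description": "Short description of the problem."},
            "wants_replacement": {"type": "boolean"},
        },
        "required": ["order_id", "issue"],
    },
}
```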