Prompt Retry Logic

Prompt Nodes support two selectable outputs: one from the model in the case of a valid output, and one in the case of a non-deterministic error. Model hosts fail for all sorts of reasons, including timeouts, rate limits, and server overload. You can make your production-grade LLM features resilient to these failures by adding retry logic to your Workflows!
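To make the two-output model concrete, here is a minimal Python sketch. The `PromptResult` shape and `call_model` function are hypothetical stand-ins, not part of any real SDK: exactly one of the two fields is set, mirroring the Prompt Node's valid-output and error outputs.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical shape of a Prompt Node's two selectable outputs:
# exactly one of the fields is populated per execution.
@dataclass
class PromptResult:
    response: Optional[str] = None  # set when the model returns a valid output
    error: Optional[str] = None     # set on a non-deterministic failure

def call_model(fail: bool) -> PromptResult:
    # Stand-in for the model host call; real hosts can fail on
    # timeouts, rate limits, or server overload.
    if fail:
        return PromptResult(error="rate_limited")
    return PromptResult(response="ok")
```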

Implementation Steps

1. Add a standard Prompt Node

2. Add a Conditional Node (Error Check)

This node will read from the Error output of the Prompt Node and check whether it is non-null.

3. Define another Conditional Node (Count Check)

This node will read from the Prompt Node's Execution Counter and check whether it has been invoked more than your desired limit (e.g., 3).

4. Loop back to the Prompt Node

If the count is under the limit, loop back to the Prompt Node; if it is over the limit, exit with an error message. If the error is null, exit with the Prompt Node's response.
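The steps above can be sketched as a plain retry loop. This is a minimal illustration, not a real Workflow API: `prompt_node` is a hypothetical callable returning a dict with `response` and `error` keys, the error check and count check mirror the two Conditional Nodes, and `max_attempts` plays the role of the desired limit.

```python
MAX_ATTEMPTS = 3  # the desired limit from the Count Check step

def run_with_retries(prompt_node, max_attempts: int = MAX_ATTEMPTS):
    execution_count = 0
    while True:
        execution_count += 1    # the Prompt Node's Execution Counter
        result = prompt_node()  # invoke the (hypothetical) Prompt Node
        if result.get("error") is None:
            # Error Check: error is null, so exit with the response
            return result["response"]
        if execution_count >= max_attempts:
            # Count Check: over the limit, so exit with an error message
            raise RuntimeError(
                f"Prompt failed after {execution_count} attempts: {result['error']}"
            )
        # Otherwise, loop back to the Prompt Node
```

For example, a node that fails twice with a timeout and then succeeds would return its response on the third invocation, while a node that fails every time would raise after the third attempt.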

Retry logic for handling non-deterministic failures