Prompt Nodes support two selectable outputs - one from the model in the case of a valid output and one in the case of a non-deterministic error. Model hosts fail for all sorts of reasons, including timeouts, rate limits, and server overload. You can make your production-grade LLM features resilient to these failures by adding retry logic to your Workflows!
If the retry count is under the limit, loop back to the Prompt Node; if it's over the limit, exit with an error message. If the error is null, exit with the Prompt Node's response.
Retry logic for handling non-deterministic failures
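The same control flow can be sketched in plain Python. This is a minimal illustration, not Vellum's implementation: `FlakyModel`, `run_with_retries`, and `MAX_RETRIES` are all hypothetical names, and the simulated model stands in for a Prompt Node's host call.

```python
import time

MAX_RETRIES = 3  # hypothetical retry limit


class FlakyModel:
    """Hypothetical stand-in for a model host that fails
    transiently a fixed number of times before succeeding."""

    def __init__(self, failures: int):
        self.failures = failures

    def call(self, prompt: str) -> str:
        if self.failures > 0:
            self.failures -= 1
            raise TimeoutError("model host timed out")
        return f"response to: {prompt}"


def run_with_retries(model: FlakyModel, prompt: str) -> str:
    attempts = 0
    while True:
        try:
            # Error is null: exit with the model's response.
            return model.call(prompt)
        except (TimeoutError, ConnectionError) as err:
            attempts += 1
            # Over the limit: exit with an error message.
            if attempts >= MAX_RETRIES:
                raise RuntimeError(
                    f"giving up after {MAX_RETRIES} attempts"
                ) from err
            # Under the limit: loop back to the model call
            # after a short backoff.
            time.sleep(0.01 * attempts)
```

With two simulated failures, `run_with_retries(FlakyModel(failures=2), "hi")` recovers and returns the response; with five, it exhausts the limit and raises instead.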