
Parameters
LLM used to evaluate the run.
Optional provider API version to use when calling the model.
The level of reasoning effort to use for the LLM, if supported by the model. Accepted values are none, minimal, low, medium, high, xhigh, or default.
Additional arguments to pass directly to the underlying model’s API.
Instructions for the evaluator. You can use expressions like ${input} to reference the agent input and ${@NodeA.output} to reference upstream node outputs.
Input passed to the evaluator. Supports the agent input editor. You can use expressions like ${input} to reference the agent input and ${@NodeA.output} to reference upstream node outputs.
Credentials
Select an LLM Credential used by the evaluator model.
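As a rough illustration of how the ${input} and ${@NodeA.output} expressions above behave, the sketch below substitutes such references from a context dictionary. This is a hypothetical stand-in written for this page; the product's actual expression engine is internal, and the resolve function, the instruction text, and the context values are all assumptions.

```python
import re

def resolve(template: str, context: dict) -> str:
    # Replace each ${name} reference with the matching context value;
    # leave unknown references untouched. Hypothetical illustration only.
    return re.sub(
        r"\$\{([^}]+)\}",
        lambda m: str(context.get(m.group(1), m.group(0))),
        template,
    )

instructions = "Rate the helpfulness of this answer: ${input}"
context = {"input": "Paris is the capital of France."}
print(resolve(instructions, context))
# → Rate the helpfulness of this answer: Paris is the capital of France.
```

The same mechanism covers upstream references: a key such as "@NodeA.output" in the context resolves ${@NodeA.output} in the template.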
Inputs
The input passed to the evaluator. You can reference it in the Evaluator Input parameter with an expression such as ${input}.
Outputs
Evaluation result with score and feedback.
| Field | Description |
|---|---|
| score | Numeric score for the evaluation. |
| feedback | Natural language feedback for the evaluation. |
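To illustrate how a downstream step might consume the two output fields, here is a minimal sketch assuming the result arrives as a JSON-like object. The field names come from the table above; the pass threshold and the result values are arbitrary assumptions for this example.

```python
# Hypothetical evaluation result with the two documented fields.
result = {
    "score": 0.85,                      # Numeric score for the evaluation
    "feedback": "Accurate but terse.",  # Natural language feedback
}

# Example downstream check: treat scores at or above 0.7 as passing
# (the threshold is an assumption, not part of the node's contract).
PASS_THRESHOLD = 0.7
passed = result["score"] >= PASS_THRESHOLD
print(f"passed={passed}, feedback={result['feedback']}")
# → passed=True, feedback=Accurate but terse.
```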

