Annotation Evaluator Node

This node is best used in conjunction with a Benchmark Trigger node to supply the reference trajectory and feedback, and a Post Run node to supply the current trajectory. Make sure to use a trajectory annotation benchmark when configuring the Benchmark Trigger. Typically, this node is connected to the output of a Post Run node to evaluate the just-completed run.
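As a quick orientation, this is roughly where each input typically comes from and what the node produces, based on the parameter descriptions below; the exact wiring depends on your workflow:

Benchmark Trigger    →  Reference Trajectory, Reference Feedback
Post Run             →  Current Trajectory ({input} when connected directly)
Annotation Evaluator →  Output with score and feedback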

Parameters

Model Name
string
default:"openai/gpt-5-mini"
required
LLM used to evaluate the run.
Reference Feedback
string
required
special-field:node_id
Feedback for the reference agent run. You can simply select the Benchmark Trigger node that supplies this feedback.
Reference Trajectory
json
required
special-field:node_id
Trajectory of a reference agent run. You can simply select the Benchmark Trigger node that supplies this trajectory.
Current Trajectory
json
required
Trajectory of the current run to evaluate. If this node is connected to a Post Run node, you can set this parameter to the expression {input} to use that node's output. Otherwise, set it to an expression that references a Post Run node, for example {@Post_Run.output}.

Credentials

LLM Credential
LLM Credential
required
Select an LLM Credential used by the evaluator model.

Inputs

Input
any
The input passed to the evaluator. Typically, this is the current trajectory from a Post Run node.

Outputs

Output
json
Evaluation result with score and feedback.
Field      Description
score      Numeric score for the evaluation.
feedback   Natural language feedback for the evaluation.
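
For example, the Output might look like the following (values are purely illustrative):

{
  "score": 0.85,
  "feedback": "The current trajectory reaches the same outcome as the reference but skips one intermediate step called out in the reference feedback."
}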