The backend verifier API evaluates a hydrated trace against rubric criteria and returns structured reviews plus aggregate rewards.

Endpoint

Endpoint: POST /api/graphs/verifiers/completions
Authentication: Bearer token via the Authorization header
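
A minimal sketch of building the authenticated headers in Python. The backend base URL and the SYNTH_API_KEY environment variable name are illustrative assumptions, not part of the API:

import os
import requests

# Illustrative: point this at your backend deployment.
BASE_URL = os.environ.get("BACKEND_BASE_URL", "https://backend.example.com")

headers = {
    # Hypothetical env var name for the bearer token.
    "Authorization": f"Bearer {os.environ['SYNTH_API_KEY']}",
    "Content-Type": "application/json",
}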

Request (rubric evaluation)

{
  "job_id": "zero_shot_verifier_rubric_single",
  "trace": {
    "schema_version": "3.0",
    "event_history": [],
    "markov_blanket_message_history": [],
    "metadata": {}
  },
  "rubric": {
    "event": [
      { "id": "legality", "weight": 1.0, "description": "Actions are legal" }
    ],
    "outcome": [
      { "id": "task_completion", "weight": 1.0, "description": "Task is completed" }
    ]
  },
  "options": {
    "event": true,
    "outcome": true,
    "provider": "openai",
    "model": "gpt-4.1-mini"
  }
}
Notes:
  • trace is a v3 trace. In eval jobs and prompt optimization, the backend can hydrate traces from the interceptor store.
  • rubric is the verifier rubric payload shape (event/outcome criteria lists).
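
The sketch below posts the request shown above to the completions endpoint, reusing the headers from the earlier snippet. The payload mirrors the example JSON; the job_id, provider, and model values come from that example rather than being required defaults:

payload = {
    "job_id": "zero_shot_verifier_rubric_single",
    "trace": {
        "schema_version": "3.0",
        "event_history": [],
        "markov_blanket_message_history": [],
        "metadata": {},
    },
    "rubric": {
        "event": [
            {"id": "legality", "weight": 1.0, "description": "Actions are legal"}
        ],
        "outcome": [
            {"id": "task_completion", "weight": 1.0, "description": "Task is completed"}
        ],
    },
    "options": {
        "event": True,
        "outcome": True,
        "provider": "openai",
        "model": "gpt-4.1-mini",
    },
}

resp = requests.post(
    f"{BASE_URL}/api/graphs/verifiers/completions",
    headers=headers,
    json=payload,
    timeout=120,
)
resp.raise_for_status()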

Response

{
  "event_reviews": [],
  "outcome_review": {
    "total": 0.8,
    "reasoning": "..."
  },
  "event_totals": [],
  "normalized_event_rewards": [],
  "metadata": {}
}
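
Continuing the request sketch above, this snippet pulls the outcome score and per-event results out of the response body. The field names match the example; treating outcome_review as possibly absent is an assumption about optional fields, not documented behavior:

result = resp.json()

# Aggregate outcome score assigned by the verifier (0.8 in the example above).
outcome = result.get("outcome_review") or {}
outcome_score = outcome.get("total")
outcome_reasoning = outcome.get("reasoning")

# Per-event reviews and their normalized rewards (empty lists when event
# evaluation is disabled or the trace has no events).
event_reviews = result.get("event_reviews", [])
normalized_event_rewards = result.get("normalized_event_rewards", [])

print(f"outcome score: {outcome_score}")
print(f"events reviewed: {len(event_reviews)}")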

Using with eval jobs

If you enable verifier evaluation in an eval job, the backend can:
  • hydrate the trace automatically (from interceptor correlation IDs)
  • evaluate the trace against a rubric advertised by the task app via GET /inforubrics
  • fuse the task app reward (outcome_reward) and the verifier reward into a single final reward, per your configured weights (see the sketch below)
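
A minimal sketch of that fusion step, assuming a simple weighted average. The weight names, default values, and the 0.0-1.0 reward range are assumptions for illustration, not documented defaults:

def fuse_rewards(
    task_app_reward: float,        # outcome_reward returned by the task app
    verifier_reward: float,        # e.g. outcome_review["total"] from the verifier response
    task_app_weight: float = 0.5,  # illustrative weights; configure per job
    verifier_weight: float = 0.5,
) -> float:
    """Combine the task app and verifier rewards into a single final reward."""
    total_weight = task_app_weight + verifier_weight
    return (task_app_weight * task_app_reward + verifier_weight * verifier_reward) / total_weight

final_reward = fuse_rewards(task_app_reward=1.0, verifier_reward=0.8)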