synth_ai.sdk.graphs.completions
Alpha
Graph completions client for graph inference (policies, verifiers, RLM).
This module provides the client for running inference on trained graphs,
including policy graphs, verifier graphs, and Reasoning Language Models (RLM).
Classes
GraphTarget
GraphInfo
Metadata for a registered graph.
ListGraphsResponse
Response from list_graphs.
GraphCompletionsClient
Client for /api/graphs/completions with flexible graph targeting.
Methods:
list_graphs
Parameters:
- kind: Optional filter by graph kind ("policy", "verifier", "judge")
- limit: Maximum number of graphs to return (default: 50)
Returns:
- ListGraphsResponse with graphs list and total count
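As a minimal sketch (not the SDK's actual implementation), the client-side handling of these two parameters might look like the following; the function name and error message are illustrative assumptions:

```python
# Allowed values for the kind filter, per the docs above.
VALID_KINDS = {"policy", "verifier", "judge"}

def build_list_graphs_params(kind=None, limit=50):
    """Build query params for a list_graphs call; validates the kind filter."""
    if kind is not None and kind not in VALID_KINDS:
        raise ValueError(f"kind must be one of {sorted(VALID_KINDS)}, got {kind!r}")
    params = {"limit": limit}  # documented default of 50
    if kind is not None:
        params["kind"] = kind
    return params
```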
run
run_output
complete
Parameters:
- graph_id: Built-in graph name, GraphGen job_id, or snapshot UUID
- input_data: Graph-specific input data
- model: Optional model override
Returns:
- Graph output dictionary
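Of the three documented graph_id forms, only a snapshot UUID can be reliably detected client-side; a built-in name and a GraphGen job_id are both plain strings, so disambiguating them is left to the backend. A sketch under those assumptions:

```python
import uuid

def is_snapshot_uuid(graph_id: str) -> bool:
    """True if graph_id parses as a UUID (i.e. targets a snapshot)."""
    try:
        uuid.UUID(graph_id)
        return True
    except ValueError:
        return False

def build_complete_request(graph_id, input_data, model=None):
    """Assemble the request body for a complete call (illustrative shape)."""
    body = {"graph_id": graph_id, "input_data": input_data}
    if model is not None:
        body["model"] = model  # optional model override
    return body
```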
verify_with_rubric
Parameters:
- session_trace: V3 trace format
- rubric: Rubric with event/outcome criteria
- system_prompt: Optional custom system prompt
- user_prompt: Optional custom user prompt
- verifier_type: "single", "mapreduce", or "rlm" (auto-detects if None)
- options: Optional execution options (event, outcome, etc.)
- model: Optional model override
Returns:
- Verification result with event_reviews, outcome_review, etc.
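The verifier_type contract above (three allowed values, None meaning auto-detect) suggests a simple client-side check before the request is sent. A hypothetical sketch, with the function name and body shape assumed for illustration:

```python
# Allowed verifier types, per the docs above; None lets the backend auto-detect.
VERIFIER_TYPES = ("single", "mapreduce", "rlm")

def build_verify_with_rubric_body(session_trace, rubric, verifier_type=None,
                                  options=None, model=None):
    """Assemble a verify_with_rubric request body, validating verifier_type."""
    if verifier_type is not None and verifier_type not in VERIFIER_TYPES:
        raise ValueError(f"verifier_type must be one of {VERIFIER_TYPES} or None")
    body = {"session_trace": session_trace, "rubric": rubric}
    # Only include optional fields that were actually supplied.
    for key, value in (("verifier_type", verifier_type),
                       ("options", options),
                       ("model", model)):
        if value is not None:
            body[key] = value
    return body
```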
verify_fewshot
Parameters:
- session_trace: V3 trace format (validated using SessionTraceInput)
- calibration_examples: List of calibration examples, each with:
  - session_trace: V3 trace format
  - event_rewards: List[float] (0.0-1.0), one per event
  - outcome_reward: float (0.0-1.0)
- expected_score: Optional expected score for the trace being evaluated
- expected_rubric: Optional rubric/ground truth for the trace being evaluated
- system_prompt: Optional custom system prompt
- user_prompt: Optional custom user prompt
- verifier_type: "single", "mapreduce", or "rlm" (auto-detects if None)
- options: Optional execution options
- model: Optional model override
Returns:
- Verification result with event_reviews, outcome_review, etc.
Raises:
- ValueError: If calibration_examples are invalid (validated client-side)
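The docs say calibration_examples are validated client-side and raise ValueError when invalid. A sketch of what that validation could look like, using the documented field names and reward ranges (the error messages are illustrative, not the SDK's actual wording):

```python
def validate_calibration_examples(examples):
    """Check each calibration example against the documented shape."""
    if not examples:
        raise ValueError("calibration_examples must be a non-empty list")
    for i, ex in enumerate(examples):
        if "session_trace" not in ex:
            raise ValueError(f"calibration example {i}: missing session_trace")
        rewards = ex.get("event_rewards")
        if not isinstance(rewards, list) or not all(
            isinstance(r, (int, float)) and 0.0 <= r <= 1.0 for r in rewards
        ):
            raise ValueError(
                f"calibration example {i}: event_rewards must be floats in [0.0, 1.0]"
            )
        outcome = ex.get("outcome_reward")
        if not isinstance(outcome, (int, float)) or not 0.0 <= outcome <= 1.0:
            raise ValueError(
                f"calibration example {i}: outcome_reward must be a float in [0.0, 1.0]"
            )
```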
verify_contrastive
Parameters:
- session_trace: V3 trace format (the trace being evaluated)
- gold_examples: List of gold examples, each with:
  - summary: str (required, non-empty)
  - gold_score: float (0.0-1.0, required)
  - gold_reasoning: str (required, non-empty)
- candidate_score: Verifier's predicted score for this trace (0.0-1.0, what we're evaluating)
- candidate_reasoning: Verifier's reasoning for this score (what we're evaluating)
- expected_rubric: Optional rubric/ground truth for this trace
- system_prompt: Optional custom system prompt
- user_prompt: Optional custom user prompt
- verifier_type: "single", "mapreduce", or "rlm" (auto-detects if None)
- options: Optional execution options
- model: Optional model override
Returns:
- Verification result with event_reviews, outcome_review, etc.
Raises:
- ValueError: If gold_examples or candidate_score/reasoning are invalid (validated client-side)
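Again the validation is documented as client-side. A sketch of checks matching the documented constraints on gold_examples and the candidate fields (function name and messages are assumptions):

```python
def validate_contrastive_inputs(gold_examples, candidate_score, candidate_reasoning):
    """Check verify_contrastive inputs against the documented constraints."""
    if not isinstance(candidate_score, (int, float)) or not 0.0 <= candidate_score <= 1.0:
        raise ValueError("candidate_score must be a float in [0.0, 1.0]")
    if not candidate_reasoning or not candidate_reasoning.strip():
        raise ValueError("candidate_reasoning must be non-empty")
    for i, ex in enumerate(gold_examples):
        if not str(ex.get("summary", "")).strip():
            raise ValueError(f"gold example {i}: summary is required and non-empty")
        score = ex.get("gold_score")
        if not isinstance(score, (int, float)) or not 0.0 <= score <= 1.0:
            raise ValueError(f"gold example {i}: gold_score must be a float in [0.0, 1.0]")
        if not str(ex.get("gold_reasoning", "")).strip():
            raise ValueError(f"gold example {i}: gold_reasoning is required and non-empty")
```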
verify_with_prompts
Parameters:
- session_trace: V3 trace format
- system_prompt: Custom system prompt (required)
- user_prompt: Custom user prompt (required)
- verifier_type: "single", "mapreduce", or "rlm" (auto-detects if None)
- options: Optional execution options
- model: Optional model override
Returns:
- Verification result
rlm_inference
Parameters:
- query: The query/question to answer
- context: Large context (string or dict; can exceed 1M tokens)
- system_prompt: Optional custom system prompt
- user_prompt: Optional custom user prompt
- model: Model to use (must be RLM-capable; default: gpt-4o-mini)
- provider: Provider name (default: openai)
- options: Optional execution options (max_iterations, max_cost_usd, etc.)
Returns:
- RLM inference result with output, usage, metadata
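Since context may be either a string or a dict, a client plausibly normalizes it before sending. The sketch below assumes dicts are serialized to JSON with sorted keys; the function name and serialization choice are illustrative, not the SDK's documented behavior:

```python
import json

def normalize_rlm_context(context):
    """Normalize rlm_inference context (string or dict) to a string."""
    if isinstance(context, str):
        return context
    if isinstance(context, dict):
        # Assumed serialization: deterministic JSON via sorted keys.
        return json.dumps(context, sort_keys=True)
    raise TypeError("context must be a string or dict")
```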