The Python SDK exposes graph jobs (the Graphs API) through the GraphOptimizationJob class. It wraps the full workflow (see the condensed sketch after this list):
  • creating a training job from a dataset,
  • monitoring progress,
  • downloading the best prompt snapshot,
  • and running inference against the trained graph.
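Taken together, a typical run chains those steps. The following is a condensed sketch using the same example values as the sections below; each call is covered in detail later on this page.

from synth_ai.sdk import GraphOptimizationJob

# Condensed end-to-end sketch; see the sections below for each step
job = GraphOptimizationJob.from_dataset(
    "my_tasks.json",
    policy_model="gpt-4o-mini",
    rollout_budget=200,
    proposer_effort="medium",
)
job.submit()
job.stream_until_complete()             # block until training completes
prompt = job.download_prompt()          # best prompt snapshot
result = job.run_inference({"query": "Upgrade my plan"})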

Create from a dataset

Datasets are only supported for verifier graphs right now. Policy graph training via datasets is not yet supported.
from synth_ai.sdk import GraphOptimizationJob

job = GraphOptimizationJob.from_dataset(
    "my_tasks.json",
    policy_model="gpt-4o-mini",
    rollout_budget=200,
    proposer_effort="medium",
)
submit_result = job.submit()
print(submit_result.graph_gen_job_id)
from_dataset accepts:
  • a file path,
  • a raw dict (see the sketch below),
  • or a GraphEvolveTaskSet object.
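Passing a raw dict is equivalent to passing the path, just with the file loaded on your side. A minimal sketch, assuming the JSON file is already in the expected task-set format:

import json

from synth_ai.sdk import GraphOptimizationJob

# Load the dataset yourself and hand the raw dict to from_dataset
with open("my_tasks.json") as f:
    task_data = json.load(f)

job = GraphOptimizationJob.from_dataset(
    task_data,
    policy_model="gpt-4o-mini",
    rollout_budget=200,
)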

Monitor training

status = job.get_status()
print(status["status"], status.get("best_score"))
For live progress, use the streaming helper:
final_status = job.stream_until_complete()
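If you would rather poll yourself (for example inside an existing orchestration loop), a minimal sketch built only on get_status might look like this; the terminal status names here are assumptions, not guaranteed values:

import time

# Poll until the job reaches a terminal state
# ("succeeded" / "failed" / "cancelled" are illustrative status values)
while True:
    status = job.get_status()
    print(status["status"], status.get("best_score"))
    if status["status"] in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(30)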

Download the best prompt

prompt = job.download_prompt()
print(prompt)
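To keep the snapshot around for reuse or version control, you can write it to disk. This is only a sketch: the file names are arbitrary, and it handles both the case where download_prompt returns a string and the case where it returns structured data.

import json
from pathlib import Path

# Persist the best prompt snapshot locally
if isinstance(prompt, str):
    Path("best_prompt.txt").write_text(prompt)
else:
    Path("best_prompt.json").write_text(json.dumps(prompt, indent=2))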

Run inference

result = job.run_inference({"query": "Upgrade my plan"})
print(result["output"])
Optional inference args (example after this list):
  • model: override the policy model for this call.
  • graph_snapshot_id: run a specific snapshot instead of the best.
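A sketch of both options together, assuming they are passed as plain keyword arguments; the snapshot ID is illustrative:

result = job.run_inference(
    {"query": "Upgrade my plan"},
    model="gpt-4o",                     # override the policy model for this call
    graph_snapshot_id="snap_abc123",    # run a specific snapshot (illustrative ID)
)
print(result["output"])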

Run verifier (Verifier Graphs)

For graphs trained with graph_type="verifier", use run_verifier to evaluate execution traces:
# Pass a V3 trace dict or SessionTraceInput object
verification = job.run_verifier(session_trace)
print(f"Score: {verification.score}")
print(f"Reasoning: {verification.reasoning}")