Synth AI Graphs
Graphs are Synth AI’s abstraction for multi-node LLM workflows. Like task apps, graphs are first-class artifacts you can train, download, and serve in production.
What is a Graph?
A Synth AI graph is a directed graph of LLM calls and transformations. Each node can:
- Call an LLM with a specific prompt template
- Transform data between nodes
- Branch conditionally based on intermediate results
- Aggregate outputs from multiple paths
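To make this concrete, the sketch below shows one way such a graph could be represented in Python. The Node and Graph classes and their fields are illustrative assumptions, not the synth-ai data model.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

# Illustrative node/graph structures (assumptions, not the synth-ai data model).
@dataclass
class Node:
    name: str
    kind: str                                   # "llm_call", "transform", "branch", or "aggregate"
    prompt_template: str | None = None          # used by "llm_call" nodes
    fn: Callable[[Any], Any] | None = None      # used by transform/branch/aggregate nodes

@dataclass
class Graph:
    nodes: dict[str, Node] = field(default_factory=dict)
    edges: list[tuple[str, str]] = field(default_factory=list)  # directed (src -> dst)

    def add_node(self, node: Node) -> None:
        self.nodes[node.name] = node

    def add_edge(self, src: str, dst: str) -> None:
        self.edges.append((src, dst))

# Example: extract -> clean -> summarize.
g = Graph()
g.add_node(Node("extract", "llm_call", prompt_template="Extract key facts from: {input}"))
g.add_node(Node("clean", "transform", fn=lambda facts: [f.strip() for f in facts]))
g.add_node(Node("summarize", "llm_call", prompt_template="Summarize these facts: {facts}"))
g.add_edge("extract", "clean")
g.add_edge("clean", "summarize")
```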
Zero-Shot vs Optimized Graphs
Synth AI supports two ways to run graphs:
- Zero-Shot Graphs: Built-in, expert-designed architectures (Single, MapReduce, RLM) that you can use immediately with your own prompts or rubrics.
- Optimized Graphs: Custom graphs trained via Graph Evolve that have evolved their internal structure and prompts based on your specific dataset.
Graph Modes
Regardless of whether a graph is Zero-Shot or Optimized, it typically operates in one of two modes:
| Mode | Input | Output | Purpose |
|---|---|---|---|
| Policy | input (JSON) | output (JSON/Text) | General inference, QA, and reasoning. |
| Verifier | trace (V3) | score, reasoning | Verifying quality and assigning rewards. |
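The payloads below illustrate the two modes. The field names are assumptions meant to mirror the table, not the production schema.

```python
# Illustrative payloads for the two modes (field names are assumptions).

# Policy mode: arbitrary JSON in, JSON or text out.
policy_request = {"input": {"question": "What is the capital of France?"}}
policy_response = {"output": {"answer": "Paris"}}

# Verifier mode: a V3 trace (plus optional rubric) in, score and reasoning out.
verifier_request = {
    "trace": {"session_id": "sess_123", "events": []},  # abbreviated V3 trace
    "rubric": "Reward correct, well-grounded answers.",
}
verifier_response = {"score": 0.85, "reasoning": "Answer is correct and grounded in the trace."}
```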
The RLM Architecture
RLM (Recursive Language Model) is Synth AI’s most powerful architecture, designed for complex reasoning loops and massive context windows (1M+ tokens). Unlike standard linear chains, RLM can:
- Recursively decompose complex problems into smaller sub-tasks.
- Audit and refine its own intermediate reasoning steps.
- Scale with context, making it the primary choice for evaluating long agent trajectories or large documents.
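The loop below is a deliberately simplified sketch of that recursive pattern with a stubbed model call; it shows the control flow (decompose, solve, combine, audit), not the actual RLM implementation.

```python
# Simplified sketch of an RLM-style recursive loop (stubbed model call; not the real implementation).

def call_llm(prompt: str) -> str:
    """Stub standing in for a real model call."""
    return "ANSWER: (stubbed response)"

def solve(task: str, depth: int = 0, max_depth: int = 3) -> str:
    # Ask the model to answer directly or propose sub-tasks, one per line prefixed with 'SUB:'.
    plan = call_llm(f"Task: {task}\nAnswer directly, or list sub-tasks prefixed with 'SUB:'.")
    subtasks = [line[4:].strip() for line in plan.splitlines() if line.startswith("SUB:")]

    if not subtasks or depth >= max_depth:
        return call_llm(f"Answer this task directly: {task}")

    # Recursively solve each sub-task, then combine and audit the intermediate reasoning.
    partials = [solve(sub, depth + 1, max_depth) for sub in subtasks]
    draft = call_llm(f"Combine these partial results for '{task}':\n" + "\n".join(partials))
    return call_llm(f"Audit and refine this draft, fixing any reasoning errors:\n{draft}")

print(solve("Evaluate a long agent trajectory for tool-use errors"))
```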
Graph Shapes & Scalability
Synth AI graphs support three distinct architectures (shapes) to handle different scales of input:
| Shape | Purpose | Context Limit |
|---|---|---|
| single | Simple one-pass reasoning | 128k tokens |
| mapreduce | Summarization/Evaluation of long logs | 500k tokens |
| rlm | High-complexity tool-use and long-context reasoning | 1M+ tokens |
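As a rough rule of thumb, you can pick the shape from the estimated input size. The thresholds below come from the table above; the helper itself is just illustrative.

```python
# Illustrative helper: pick a graph shape from an estimated input size in tokens.
def choose_shape(estimated_tokens: int) -> str:
    if estimated_tokens <= 128_000:
        return "single"      # simple one-pass reasoning
    if estimated_tokens <= 500_000:
        return "mapreduce"   # summarization/evaluation of long logs
    return "rlm"             # 1M+ token, high-complexity reasoning

print(choose_shape(90_000))   # single
print(choose_shape(750_000))  # rlm
```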
Verifier Graphs
Verifier graphs evaluate execution traces and produce structured scores. At inference time, they take:
- A V3 trace - The execution trace from synth-ai tracing
- A rubric - Evaluation criteria defining what to score (optional if built into graph prompts)
A trained verifier graph:
- Matches human evaluation quality
- Runs on cheaper models (GPT-4o-mini, Groq)
- Provides consistent, calibrated scores
- Returns structured rewards (event-level and outcome-level)
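Concretely, a structured reward payload might look roughly like the sketch below; the field names are illustrative rather than the exact schema.

```python
# Illustrative shape of a verifier result with outcome-level and event-level rewards.
verifier_result = {
    "outcome": {
        "score": 0.8,
        "reasoning": "Task completed, but one tool call was redundant.",
    },
    "events": [
        {"event_id": "evt_001", "score": 1.0, "reasoning": "Correct tool selection."},
        {"event_id": "evt_002", "score": 0.4, "reasoning": "Redundant retry of the same query."},
    ],
}
```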
Using Verifiers in GEPA
Once you have a registered Verifier Graph, you can use it to score trajectories during GEPA prompt optimization by setting the synth_verifier_id in your config:
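A minimal sketch follows. Apart from synth_verifier_id itself, the surrounding keys and the exact config format are assumptions; check the GEPA reference for the real schema.

```python
# Sketch only: apart from synth_verifier_id, keys and structure are assumptions.
gepa_config = {
    "synth_verifier_id": "vg_0123456789abcdef",  # ID of your registered Verifier Graph
    # plus your usual GEPA settings (dataset, candidate models, budget, etc.)
}
```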
Creating Graphs
Graphs are created through optimization. You provide:
- A dataset - Examples of inputs and expected outputs
- Configuration - Graph type, structure constraints, models to use
- A budget - How much optimization to run
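For example, a tiny training setup could look like the sketch below; the field names are illustrative, not a documented format.

```python
# Illustrative optimization inputs: labeled examples, structural choices, and a budget.
dataset = [
    {"input": {"question": "What is 2 + 2?"}, "expected_output": {"answer": "4"}},
    {"input": {"question": "Capital of Japan?"}, "expected_output": {"answer": "Tokyo"}},
]
config = {"graph_type": "policy", "shape": "single", "models": ["gpt-4o-mini"]}
budget = {"max_rollouts": 200}  # how much optimization to run
```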
Using Graph Evolve (Recommended)
The simplest way to create graphs is through the Graph Evolve API:
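The snippet below is a hypothetical sketch of submitting a Graph Evolve job over HTTP; the endpoint, payload fields, and environment variable are assumptions, so consult the Graph Evolve reference for the real request format.

```python
# Hypothetical sketch: endpoint, payload fields, and env var are assumptions.
import os
import requests

payload = {
    "dataset": [{"input": {"question": "Capital of Japan?"}, "expected_output": {"answer": "Tokyo"}}],
    "config": {"graph_type": "policy", "shape": "single"},
    "budget": {"max_rollouts": 200},
}

resp = requests.post(
    "https://api.example.com/graph-evolve/jobs",  # placeholder URL; see the Graph Evolve docs
    headers={"Authorization": f"Bearer {os.environ.get('SYNTH_API_KEY', '')}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # typically a job ID to poll until the optimized graph is ready
```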
Related
- Graphs - High-level product API
- Graph Inference - Production serving
- Zero-Shot Verifiers - Built-in verifiers
- RLM - Recursive reasoning