GEPA (Genetic Evolution of Prompt Architectures) optimizes prompts through evolutionary search with LLM-guided mutations.
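At a high level, GEPA pairs a genetic search over prompt candidates with an LLM acting as the mutation operator. A minimal sketch of that loop follows; the `mutate` and `score` functions are toy stand-ins for illustration, not the synth-ai API:

```python
import random

def evolve(seed_prompt, mutate, score, generations=3, children=3):
    """Toy evolutionary loop: keep the fittest prompt, breed mutated children."""
    population = [seed_prompt]
    for _ in range(generations):
        parent = max(population, key=score)        # select the fittest parent
        offspring = [mutate(parent) for _ in range(children)]
        population = [parent] + offspring          # elitism: the parent survives
    return max(population, key=score)

# Stand-in operators: mutation appends a hint, score just rewards length.
# In GEPA proper, mutation is LLM-guided and score comes from task rollouts.
best = evolve(
    "Classify the intent.",
    mutate=lambda p: p + " " + random.choice(["Be concise.", "Think step by step."]),
    score=len,
)
print(best)
```

The real optimizer scores each candidate by running it against the task app and uses an LLM to propose targeted rewrites rather than random edits.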

1. Install the SDK

```shell
pip install synth-ai
```

2. Set Up Credentials

```shell
synth-ai setup
```

This fetches SYNTH_API_KEY and ENVIRONMENT_API_KEY and saves them to .env. Then add your inference provider key:

```shell
echo "GROQ_API_KEY=gsk_..." >> .env
```

3. Run GEPA

```shell
uv run main.py
```

This starts the task app, opens a tunnel, submits the job, and streams progress. A run typically takes 5-15 minutes.

Minimal Config

```toml
[prompt_learning]
algorithm = "gepa"
task_app_url = "http://127.0.0.1:8114"  # Auto-set by runner
task_app_id = "banking77"

[prompt_learning.initial_prompt]
id = "banking77"
name = "Banking77 Classification"

[[prompt_learning.initial_prompt.messages]]
role = "system"
pattern = "Classify the banking intent. {instructions}"
order = 0

[prompt_learning.gepa]
env_name = "banking77"

[prompt_learning.gepa.rollout]
budget = 100          # Total evaluations
max_concurrent = 20

[prompt_learning.gepa.evaluation]
seeds = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
validation_seeds = [10, 11, 12, 13, 14]

[prompt_learning.gepa.population]
initial_size = 10
num_generations = 5
children_per_generation = 3
```

Key Parameters

| Parameter | Purpose |
| --- | --- |
| `rollout.budget` | Total prompt evaluations (cost control) |
| `population.num_generations` | Evolution iterations |
| `evaluation.seeds` | Training dataset indices |
| `evaluation.validation_seeds` | Held-out validation indices |
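As a rough cost check, you can compare the evaluations implied by the population settings against `rollout.budget`. The accounting below (each candidate scored on every training seed) is a back-of-envelope assumption for this guide; the backend's exact counting may differ:

```python
# Values from the minimal config above.
initial_size = 10
num_generations = 5
children_per_generation = 3
num_seeds = 10  # len(evaluation.seeds)
budget = 100    # rollout.budget

# Assumed accounting: one evaluation per candidate per training seed.
candidates = initial_size + num_generations * children_per_generation
evaluations = candidates * num_seeds
print(f"{candidates} candidates imply up to {evaluations} evaluations vs budget {budget}")
```

When the implied evaluations exceed the budget, the budget acts as the hard cap on spend, so the search stops early rather than completing every generation.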

Get Results

```python
import os

from synth_ai.sdk import PolicyOptimizationJob

def run_and_get_results():
    # Create a job from the TOML config
    job = PolicyOptimizationJob.from_config(
        "train_cfg.toml",
        api_key=os.environ["SYNTH_API_KEY"],
        algorithm="gepa",
    )

    # Submit and wait for completion
    job.submit()
    result = job.poll_until_complete()

    print(f"Best Score: {result.best_score}")
    best_prompt = job.get_best_prompt_text(rank=1)
    print(best_prompt)

if __name__ == "__main__":
    run_and_get_results()
```
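Once the job completes, you will usually want to persist the winning prompt so later runs can load it without re-optimizing. A minimal sketch; the file path and JSON layout are choices made for this example, not prescribed by the SDK:

```python
import json
from pathlib import Path

def save_prompt(best_prompt: str, best_score: float, path: str = "best_prompt.json") -> Path:
    """Write the winning prompt and its score to a small JSON file."""
    out = Path(path)
    out.write_text(json.dumps({"prompt": best_prompt, "score": best_score}, indent=2))
    return out

# Example values standing in for result.best_score and the retrieved prompt text.
saved = save_prompt("Classify the banking intent. Be precise.", 0.92)
print(saved.read_text())
```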