Create Optimization Job

POST /api/v1/online/sessions
Create a new policy optimization job using GEPA or MIPRO.
If you’re using Python, the SDK handles config validation and streaming automatically; see the Python example below.
policy_optimization (object, required)
Policy optimization configuration.
from synth_ai.sdk.optimization import OfflineJob

job = await OfflineJob.create(
    algorithm="gepa",
    container_url="https://tunnel.example.com",
    policy={"model": "gpt-4o-mini", "provider": "openai"},
    gepa={"population": {"num_generations": 5}}
)

# Stream events
async for event in job.stream():
    print(event)
Response
{
  "session_id": "sess_abc123",
  "status": "running"
}

Example: GEPA Job

Request
{
  "policy_optimization": {
    "algorithm": "gepa",
    "container_url": "https://tunnel.example.com",
    "policy": {
      "model": "gpt-4o-mini",
      "provider": "openai"
    },
    "gepa": {
      "population_size": 8,
      "generations": 5,
      "evaluation": {
        "seeds": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
      }
    }
  }
}

Example: MIPRO Job

Request
{
  "policy_optimization": {
    "algorithm": "mipro",
    "container_url": "https://tunnel.example.com",
    "policy": {
      "model": "gpt-4o-mini",
      "provider": "openai"
    },
    "mipro": {
      "num_candidates": 10,
      "evaluation": {
        "seeds": [0, 1, 2, 3, 4]
      }
    }
  }
}
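If you are not using the SDK, either request body above can be submitted directly over HTTP. A minimal sketch using only the standard library, assuming the base URL and bearer authentication shown in the curl examples later in this page (the helper names `build_job_request` and `create_session` are illustrative, not part of the API):

```python
import json
import urllib.request

API_BASE = "https://api.usesynth.ai"  # base URL as used in the curl examples below


def build_job_request(algorithm: str, container_url: str,
                      policy: dict, params: dict) -> dict:
    """Assemble a policy_optimization request body for GEPA or MIPRO."""
    if algorithm not in ("gepa", "mipro"):
        raise ValueError(f"unsupported algorithm: {algorithm}")
    return {
        "policy_optimization": {
            "algorithm": algorithm,
            "container_url": container_url,
            "policy": policy,
            algorithm: params,  # algorithm-specific block keyed "gepa" or "mipro"
        }
    }


def create_session(body: dict, api_key: str) -> dict:
    """POST the job config to /api/v1/online/sessions and return the JSON response."""
    req = urllib.request.Request(
        f"{API_BASE}/api/v1/online/sessions",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

`build_job_request` mirrors the JSON shape of the two example requests above; the algorithm-specific settings go under a key matching the algorithm name.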

Get Job Status

GET /api/v1/online/sessions/{session_id}

Returns current job status and progress.
Response
{
  "session_id": "sess_abc123",
  "status": "running",
  "progress": {
    "generation": 3,
    "total_generations": 5,
    "best_reward": 0.87
  }
}
Status       Description
pending      Job queued, not yet started
running      Job is executing
paused       Job paused at checkpoint
completed    Job finished successfully
failed       Job encountered an error
cancelled    Job was cancelled
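The statuses above can drive a simple polling loop: `completed`, `failed`, and `cancelled` are terminal, so polling should stop there. A sketch (the polling interval is an arbitrary choice, and `get_status` stands in for a GET on the status endpoint above):

```python
import time

# Terminal statuses from the table above: the job makes no further progress.
TERMINAL_STATUSES = {"completed", "failed", "cancelled"}


def is_terminal(status: str) -> bool:
    """True once a session can make no further progress."""
    return status in TERMINAL_STATUSES


def wait_for_completion(get_status, interval_s: float = 5.0) -> str:
    """Poll get_status() until the session reaches a terminal status.

    get_status should return the current status string, e.g. by calling
    GET /api/v1/online/sessions/{session_id} and reading "status".
    """
    while True:
        status = get_status()
        if is_terminal(status):
            return status
        time.sleep(interval_s)
```

For long-running jobs, the SSE events endpoint below avoids polling entirely.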

Pause Session

PATCH /api/v1/online/sessions/{session_id}

Pause a running session.
curl -X PATCH "https://api.usesynth.ai/api/v1/online/sessions/{session_id}" \
  -H "Authorization: Bearer $SYNTH_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"state": "paused"}'
Response
{
  "status": "paused",
  "session_id": "sess_abc123"
}

Resume Session

PATCH /api/v1/online/sessions/{session_id}

Resume a paused session.
curl -X PATCH "https://api.usesynth.ai/api/v1/online/sessions/{session_id}" \
  -H "Authorization: Bearer $SYNTH_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"state": "running"}'
Response
{
  "status": "running",
  "session_id": "sess_abc123"
}
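The pause and resume calls differ only in the `state` value, so they can share one helper. A stdlib-only sketch mirroring the two curl examples above (the function name `set_session_state` is illustrative):

```python
import json
import urllib.request

API_BASE = "https://api.usesynth.ai"  # base URL from the curl examples above


def set_session_state(session_id: str, state: str, api_key: str) -> dict:
    """PATCH a session to 'paused' or 'running', the two states shown above."""
    if state not in ("paused", "running"):
        raise ValueError(f"unsupported state: {state}")
    req = urllib.request.Request(
        f"{API_BASE}/api/v1/online/sessions/{session_id}",
        data=json.dumps({"state": state}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="PATCH",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Calling `set_session_state("sess_abc123", "paused", api_key)` is equivalent to the pause curl command, and passing `"running"` resumes.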

Stream Job Events

GET /api/v1/online/sessions/{session_id}/events

Server-sent events (SSE) stream of job progress.
curl -N -H "Authorization: Bearer $SYNTH_API_KEY" \
  "https://api.usesynth.ai/api/v1/online/sessions/{session_id}/events"
Events include:
  • generation_complete — A generation finished
  • evaluation_complete — Evaluation of a prompt completed
  • job_complete — Job finished with best prompt
  • error — An error occurred
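Outside the SDK, the stream can be consumed with any SSE-capable client. A minimal parser sketch, assuming standard SSE framing (`event:` and `data:` fields, events separated by blank lines); the exact field layout of this stream is an assumption, and the event names come from the list above:

```python
import json


def parse_sse(stream_lines):
    """Yield (event, data) pairs from SSE-framed lines.

    Accumulates event:/data: fields and emits one pair per blank-line
    delimiter, JSON-decoding the data payload when present.
    """
    event, data_lines = None, []
    for line in stream_lines:
        line = line.rstrip("\n")
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and (event or data_lines):
            payload = "\n".join(data_lines)
            yield event, json.loads(payload) if payload else None
            event, data_lines = None, []
```

A consumer would typically dispatch on the event name, e.g. stop reading on `job_complete` or `error`.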

Get Job Results

GET /api/v1/online/sessions/{session_id}/results

Returns the best prompt and optimization history.
Response
{
  "session_id": "sess_abc123",
  "status": "completed",
  "best_prompt": {
    "messages": [
      {"role": "system", "content": "You are a helpful assistant..."}
    ],
    "reward": 0.94
  },
  "history": [
    {"generation": 1, "best_reward": 0.72},
    {"generation": 2, "best_reward": 0.85},
    {"generation": 3, "best_reward": 0.94}
  ]
}
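A results payload with the shape above is easy to condense into a summary, e.g. for logging. A sketch (the helper name `summarize_results` is illustrative):

```python
def summarize_results(results: dict) -> dict:
    """Condense a results payload (shape as in the response above).

    Reports the best prompt's reward, the number of generations run,
    and the reward improvement from the first to the last generation.
    """
    history = results.get("history", [])
    rewards = [entry["best_reward"] for entry in history]
    return {
        "final_reward": results["best_prompt"]["reward"],
        "generations": len(history),
        "improvement": round(rewards[-1] - rewards[0], 4) if rewards else 0.0,
    }
```

Applied to the example response above, this reports a final reward of 0.94 over 3 generations, an improvement of 0.22 from the first generation.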