Task apps are the relay between your environment and the Synth AI training infrastructure. They accept rollout requests, run your logic, and return rewards.

Required endpoints

Task apps must implement:
  • GET /health – liveness
  • GET /task_info – dataset metadata and schema
  • POST /rollout – run one seed and return reward
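A minimal skeleton wiring these up, assuming FastAPI (an assumption; any HTTP framework works). The handler bodies are stubs that the sections below fill in:

from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
def health():
    return {"status": "ok"}  # liveness probe

@app.get("/task_info")
def task_info():
    # Dataset metadata and schema; see the payload example below
    return {"id": "banking77", "splits": ["train", "validation"]}

@app.post("/rollout")
def rollout(request: dict):
    # Map request["seed"] to an example, run it, return the reward
    return {"metrics": {"mean_return": 0.0}}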

Multiple tasks are required

For eval/GEPA/MIPRO, task apps must be configured with multiple tasks and support deterministic selection via seed. This is how the optimizer evaluates many examples and compares candidates.
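In practice, selection is a pure function of the seed over a fixed-order dataset; dataset here is a hypothetical in-memory list:

# Fixed ordering makes the same seed always select the same example
example = dataset[seed % len(dataset)]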

Where task apps run

You can run task apps locally behind a tunnel, or deploy them to Synth Cloud. See SynthTunnel, Cloudflare, and Deploy to Synth for deployment details.

Task info payload (what the optimizer needs)

/task_info tells GEPA/eval what data exists and how to map seeds to examples. At minimum, include:
  • dataset identifiers and splits
  • input/output schema
  • task metadata needed to interpret examples
  • task list or dataset registry details
Minimal example:
{
  "id": "banking77",
  "splits": ["train", "validation"],
  "input_schema": {"text": "string"},
  "output_schema": {"label": "string"}
}

Rollout request (what you receive)

Each rollout targets a single seed and includes the inference URL:
{
  "seed": 42,
  "inputs": { "text": "..." },
  "policy_config": { "inference_url": "...", "model": "..." }
}
Your task app should:
  1. map the seed to a deterministic example
  2. run the prompt through policy_config.inference_url
  3. compute and return the reward
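A sketch of those three steps in one handler, assuming an OpenAI-compatible chat endpoint behind the interceptor; dataset and score() are hypothetical helpers, and httpx is an assumption:

import httpx

def run_rollout(request: dict) -> dict:
    example = dataset[request["seed"] % len(dataset)]  # 1. deterministic mapping
    cfg = request["policy_config"]
    # Naive join for brevity; see "Example URL handling" below for a
    # query-param-safe way to build this URL
    resp = httpx.post(
        f'{cfg["inference_url"]}/chat/completions',
        json={"model": cfg["model"],
              "messages": [{"role": "user", "content": example["text"]}]},
        timeout=120,
    )
    prediction = resp.json()["choices"][0]["message"]["content"]  # 2. run the prompt
    return {"metrics": {"mean_return": score(prediction, example)}}  # 3. reward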

Rollout response (what you return)

Return a reward under metrics.mean_return:
{
  "metrics": { "mean_return": 0.84 },
  "info": { "tests_passed": 12 },
  "artifacts": { "stdout": "...", "diff": "..." }
}

Auth and inference routing

Task apps sit between Synth and your environment, so auth and routing must be consistent across all rollouts.

Task app auth (inbound)

  • Task app endpoints require X-API-Key with the ENVIRONMENT_API_KEY
  • This key is injected by Synth for hosted apps, or set locally for dev
  • Do not accept raw provider keys from clients
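A sketch of the inbound check as a FastAPI dependency (FastAPI is an assumption; the header and env var names come from the bullets above):

import os
from fastapi import Depends, Header, HTTPException

def require_api_key(x_api_key: str = Header(...)):
    # Synth injects this key for hosted apps; set it yourself for local dev
    if x_api_key != os.environ["ENVIRONMENT_API_KEY"]:
        raise HTTPException(status_code=401, detail="invalid API key")

# Attach to protected routes, e.g.:
# @app.post("/rollout", dependencies=[Depends(require_api_key)])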

LLM routing (outbound)

  • Always send LLM calls to policy_config.inference_url (the interceptor URL)
  • The interceptor injects provider auth and captures traces for GEPA/eval
  • Preserve query params on inference_url when appending paths
  • Do not append an extra /v1 if the URL already includes it
Example URL handling:
# inference_url may already include /v1 and query params (e.g. ...?cid=trace_abc)
from urllib.parse import urlsplit, urlunsplit

parts = urlsplit(policy_config["inference_url"])
path = parts.path.rstrip("/")
if not path.endswith("/chat/completions"):
    path += "/chat/completions"  # append without duplicating the path
url = urlunsplit(parts._replace(path=path))  # query params preserved

What not to do

  • Do not call OpenAI/Anthropic directly from task apps
  • Do not embed provider API keys in task app code or env
  • Do not strip query params from inference_url