Task Apps don’t need to be in Python. You can implement them in any language that can serve HTTP requests and make LLM calls.

Why Non-SDK Local API?

  • Use your preferred language - No need to rewrite existing code in Python
  • Better performance - Compiled languages can be faster for CPU-intensive tasks
  • Smaller deployments - Single binaries with no runtime dependencies
  • Existing codebases - Integrate directly with your current infrastructure
  • No Python required - Start optimization jobs via API calls

How It Works

┌─────────────────┐         ┌──────────────────┐
│  GEPA           │  HTTP   │  Your Task App   │
│  Optimizer      │ ──────> │  (any language)  │
│                 │         │                  │
│  Proposes new   │         │  Evaluates the   │
│  prompts        │ <────── │  prompt, returns │
│                 │  reward │  reward          │
└─────────────────┘         └──────────────────┘
The optimizer calls your /rollout endpoint with candidate prompts, and you return a reward indicating how well each prompt performed.
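
A minimal Go sketch of that exchange (assuming a single-step rollout; scorePrompt is a hypothetical stand-in for your evaluation logic, and error handling is trimmed):
package main

import (
	"encoding/json"
	"net/http"
)

// scorePrompt is a placeholder: run the candidate prompt against the
// dataset example selected by seed and return a reward in [0.0, 1.0].
func scorePrompt(prompt string, seed int) float64 { return 0.0 }

func main() {
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`{"status":"ok"}`))
	})
	http.HandleFunc("/rollout", func(w http.ResponseWriter, r *http.Request) {
		var req struct {
			Env struct {
				Seed int `json:"seed"`
			} `json:"env"`
			Policy struct {
				Config struct {
					InferenceURL   string `json:"inference_url"`
					PromptTemplate string `json:"prompt_template"`
				} `json:"config"`
			} `json:"policy"`
		}
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		// Evaluate the candidate prompt and report the reward back.
		reward := scorePrompt(req.Policy.Config.PromptTemplate, req.Env.Seed)
		json.NewEncoder(w).Encode(map[string]any{
			"metrics": map[string]any{"reward_mean": reward},
		})
	})
	http.ListenAndServe(":8001", nil)
}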

Managed Deploy (Cloud)

You can deploy a polyglot task app by packaging it into a Dockerfile + context directory and using the managed deploy API. The backend builds and hosts it, then returns a proxy task_app_url you can use for eval/optimization jobs. See LocalAPI Overview for examples.
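
A minimal sketch of such a context directory's Dockerfile, assuming a statically compiled Go task app (the binary name and base images are illustrative):
# Stage 1: compile a static binary
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /synth-task-app .

# Stage 2: ship a minimal runtime image
FROM gcr.io/distroless/static
COPY --from=build /synth-task-app /synth-task-app
EXPOSE 8001
ENTRYPOINT ["/synth-task-app"]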

The Contract

All Task Apps implement the same OpenAPI contract, regardless of language.
Required Endpoints:
  • GET /health - Health check (unauthenticated OK)
  • POST /rollout - Evaluate a prompt (authenticated)
Optional Endpoints:
  • GET /task_info - Dataset metadata (authenticated)
Key Request Fields:
  • env.seed - Dataset index
  • policy.config.inference_url - LLM endpoint
  • policy.config.prompt_template - The prompt to evaluate
Key Response Fields:
  • metrics.reward_mean - Reward (0.0-1.0) that drives optimization
  • trajectories[].steps[].reward - Per-step reward
See the full OpenAPI specification for complete details.
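
If you would rather hand-write types than run a generator (see below), the key fields map to Go structs like these (a sketch limited to the fields listed above; the full contract defines more):
package taskapp

// RolloutRequest covers only the key request fields listed above.
type RolloutRequest struct {
	RunID string `json:"run_id"`
	Env   struct {
		Seed int `json:"seed"` // dataset index
	} `json:"env"`
	Policy struct {
		Config struct {
			Model          string `json:"model"`
			InferenceURL   string `json:"inference_url"`   // LLM endpoint
			PromptTemplate string `json:"prompt_template"` // the prompt to evaluate
		} `json:"config"`
	} `json:"policy"`
	Mode string `json:"mode"` // e.g. "eval"
}

// RolloutResponse carries the reward that drives optimization.
type RolloutResponse struct {
	Metrics struct {
		RewardMean float64 `json:"reward_mean"` // 0.0-1.0
	} `json:"metrics"`
	Trajectories []struct {
		Steps []struct {
			Reward float64 `json:"reward"` // per-step reward
		} `json:"steps"`
	} `json:"trajectories"`
}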

Accessing the Contract

Via CLI

# View the contract
synth contracts show task-app

# Get the file path for code generators
synth contracts path task-app

Direct Download

curl -O https://raw.githubusercontent.com/synth-laboratories/synth-ai/main/synth_ai/contracts/task_app.yaml

Generate Types

# Rust
openapi-generator generate -i task_app.yaml -g rust -o ./types

# Go
openapi-generator generate -i task_app.yaml -g go -o ./types

# TypeScript
openapi-generator generate -i task_app.yaml -g typescript-axios -o ./types

Authentication

Task Apps involve two separate authentication flows:

1. Task App Authentication (X-API-Key)

Requests to your task app from the optimizer include an X-API-Key header:
export ENVIRONMENT_API_KEY=your-secret-key
Your task app should verify X-API-Key matches ENVIRONMENT_API_KEY.
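
For example, as Go middleware (a sketch; the constant-time comparison avoids leaking the key through response timing):
package taskapp

import (
	"crypto/subtle"
	"net/http"
	"os"
)

// requireAPIKey rejects any request whose X-API-Key header does not match
// the ENVIRONMENT_API_KEY the task app was started with.
func requireAPIKey(next http.Handler) http.Handler {
	expected := os.Getenv("ENVIRONMENT_API_KEY")
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		got := r.Header.Get("X-API-Key")
		if expected == "" || subtle.ConstantTimeCompare([]byte(got), []byte(expected)) != 1 {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}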

2. LLM API Authentication (Authorization: Bearer)

When your task app makes requests to OpenAI/Groq/etc:
export OPENAI_API_KEY=sk-...    # or
export GROQ_API_KEY=gsk_...
Important: The X-API-Key header from the optimizer is for task app auth only - do NOT forward it to the LLM API.
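
Concretely, attach your own provider key to outbound LLM calls and never copy the inbound header (a Go sketch; the /chat/completions path assumes an OpenAI-compatible endpoint):
package taskapp

import (
	"bytes"
	"net/http"
	"os"
)

// callLLM sends a chat-completions request to the inference_url supplied by
// the optimizer, authenticated with your own provider key from the environment.
func callLLM(inferenceURL string, payload []byte) (*http.Response, error) {
	req, err := http.NewRequest("POST", inferenceURL+"/chat/completions", bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	// Your key, e.g. OPENAI_API_KEY or GROQ_API_KEY depending on the provider.
	req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))
	// Deliberately no X-API-Key here: that header is for task app auth only.
	return http.DefaultClient.Do(req)
}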

Running Optimization (No Python Required)

Start optimization jobs from the dashboard or with a small Python wrapper script. First, start your task app and expose it via a tunnel:
# 1. Start your task app
ENVIRONMENT_API_KEY=my-secret ./synth-task-app

# 2. Expose via tunnel
cloudflared tunnel --url http://localhost:8001
Then submit jobs using a Python wrapper script:
# submit_job.py - Wrapper script for polyglot users
import asyncio
import os

from synth_ai.sdk import PromptLearningClient

async def submit_job(task_app_url: str, task_app_api_key: str) -> str:
    client = PromptLearningClient(api_key=os.environ["SYNTH_API_KEY"])
    job = await client.create_job(config={
        "algorithm": "gepa",
        "task_app_url": task_app_url,
        "task_app_api_key": task_app_api_key,
    })
    await client.start_job(job["id"])
    return job["id"]

if __name__ == "__main__":
    # Use your tunneled task app URL and its ENVIRONMENT_API_KEY value
    job_id = asyncio.run(submit_job("https://<your-tunnel>.trycloudflare.com", "my-secret"))
    print(f"Started job: {job_id}")
Run the wrapper: python submit_job.py

Language Implementations

Performance Comparison

Language     Binary Size   Dependencies   Startup Time        Cross-Compile
Rust         ~5-10MB       Some           Fast (~50ms)        Yes (via rustup)
Go           ~8-12MB       None           Very Fast (~10ms)   Yes (built-in)
TypeScript   N/A (Node)    Many           Medium (~200ms)     N/A
Zig          ~1-5MB        None           Very Fast (~10ms)   Yes (trivial)
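
Go's built-in cross-compilation, for instance, is just environment variables (the binary name is illustrative):
# Build a Linux/amd64 binary from macOS or Windows
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o synth-task-app .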

Debugging Tips

Testing Locally

# Health check
curl http://localhost:8001/health

# Manual rollout
curl -X POST http://localhost:8001/rollout \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-secret" \
  -d '{
    "run_id": "test-1",
    "env": {"seed": 0},
    "policy": {
      "config": {
        "model": "gpt-4o-mini",
        "inference_url": "https://api.openai.com/v1"
      }
    },
    "mode": "eval"
  }'

Common Issues

  1. 404 errors from LLM endpoint: Check URL construction with query parameters (see the sketch after this list)
  2. Authentication failures: Verify X-API-Key matches ENVIRONMENT_API_KEY
  3. Missing rewards: Ensure reward field is present in each step
  4. Tool call parsing: Extract predictions from tool_calls or content correctly
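
For issue 1, the usual culprit is naive string concatenation that drops or mangles query parameters already present on inference_url. A Go sketch of safe joining:
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// joinInferenceURL appends a path to inference_url while preserving any
// query parameters it already carries.
func joinInferenceURL(inferenceURL, path string) (string, error) {
	u, err := url.Parse(inferenceURL)
	if err != nil {
		return "", err
	}
	u.Path = strings.TrimRight(u.Path, "/") + path // u.RawQuery stays intact
	return u.String(), nil
}

func main() {
	out, _ := joinInferenceURL("https://api.openai.com/v1?cid=abc", "/chat/completions")
	fmt.Println(out) // https://api.openai.com/v1/chat/completions?cid=abc
}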