Usage
A successful smoke test verifies that:
- Task app is reachable and responding
- Rollout endpoints return valid data
- Inference URL routing works correctly
- Trace correlation IDs are properly propagated
Using with RL Configs (Recommended)
You can add a `[smoke]` section to your RL config TOML file to configure smoke testing. This enables auto-start features and sets test defaults, making smoke tests as simple as:
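A typical invocation might look like the following; the config filename is a placeholder, and the exact flag form is as documented in the CLI options below:

```bash
synth-ai smoke --config my_rl_config.toml
```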
The `[smoke]` section is only used by the `smoke` CLI command and is completely ignored by the RL trainer; it will not affect your training jobs.
Auto-Start Features
The `[smoke]` section can automatically start required services in the background:
- ✅ Task App Server: Auto-starts your task app with proper port management
- ✅ sqld Server: Auto-starts sqld for tracing with configurable ports
- ✅ Automatic Cleanup: Stops all background services when smoke test completes
Full Configuration Example
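A complete `[smoke]` section exercising every documented field might look like the following; all values are illustrative placeholders, and the field names come from the reference tables below:

```toml
[smoke]
# Auto-start: task app
task_app_name = "my-task-app"        # placeholder; see `synth-ai task-app list`
task_app_port = 8765
task_app_env_file = ".env"
task_app_force = false

# Auto-start: sqld (tracing)
sqld_auto_start = true
sqld_db_path = "./traces/local.db"
sqld_hrana_port = 8080
sqld_http_port = 8081

# Test parameters
env_name = "my-env"                  # placeholder
policy_name = "react"
max_steps = 3
policy = "mock"
model = "gpt-5-nano"
mock_backend = "synthetic"
mock_port = 0
return_trace = false
use_mock = true
```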
Quick Start Examples
1. Full Auto (Zero Setup): configure everything in the `[smoke]` section.
`[smoke]` Section Reference
All fields in the `[smoke]` section are optional. CLI arguments override TOML values.
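The precedence rule (CLI flags override TOML values, which override built-in defaults) can be sketched as a simple dict merge. This is an illustration of the rule, not the actual synth-ai implementation; the defaults shown are taken from the tables below.

```python
# Illustrative sketch of "CLI overrides TOML overrides defaults" precedence.
# Not the actual synth-ai implementation.

DEFAULTS = {"task_app_port": 8765, "policy_name": "react", "max_steps": 3}

def resolve_settings(toml_smoke: dict, cli_args: dict) -> dict:
    """Merge settings: built-in defaults < [smoke] TOML < CLI flags."""
    merged = dict(DEFAULTS)
    # Unset (None) values fall through to the next layer down.
    merged.update({k: v for k, v in toml_smoke.items() if v is not None})
    merged.update({k: v for k, v in cli_args.items() if v is not None})
    return merged

settings = resolve_settings(
    toml_smoke={"max_steps": 5, "policy_name": "react"},
    cli_args={"max_steps": 10},  # --max-steps 10 wins over TOML's 5
)
print(settings["max_steps"])      # 10
print(settings["task_app_port"])  # 8765 (built-in default, unset elsewhere)
```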
Auto-Start: Task App
Configure automatic task app startup:

| Field | Type | Default | Description |
|---|---|---|---|
| `task_app_name` | str | None | Task app name (from `synth-ai task-app list`) |
| `task_app_port` | int | 8765 | Port to run task app on |
| `task_app_env_file` | str | None | Path to .env file (e.g., `.env`) |
| `task_app_force` | bool | false | Kill existing process on port before starting |
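For example, the task app fields above might be set like this (the app name is a placeholder):

```toml
[smoke]
task_app_name = "my-task-app"   # placeholder; see `synth-ai task-app list`
task_app_port = 8765
task_app_env_file = ".env"
task_app_force = true           # free the port if something is already bound to it
```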
Auto-Start: sqld (Tracing)
Configure automatic sqld server startup:

| Field | Type | Default | Description |
|---|---|---|---|
| `sqld_auto_start` | bool | false | Auto-start sqld server |
| `sqld_db_path` | str | `./traces/local.db` | Database file path |
| `sqld_hrana_port` | int | 8080 | Hrana WebSocket port |
| `sqld_http_port` | int | 8081 | HTTP API port |
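A minimal sqld auto-start configuration using the fields above (values shown are the documented defaults, with auto-start switched on):

```toml
[smoke]
sqld_auto_start = true
sqld_db_path = "./traces/local.db"
sqld_hrana_port = 8080
sqld_http_port = 8081
```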
Test Parameters
Configure smoke test behavior:

| Field | Type | Default | Description |
|---|---|---|---|
| `task_url` | str | None | Task app URL (overridden by auto-start) |
| `env_name` | str | None | Environment name |
| `policy_name` | str | `react` | Policy name |
| `max_steps` | int | 3 | Number of agent/env step pairs |
| `policy` | str | None | Inference preset: `mock`, `gpt-5-nano`, `openai`, `groq` |
| `model` | str | `gpt-5-nano` | Model ID for inference |
| `mock_backend` | str | `synthetic` | Mock backend: `synthetic` or `openai` |
| `mock_port` | int | 0 | Mock server port (0 = auto) |
| `return_trace` | bool | false | Request v3 trace in response |
| `use_mock` | bool | true | Use local mock inference server |
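A test-parameter block using a subset of the fields above might look like this; the environment name is a placeholder:

```toml
[smoke]
env_name = "my-env"          # placeholder
max_steps = 3
policy = "mock"
model = "gpt-5-nano"
use_mock = true
mock_backend = "synthetic"   # deterministic tool calls
return_trace = true
```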
CLI Options
Core Options
- `--url URL` - Task app base URL (default: `$TASK_APP_URL` or `http://localhost:8765`)
- `--api-key KEY` - Environment API key for X-API-Key header (default: `$ENVIRONMENT_API_KEY`)
- `--env-name NAME` - Environment name to roll out (auto-detected if possible)
- `--policy-name NAME` - Policy name to pass to task app (default: `react`)
- `--model MODEL` - Model ID to route in inference payload (default: `gpt-5-nano`)
- `--max-steps N` - Number of agent/env step pairs (default: `3`)
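Putting the core options together, a fully explicit invocation might look like this (the environment name is a placeholder):

```bash
synth-ai smoke \
  --url http://localhost:8765 \
  --env-name my-env \
  --policy-name react \
  --model gpt-5-nano \
  --max-steps 3
```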
Inference Options
- `--policy PRESET` - Inference route preset: `mock`, `gpt-5-nano`, `openai`, or `groq`
- `--inference-url URL` - Override inference URL (default: Synth API chat completions)
Mock Server Options
- `--use-mock/--no-mock` - Use local mock inference server (default: `true`)
- `--mock-backend BACKEND` - Mock backend: `synthetic` (deterministic) or `openai` (passthrough) (default: `synthetic`)
- `--mock-port PORT` - Port for local mock inference server; 0 = auto-assign (default: `0`)
Rollout Options
- `--rollouts N` - Number of rollouts, using seeds 0..N-1 (default: `1`)
- `--batch-size N` - Alias for `--rollouts`
- `--group-size N` - Completions per seed to emulate GRPO grouping (default: `1`)
- `--parallel N` - Run this many rollouts concurrently to emulate a train step (0 = sequential, default: `0`)
Configuration Options
- `--config PATH` - RL TOML config to derive URL/env/model and load the `[smoke]` section
- `--env-file PATH` - Path to .env file to load before running
- `--return-trace` - Request v3 trace in response if supported
Examples
Basic smoke test
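A minimal run against a locally running task app might look like this (the environment name is a placeholder):

```bash
synth-ai smoke --url http://localhost:8765 --env-name my-env
```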
Using a config file
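With a config file, the URL, environment, model, and `[smoke]` defaults are all derived from the TOML (filename is a placeholder):

```bash
synth-ai smoke --config my_rl_config.toml
```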
Test with multiple rollouts in parallel
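Using the rollout options documented above, this runs four rollouts (seeds 0..3) concurrently to emulate a train step:

```bash
synth-ai smoke --config my_rl_config.toml --rollouts 4 --parallel 4
```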
Use real OpenAI instead of mock
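One way to route to real OpenAI, assuming the flags behave as documented, is to disable the local mock and select the `openai` preset:

```bash
synth-ai smoke --config my_rl_config.toml --no-mock --policy openai
```

Alternatively, `--mock-backend openai` keeps the local mock server in place but has it proxy requests to OpenAI (requires `$OPENAI_API_KEY`).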
Override config settings via CLI
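Since CLI arguments take precedence over TOML values, individual settings can be overridden without editing the config file:

```bash
synth-ai smoke --config my_rl_config.toml --max-steps 10 --model gpt-5-nano
```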
Smoke Section Reference
All fields in the `[smoke]` section are optional. CLI arguments take precedence over TOML values.

| Field | Type | Description | Default |
|---|---|---|---|
| `task_url` | string | Task app base URL | `$TASK_APP_URL` or `http://localhost:8765` |
| `env_name` | string | Environment name | Auto-detected |
| `policy_name` | string | Policy name | `react` |
| `max_steps` | int | Number of steps | 3 |
| `policy` | string | Inference preset: `mock`, `gpt-5-nano`, `openai`, `groq` | None |
| `model` | string | Model ID | `gpt-5-nano` |
| `mock_backend` | string | Mock backend: `synthetic` or `openai` | `synthetic` |
| `mock_port` | int | Mock server port (0 = auto) | 0 |
| `return_trace` | bool | Request v3 trace | false |
| `use_mock` | bool | Use local mock server | true |
Notes
- The smoke command automatically starts a local libSQL/sqld instance for tracing if needed
- When `use_mock=true`, a local FastAPI mock server emulates GPT-5-Nano
- In `synthetic` mode, the mock returns deterministic tool calls for reproducible testing
- In `openai` mode, the mock proxies requests to OpenAI (requires `$OPENAI_API_KEY`)
- The `[smoke]` section is validated by Pydantic but has no effect on training jobs