Language Models

The synth-ai language model system provides a unified interface for working with multiple AI providers, including OpenAI, Anthropic, Google Gemini, and others.

Core Classes

LM

The main LM class provides a standardized interface for language model interactions:
from synth_ai.lm.core.main_v3 import LM

# Initialize with any supported model
lm = LM(model_name="gpt-4o-mini", temperature=0.7)

# Generate responses
response = lm.respond("Hello, world!")
print(response.raw_response)

Key Features:
  • Multi-provider support: OpenAI, Anthropic, Google, local models
  • Automatic retries: Built-in error handling and retry logic
  • Caching: Request/response caching for efficiency (sketched after this list)
  • Structured outputs: Type-safe response parsing
  • Session tracing: Integrated with synth-ai’s tracing system
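
Of these, caching is the easiest to see in action. A minimal sketch, assuming caching is active for the instance (the exact toggle, if any, lives on the LM class rather than in this overview):
lm = LM(model_name="gpt-4o-mini", temperature=0.7)

# The first call hits the provider; with caching active, the identical
# second call can be served from the local request/response cache.
first = lm.respond("Name three prime numbers.")
second = lm.respond("Name three prime numbers.")

print(first.raw_response)
print(second.raw_response)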

Generated Documentation

Detailed API documentation is available for:
  • LM Class - Main language model interface
  • BaseTool - Tool interface for function calling
  • VendorBase - Base class for vendor implementations

Vendor Support

The system supports multiple AI providers through a plugin architecture; the same LM interface covers all of them, as the sketch after this list shows:
  • OpenAI: GPT-4, GPT-3.5, and other OpenAI models
  • Anthropic: Claude models with tool support
  • Google: Gemini Pro and other Google AI models
  • Local: Ollama and other local model servers
  • Third-party: Together AI, Groq, DeepSeek, and others
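
Switching providers is usually just a matter of changing model_name. In the sketch below, the OpenAI and Anthropic identifiers come from this document's own examples, while the Gemini identifier is an assumption and may differ in your installation:
from synth_ai.lm.core.main_v3 import LM

# Same interface across providers; routing follows the model name.
openai_lm = LM(model_name="gpt-4o-mini", temperature=0.7)
anthropic_lm = LM(model_name="claude-3-sonnet-20240229", temperature=0.7)
gemini_lm = LM(model_name="gemini-1.5-flash", temperature=0.7)  # assumed identifier

for lm in (openai_lm, anthropic_lm, gemini_lm):
    print(lm.respond("Say hello in five words.").raw_response)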

Configuration

Models can be configured through environment variables or direct instantiation:
# Environment-based configuration
import os
os.environ["OPENAI_API_KEY"] = "your-key-here"

# Direct configuration
lm = LM(
    model_name="claude-3-sonnet-20240229",
    temperature=0.7,
    max_tokens=1000
)
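
Hard-coding keys as above is for illustration only; a common pattern is to keep them in a .env file and load them at startup. A minimal sketch using python-dotenv (a separate package, not a synth-ai requirement; the ANTHROPIC_API_KEY variable name is an assumption, so check your vendor's docs):
# .env file contents:
#   OPENAI_API_KEY=sk-...
#   ANTHROPIC_API_KEY=...  # variable name assumed

from dotenv import load_dotenv
from synth_ai.lm.core.main_v3 import LM

load_dotenv()  # populates os.environ from .env before LM reads the keys

lm = LM(model_name="gpt-4o-mini", temperature=0.7)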

Structured Outputs

The LM system supports structured outputs using Pydantic models:
from pydantic import BaseModel

class Response(BaseModel):
    answer: str
    confidence: float

response = lm.respond_structured(
    message="What is 2+2?",
    response_model=Response
)
print(response.answer)  # "4"
print(response.confidence)  # 0.95
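
Nested and list-valued fields should work the same way, assuming respond_structured accepts arbitrary Pydantic models (only the flat Response example above is confirmed by this document; this is a sketch):
from typing import List

from pydantic import BaseModel

class Step(BaseModel):
    order: int
    description: str

class Plan(BaseModel):
    goal: str
    steps: List[Step]

# The parsed response is an instance of the requested model,
# so nested fields are regular typed attributes.
plan = lm.respond_structured(
    message="Plan a three-step morning routine.",
    response_model=Plan
)
for step in plan.steps:
    print(f"{step.order}. {step.description}")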