Tracing Decorators

Tracing decorators provide a simple way to automatically trace function calls and add telemetry to your AI applications without modifying the core logic.

Overview

The decorator system offers:
  • Automatic Instrumentation: Trace function entry/exit
  • Parameter Capture: Log function arguments and return values
  • Error Tracking: Capture exceptions and stack traces
  • Performance Monitoring: Measure execution time
  • Session Integration: Works with SessionTracer
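The capabilities above can be illustrated with a stripped-down decorator. This is a self-contained sketch of the general pattern (entry/exit logging, timing, and exception capture), not synth_ai's actual implementation:

```python
import functools
import time

def traced(fn):
    """Illustrative tracing decorator: logs entry/exit, measures
    execution time, and records exceptions before re-raising them."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print(f"enter {fn.__name__}")
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        except Exception as exc:
            # Error tracking: capture the exception, then re-raise
            print(f"{fn.__name__} raised {type(exc).__name__}: {exc}")
            raise
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"exit {fn.__name__} after {elapsed_ms:.2f} ms")
    return wrapper

@traced
def add(a, b):
    return a + b

add(2, 3)  # logs entry, exit, and duration; returns 5
```

Because the wrapper uses functools.wraps, the decorated function keeps its original name and docstring, which matters when decorators are stacked.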

Core Decorators

@with_session

Ensures a tracing session is active:
from synth_ai.tracing_v3.decorators import with_session

@with_session(require=True)
def my_ai_function():
    # This function requires an active session
    # Will raise error if no session is active
    pass

@with_session(require=False)  
def optional_tracing_function():
    # This function works with or without a session
    # Tracing is optional
    pass
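One way to picture the require flag is a guard that checks for an active session before the function runs. The sketch below is hypothetical: `_current_session` is an invented stand-in for synth_ai's session lookup, not its real internals:

```python
import functools

# Hypothetical stand-in for the library's active-session lookup.
_current_session = None

def with_session(require=True):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if _current_session is None and require:
                # require=True: refuse to run without an active session
                raise RuntimeError(
                    f"{fn.__name__} requires an active tracing session"
                )
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@with_session(require=False)
def optional():
    # require=False: runs normally; tracing is simply skipped
    return "ran without a session"

optional()
```

With require=True and no session, the guard raises before the wrapped function body ever executes.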

@trace_llm_call

Automatically traces LLM calls:
from synth_ai.tracing_v3.decorators import trace_llm_call

@trace_llm_call
def call_language_model(prompt: str) -> str:
    # `lm` is an LM client configured elsewhere (see the integration example)
    return lm.respond(prompt)

# Usage - automatically traced when session is active
result = call_language_model("Hello!")

@trace_method

For general method tracing:
from synth_ai.tracing_v3.decorators import trace_method

@trace_method(capture_args=True)
def process_user_input(user_message: str) -> str:
    # Function logic here
    return "processed: " + user_message

# Usage
result = process_user_input("Hello!")
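A sketch of what capture_args=True could record. This is an illustrative re-implementation, not synth_ai's trace_method, and the in-memory `TRACE_LOG` is invented for the example (a real tracer streams records to storage):

```python
import functools

TRACE_LOG = []  # illustrative in-memory sink, invented for this sketch

def trace_method(capture_args=False):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            record = {"name": fn.__name__}
            if capture_args:
                # Parameter capture: store positional and keyword arguments
                record["args"] = args
                record["kwargs"] = kwargs
            result = fn(*args, **kwargs)
            record["return"] = result
            TRACE_LOG.append(record)
            return result
        return wrapper
    return decorator

@trace_method(capture_args=True)
def process_user_input(user_message):
    return "processed: " + user_message

process_user_input("Hello!")
# TRACE_LOG now holds one record with the call's args and return value
```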

Integration Example

Complete example with SessionTracer:
from synth_ai.tracing_v3.session_tracer import SessionTracer
from synth_ai.tracing_v3.decorators import with_session, trace_llm_call
from synth_ai.lm.core.main_v3 import LM

# Initialize components
tracer = SessionTracer()
lm = LM(model_name="gpt-4o-mini", session_tracer=tracer)

@with_session(require=True)
@trace_llm_call
def chat_with_ai(user_message: str) -> str:
    """Chat function with automatic tracing."""
    response = lm.respond(user_message)
    return response.raw_response

# Usage
tracer.start_session("chat-demo")
try:
    result = chat_with_ai("What's the weather like?")
    print(result)
finally:
    tracer.end_session()

Performance Considerations

  • Minimal Overhead: Decorators add only a thin wrapper around each call
  • Conditional Tracing: Tracing logic runs only while a session is active
  • Memory Efficient: Trace data is streamed to storage rather than accumulated in memory
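The conditional-tracing point can be sketched as a fast-path check: when no session is running, the wrapper falls through with a single boolean test of overhead. Both `_session_active` and the per-function span list are invented for this sketch and are not synth_ai's actual design:

```python
import functools
import time

_session_active = False  # hypothetical flag flipped by session start/end

def maybe_traced(fn):
    """Trace only while a session is running; otherwise the wrapper
    costs one boolean check per call."""
    spans = []
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        if not _session_active:
            return fn(*args, **kwargs)  # fast path: no bookkeeping
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        spans.append((fn.__name__, time.perf_counter() - start))
        return result
    wrapper.spans = spans
    return wrapper

@maybe_traced
def work():
    return 42

work()                  # no session: nothing recorded
_session_active = True
work()                  # session active: one span recorded
```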