@observe Decorator
The @observe decorator is the simplest way to instrument your agent. Add it to any function — sync or async — and every call is automatically traced with input, output, timing, and errors.
Basic Usage
```python
from twosignal import TwoSignal, observe

ts = TwoSignal()

@observe
def my_agent(query: str) -> str:
    context = retrieve_docs(query)
    response = call_llm(query, context)
    return response
```
How It Works
- If no active trace exists, `@observe` creates a new trace
- If a trace already exists (e.g., from a parent `@observe` call), it creates a child span
- Function arguments are captured as span input
- The return value is captured as span output
- Start/end time and duration are recorded automatically
- If the function raises an exception, the error is recorded and the exception is re-raised
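The create-or-nest behavior above can be sketched with `contextvars`. This is a minimal stand-in, not the actual `twosignal` internals — all names here (`observe_sketch`, `SPANS`, the span dict shape) are illustrative:

```python
import contextvars
import functools
import time

# collected spans and the "current span" context (stand-ins for SDK internals)
SPANS = []
_current_span = contextvars.ContextVar("current_span", default=None)

def observe_sketch(fn):
    """Sketch: a root call starts a trace; nested calls become child spans."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        parent = _current_span.get()
        span = {
            "name": fn.__qualname__,
            "parent": parent["name"] if parent else None,  # None => root span
            "input": {"args": args, "kwargs": kwargs},
            "start": time.perf_counter(),
        }
        token = _current_span.set(span)  # children see this span as parent
        try:
            result = fn(*args, **kwargs)
            span["output"] = result
            return result
        except Exception as exc:
            span["status"] = "ERROR"
            span["error_message"] = str(exc)
            raise  # exception propagates to the caller unchanged
        finally:
            span["duration"] = time.perf_counter() - span["start"]
            _current_span.reset(token)
            SPANS.append(span)
    return wrapper

@observe_sketch
def agent(query):
    return search(query)

@observe_sketch
def search(query):
    return f"results for {query}"

print(agent("llm tracing"))
```

Because `contextvars` is both thread-local and task-local, the same mechanism lets nesting work across sync and async call stacks.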
Options
| Parameter | Type | Default | Description |
|---|---|---|---|
| `name` | str | Function name | Custom span name |
| `span_type` | SpanType | CUSTOM | Span type (AGENT, LLM, TOOL, etc.) |
| `metadata` | dict | None | Static metadata attached to every span |
| `tags` | list | None | Tags for the trace (only applies to root spans) |
Custom Name
By default, the span name is the function's qualified name. Override it:
```python
@observe(name="customer-support-v2")
def handle_query(query: str) -> str:
    ...
```
Custom Span Type
```python
from twosignal.types import SpanType

@observe(span_type=SpanType.AGENT)
def my_agent(query):
    ...

@observe(span_type=SpanType.TOOL)
def search_database(query):
    ...

@observe(span_type=SpanType.RETRIEVAL)
def fetch_context(query):
    ...
```
Metadata and Tags
```python
@observe(
    name="support-agent",
    span_type=SpanType.AGENT,
    metadata={"version": "2.1", "team": "support"},
    tags=["production", "v2"],
)
def support_agent(query: str) -> str:
    ...
```
Metadata is attached to the span. Tags are attached to the trace (root span only) and can be used for filtering in the dashboard.
Nested Traces
Decorated functions nest automatically. The outer function creates the trace; inner functions create child spans:
```python
@observe(span_type=SpanType.AGENT)
def agent(query):
    docs = search(query)
    return generate(query, docs)

@observe(span_type=SpanType.RETRIEVAL)
def search(query):
    return vector_db.query(query)

@observe(span_type=SpanType.LLM)
def generate(query, context):
    return llm.chat(query, context)
```
This produces a trace tree:
```
agent (AGENT)
├── search (RETRIEVAL)
└── generate (LLM)
```
Async Support
@observe works with async functions out of the box:
```python
@observe
async def async_agent(query: str) -> str:
    context = await retrieve_docs(query)
    response = await call_llm(query, context)
    return response
```
Combining with Wrappers
LLM wrapper spans nest inside @observe spans automatically:
```python
from twosignal.wrappers.openai import wrap_openai
from openai import OpenAI

client = wrap_openai(OpenAI())

@observe(span_type=SpanType.AGENT)
def my_agent(query):
    # this LLM call becomes a child span with model, tokens, cost
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": query}],
    )
    return response.choices[0].message.content
```
Result:
```
my_agent (AGENT)
└── openai.chat.completions.create(gpt-4o) (LLM)
```
Error Handling
If a decorated function raises an exception, the span records the error and re-raises it:
```python
@observe
def risky_step(data):
    if not data:
        raise ValueError("No data provided")
    return process(data)

# the ValueError propagates normally, but the span shows:
#   status: ERROR
#   error_message: "No data provided"
```
Performance
The decorator adds approximately 50–100 microseconds of overhead per call (for capturing arguments and setting up context). The actual event flush happens in the background thread, so your function's execution time is unaffected.
For extremely hot loops (100k+ calls/sec), consider using ts.span() selectively instead of decorating every function.
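If you want to check what wrapping costs in your own environment, you can benchmark a decorator directly. The sketch below uses a no-op stand-in (`noop_trace`, illustrative only) rather than the real SDK, so it measures only the generic wrapping-and-capture pattern, not `@observe` itself — expect a trivial stand-in like this to be cheaper than a real tracer:

```python
import functools
import time

def noop_trace(fn):
    # stand-in decorator: touches args and the return value the way a
    # tracer would, but records nothing; isolates pure wrapping overhead
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        _inputs = (args, kwargs)      # argument capture
        result = fn(*args, **kwargs)
        _output = result              # output capture
        return result
    return wrapper

def step(x):
    return x + 1

traced_step = noop_trace(step)

N = 100_000
t0 = time.perf_counter()
for i in range(N):
    step(i)
plain = time.perf_counter() - t0

t0 = time.perf_counter()
for i in range(N):
    traced_step(i)
traced = time.perf_counter() - t0

overhead_us = (traced - plain) / N * 1e6
print(f"per-call wrapper overhead: ~{overhead_us:.2f} µs")
```

Run the same loop against your actual decorated functions to see the real per-call cost before deciding whether a hot path needs `ts.span()` instead.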