# Auto-Instrument LLM Calls
2Signal provides thin wrappers around popular LLM provider clients. Wrap the client once and every call made through it is traced automatically — no manual span creation required. The wrappers capture the model name, input messages, output text, token usage (prompt, completion, and total), and estimated cost.
## Python + OpenAI
```python
import twosignal
from openai import OpenAI

client = twosignal.TwoSignal(api_key="your-api-key")
openai_client = twosignal.wrap_openai(OpenAI())

@client.observe(name="my-agent", span_type="AGENT")
def ask(question: str) -> str:
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

result = ask("What is 2Signal?")
client.flush()
```

## Python + Anthropic
```python
import twosignal
from anthropic import Anthropic

client = twosignal.TwoSignal(api_key="your-api-key")
anthropic_client = twosignal.wrap_anthropic(Anthropic())

@client.observe(name="my-agent", span_type="AGENT")
def ask(question: str) -> str:
    response = anthropic_client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=[{"role": "user", "content": question}],
    )
    return response.content[0].text

result = ask("What is 2Signal?")
client.flush()
```

## TypeScript + OpenAI
```typescript
import { TwoSignal, wrapOpenAI, observe } from "twosignal";
import OpenAI from "openai";

const ts = new TwoSignal({ apiKey: "your-api-key" });
const openai = wrapOpenAI(new OpenAI());

const ask = observe({ name: "my-agent", spanType: "AGENT" }, async (question: string) => {
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: question }],
  });
  return response.choices[0].message.content;
});

await ask("What is 2Signal?");
ts.flush();
```

## What Gets Captured
Every wrapped LLM call produces a span with the following fields populated automatically:
| Field | Description |
|---|---|
| `model` | Model name (e.g., `gpt-4o`, `claude-sonnet-4-20250514`) |
| `input` | Input messages array sent to the provider |
| `output` | Generated text or full response object |
| `usage.prompt_tokens` | Number of input tokens |
| `usage.completion_tokens` | Number of output tokens |
| `usage.total_tokens` | Sum of prompt and completion tokens |
| `cost` | Estimated cost in USD based on model pricing |
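The `cost` field is derived from the captured token counts and per-model pricing. A minimal sketch of that arithmetic, using placeholder per-million-token rates (not 2Signal's actual pricing table):

```python
# Illustrative only: how an estimated USD cost can be computed from token
# usage. The rates below are placeholders, not real provider pricing.
PRICING_PER_1M = {
    "gpt-4o": {"prompt": 2.50, "completion": 10.00},  # hypothetical USD rates
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated cost in USD for one LLM call."""
    rates = PRICING_PER_1M[model]
    return (
        prompt_tokens * rates["prompt"]
        + completion_tokens * rates["completion"]
    ) / 1_000_000

# 1,000 prompt tokens + 500 completion tokens at the placeholder rates
print(estimate_cost("gpt-4o", prompt_tokens=1000, completion_tokens=500))
```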
## Other Supported Providers
The Python SDK ships with wrappers for many additional providers. Each follows the same pattern — wrap the client and all calls are traced automatically.
The TypeScript SDK supports OpenAI, Anthropic, and Vercel AI SDK.
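To make the wrap-once pattern concrete, here is a self-contained sketch of the underlying idea — a proxy that records every call made through the wrapped client. This mirrors the concept behind the SDK's wrappers, not their actual implementation; `FakeClient` and the span dictionaries are stand-ins for illustration.

```python
# Sketch of the wrap-once idea: a proxy that intercepts method calls on the
# wrapped client and records a span for each one. Not 2Signal's real code.
class TracingWrapper:
    def __init__(self, client, spans):
        self._client = client
        self._spans = spans  # list that collects one dict per call

    def __getattr__(self, name):
        attr = getattr(self._client, name)
        if not callable(attr):
            return attr

        def traced(*args, **kwargs):
            result = attr(*args, **kwargs)
            self._spans.append({"method": name, "kwargs": kwargs})
            return result

        return traced

class FakeClient:
    """Stand-in for a provider client with a single `create` method."""
    def create(self, model=None, messages=None):
        return {"output": "hi", "usage": {"total_tokens": 3}}

spans = []
wrapped = TracingWrapper(FakeClient(), spans)
wrapped.create(model="gpt-4o", messages=[{"role": "user", "content": "hello"}])
print(spans[0]["method"])   # the intercepted call name
```

Because interception happens at the client boundary, application code calls the wrapped client exactly as it would the original — which is why no manual span creation is needed.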
## What's Next
Now that your LLM calls are traced, set up evaluators to automatically score every response: Evaluate Agent Outputs.