TypeScript SDK
The 2Signal TypeScript SDK instruments your Node.js AI agents with automatic tracing, token counting, and cost tracking. It requires Node.js 18 or later and includes wrappers for OpenAI, Anthropic, and the Vercel AI SDK.
Installation
```bash
npm install twosignal
```
Quick Start
```typescript
import { TwoSignal, observe } from "twosignal";
import OpenAI from "openai";
import { wrapOpenAI } from "twosignal/openai";

const ts = new TwoSignal(); // reads TWOSIGNAL_API_KEY from env
const client = wrapOpenAI(new OpenAI());

const supportAgent = observe(async (query: string) => {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: query }],
  });
  return response.choices[0].message.content;
}, { name: "support-agent", spanType: "AGENT" });

const answer = await supportAgent("What is your refund policy?");
await ts.shutdown();
```
This creates a trace with two spans: an AGENT span for support-agent and a child LLM span for the OpenAI call, with model, tokens, cost, input, and output captured automatically.
Initialization
```typescript
import { TwoSignal, getInstance } from "twosignal";

const ts = new TwoSignal({
  apiKey: "ts_...",       // or TWOSIGNAL_API_KEY env var
  baseUrl: "https://...", // or TWOSIGNAL_BASE_URL env var
  enabled: true,          // set false to disable all tracing
  flushInterval: 1000,    // milliseconds between batch flushes
  maxBatchSize: 100,      // max events per batch
});

// the constructor sets the global instance automatically
const instance = getInstance();
```
The SDK is a singleton: create only one instance. It starts a background interval that batches and flushes events; the interval is cleared on shutdown.
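The batch-and-flush behaviour can be pictured with a small model. This is an illustrative sketch of the semantics described above, not the SDK's internals; `EventBatcher` and its names are made up for this example:

```typescript
// Illustrative model of interval-based event batching (NOT the SDK's
// actual implementation): events accumulate in a queue and are sent in
// chunks of at most maxBatchSize each time flush() runs.
type Event = Record<string, unknown>;

class EventBatcher {
  private queue: Event[] = [];

  constructor(
    private send: (batch: Event[]) => Promise<void>,
    private maxBatchSize = 100,
  ) {}

  enqueue(event: Event): void {
    this.queue.push(event);
  }

  // Drain the queue, sending one HTTP-request-sized batch at a time.
  async flush(): Promise<void> {
    while (this.queue.length > 0) {
      const batch = this.queue.splice(0, this.maxBatchSize);
      await this.send(batch);
    }
  }
}
```

A background `setInterval(() => batcher.flush(), flushInterval)` plus a `clearInterval` on shutdown completes the picture.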
Configuration
| Parameter | Env Var | Default | Description |
|---|---|---|---|
| apiKey | TWOSIGNAL_API_KEY | — | Your project API key |
| baseUrl | TWOSIGNAL_BASE_URL | http://localhost:3000 | API endpoint |
| enabled | — | true | Enable/disable all tracing |
| flushInterval | — | 1000 | Milliseconds between background flushes |
| maxBatchSize | — | 100 | Max events per HTTP request |
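The precedence implied by the table (explicit option, then environment variable, then default) can be sketched as follows. `resolveConfig` is illustrative only, not an SDK export:

```typescript
// Illustrative config resolution mirroring the table's precedence and
// defaults; NOT the SDK's actual code.
interface TwoSignalOptions {
  apiKey?: string;
  baseUrl?: string;
  enabled?: boolean;
  flushInterval?: number;
  maxBatchSize?: number;
}

function resolveConfig(
  opts: TwoSignalOptions,
  env: Record<string, string | undefined> = process.env,
) {
  return {
    apiKey: opts.apiKey ?? env.TWOSIGNAL_API_KEY,
    baseUrl: opts.baseUrl ?? env.TWOSIGNAL_BASE_URL ?? "http://localhost:3000",
    enabled: opts.enabled ?? true,
    flushInterval: opts.flushInterval ?? 1000,
    maxBatchSize: opts.maxBatchSize ?? 100,
  };
}
```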
Creating Spans
```typescript
const span = ts.span({
  name: "retrieve-docs",
  spanType: "RETRIEVAL",
  input: { query },
  metadata: { db: "pinecone", topK: 5 },
});

const result = await span.run(async (s) => {
  const docs = await vectorDb.search(query);
  s.output = docs;
  s.metadata = { resultCount: docs.length };
  return docs;
});
```
You can set these fields on the span inside run():
| Field | Description |
|---|---|
| s.output | Span output (any JSON-serializable value) |
| s.metadata | Arbitrary metadata object |
| s.model | Model name (for LLM spans) |
| s.modelParameters | { temperature, max_tokens, top_p } |
| s.usage | { promptTokens, completionTokens, totalTokens } |
| s.cost | Cost in USD (number) |
| s.status | "OK" or "ERROR" |
| s.errorMessage | Error description string |
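Taken together, the mutable fields correspond to a shape roughly like this. This interface is assumed from the table above for illustration; it is not the SDK's exported type:

```typescript
// Illustrative shape of the fields settable inside run() (assumed from
// the table above; NOT the SDK's actual exported type).
interface SpanFields {
  output?: unknown;
  metadata?: Record<string, unknown>;
  model?: string;
  modelParameters?: { temperature?: number; max_tokens?: number; top_p?: number };
  usage?: { promptTokens: number; completionTokens: number; totalTokens: number };
  cost?: number;
  status?: "OK" | "ERROR";
  errorMessage?: string;
}

// Example: fields a hand-built LLM span might set.
const fields: SpanFields = {
  model: "gpt-4o-mini",
  usage: { promptTokens: 120, completionTokens: 40, totalTokens: 160 },
  status: "OK",
};
```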
Observe Decorator
Wrap any function with observe to automatically create a span for each call:
```typescript
import { observe } from "twosignal";

// wrap a named function
const traced = observe(myFunction, {
  name: "custom-name",
  spanType: "TOOL",
});

// or pass options first
const traced2 = observe({ name: "custom-name" }, myFunction);

// the function name is used as the default span name
const traced3 = observe(myFunction);

const result = await traced(input);
```
Context Propagation
The SDK uses Node.js AsyncLocalStorage to propagate trace context through async call chains. Nested spans automatically become children of their parent:
```typescript
const outer = ts.span({ name: "outer", spanType: "AGENT" });
await outer.run(async () => {
  // this span automatically becomes a child of "outer"
  const inner = ts.span({ name: "inner", spanType: "TOOL" });
  await inner.run(async () => {
    // nested correctly — same trace ID, parent set automatically
  });
});
```
Provider Wrappers
OpenAI
```typescript
import OpenAI from "openai";
import { wrapOpenAI } from "twosignal/openai";

const client = wrapOpenAI(new OpenAI());

// automatically traced — captures model, tokens, cost, input, output
const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
  temperature: 0.7,
  max_tokens: 100,
});
```
Anthropic
```typescript
import Anthropic from "@anthropic-ai/sdk";
import { wrapAnthropic } from "twosignal/anthropic";

const client = wrapAnthropic(new Anthropic());

// automatically traced (non-streaming only)
const message = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello" }],
});
```
Vercel AI SDK
```typescript
import { generateText, streamText } from "ai";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { traceGenerateText, traceStreamText } from "twosignal/vercel-ai";

const tracedGenerate = traceGenerateText(generateText);
const result = await tracedGenerate({
  model: openai("gpt-4o"),
  prompt: "Hello",
});

const tracedStream = traceStreamText(streamText);
const stream = await tracedStream({
  model: anthropic("claude-sonnet-4-20250514"),
  messages: [{ role: "user", content: "Hello" }],
});
```
Span Types
| Type | Use for | Example |
|---|---|---|
| AGENT | Top-level agent function | Your main agent entrypoint |
| LLM | LLM API calls | OpenAI, Anthropic, etc. |
| TOOL | Tool / function calls | Web search, calculator, API call |
| RETRIEVAL | RAG / vector search | Pinecone, Chroma, Weaviate |
| CHAIN | Pipeline steps | Prompt template, output parser |
| CUSTOM | Everything else | Default if not specified |
Cost Calculation
```typescript
import { calculateCost } from "twosignal";

const cost = calculateCost("gpt-4o", 100, 50);
// returns number | null (null if model not in pricing table)
```
Provider wrappers call this automatically. Supports OpenAI, Anthropic, Google Gemini, Mistral, Cohere, and Groq models.
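As a mental model, a per-model pricing lookup along these lines is plausible. The function, table, and prices below are placeholders for illustration, not the SDK's actual rates or implementation:

```typescript
// Illustrative per-model pricing lookup (placeholder prices, NOT the
// SDK's real table). Prices are USD per 1M tokens.
const PRICING: Record<string, { input: number; output: number }> = {
  "example-model": { input: 2.5, output: 10 }, // placeholder values
};

function calculateCostSketch(
  model: string,
  promptTokens: number,
  completionTokens: number,
): number | null {
  const p = PRICING[model];
  if (!p) return null; // unknown model -> null, as documented
  return (promptTokens * p.input + completionTokens * p.output) / 1_000_000;
}
```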
Lifecycle
```typescript
// force-flush all pending events
await ts.flush();

// graceful shutdown — flushes and stops the background interval
await ts.shutdown();
```
In long-running servers (Express, Fastify, Next.js API routes), you typically don't need to call shutdown(). For scripts and one-off jobs, call it before exit to ensure all events are sent.