TypeScript SDK

The 2Signal TypeScript SDK instruments your Node.js AI agents with automatic tracing, token counting, and cost tracking. It requires Node.js 18 or later and includes wrappers for OpenAI, Anthropic, and the Vercel AI SDK.

Installation

npm install twosignal

Quick Start

import { TwoSignal, observe } from "twosignal";
import OpenAI from "openai";
import { wrapOpenAI } from "twosignal/openai";

const ts = new TwoSignal(); // reads TWOSIGNAL_API_KEY from env
const client = wrapOpenAI(new OpenAI());

const supportAgent = observe(async (query: string) => {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: query }],
  });
  return response.choices[0].message.content;
}, { name: "support-agent", spanType: "AGENT" });

const answer = await supportAgent("What is your refund policy?");

await ts.shutdown();

This creates a trace with two spans: an AGENT span for support-agent and a child LLM span for the OpenAI call — with model, tokens, cost, input, and output captured automatically.

Initialization

import { TwoSignal, getInstance } from "twosignal";

const ts = new TwoSignal({
  apiKey: "ts_...",            // or TWOSIGNAL_API_KEY env var
  baseUrl: "https://...",      // or TWOSIGNAL_BASE_URL env var
  enabled: true,               // set false to disable all tracing
  flushInterval: 1000,         // milliseconds between batch flushes
  maxBatchSize: 100,           // max events per batch
});

// the constructor sets the global instance automatically
const instance = getInstance();

The SDK is a singleton — only one instance should be created. It starts a background interval that batches and flushes events. The interval is cleared on shutdown.
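The batching model described above can be sketched like this. Names (`BatchQueue`, `enqueue`, `send`) are illustrative placeholders, not the SDK's actual internals:

```typescript
type TraceEvent = { name: string };

// Events accumulate in a queue; a background interval flushes them in
// batches of at most maxBatchSize, and shutdown() drains whatever is left.
class BatchQueue {
  private queue: TraceEvent[] = [];
  private timer?: ReturnType<typeof setInterval>;

  constructor(
    private maxBatchSize: number,
    private send: (batch: TraceEvent[]) => void,
  ) {}

  start(flushInterval: number) {
    this.timer = setInterval(() => this.flush(), flushInterval);
  }

  enqueue(event: TraceEvent) {
    this.queue.push(event);
  }

  flush() {
    while (this.queue.length > 0) {
      // remove up to maxBatchSize events and hand them to the sender
      this.send(this.queue.splice(0, this.maxBatchSize));
    }
  }

  shutdown() {
    if (this.timer) clearInterval(this.timer); // stop background flushes
    this.flush();                              // drain remaining events
  }
}
```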

Configuration

Parameter      Env Var              Default                Description
apiKey         TWOSIGNAL_API_KEY    —                      Your project API key
baseUrl        TWOSIGNAL_BASE_URL   http://localhost:3000  API endpoint
enabled        —                    true                   Enable/disable all tracing
flushInterval  —                    1000                   Milliseconds between background flushes
maxBatchSize   —                    100                    Max events per HTTP request

Creating Spans

const span = ts.span({
  name: "retrieve-docs",
  spanType: "RETRIEVAL",
  input: { query },
  metadata: { db: "pinecone", topK: 5 },
});

const result = await span.run(async (s) => {
  const docs = await vectorDb.search(query);
  s.output = docs;
  s.metadata = { resultCount: docs.length };
  return docs;
});

You can set these fields on the span inside run():

Field              Description
s.output           Span output (any JSON-serializable value)
s.metadata         Arbitrary metadata object
s.model            Model name (for LLM spans)
s.modelParameters  { temperature, max_tokens, top_p }
s.usage            { promptTokens, completionTokens, totalTokens }
s.cost             Cost in USD (number)
s.status           "OK" or "ERROR"
s.errorMessage     Error description string
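The shapes of these fields can be sketched as TypeScript types. This mirrors the table above but is an illustration, not the SDK's published type definitions:

```typescript
interface Usage {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
}

// Illustrative field shapes for a span, matching the table above.
interface SpanFields {
  output?: unknown;
  metadata?: Record<string, unknown>;
  model?: string;
  modelParameters?: { temperature?: number; max_tokens?: number; top_p?: number };
  usage?: Usage;
  cost?: number;
  status?: "OK" | "ERROR";
  errorMessage?: string;
}

// Example: fields for a hand-instrumented LLM span.
const fields: SpanFields = {
  model: "gpt-4o-mini",
  modelParameters: { temperature: 0.2, max_tokens: 256 },
  usage: { promptTokens: 120, completionTokens: 48, totalTokens: 168 },
  status: "OK",
};
```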

Observe Decorator

Wrap any function with observe to automatically create a span for each call:

import { observe } from "twosignal";

// wrap a named function
const tracedTool = observe(myFunction, {
  name: "custom-name",
  spanType: "TOOL",
});

// or pass options first
const tracedNamed = observe({ name: "custom-name" }, myFunction);

// the function's own name is used as the default span name
const tracedDefault = observe(myFunction);

const result = await tracedDefault(input);

Context Propagation

The SDK uses Node.js AsyncLocalStorage to propagate trace context through async call chains. Nested spans automatically become children of their parent:

const outer = ts.span({ name: "outer", spanType: "AGENT" });
await outer.run(async () => {
  // this span automatically becomes a child of "outer"
  const inner = ts.span({ name: "inner", spanType: "TOOL" });
  await inner.run(async () => {
    // nested correctly — same trace ID, parent set automatically
  });
});
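The mechanism behind this can be sketched with AsyncLocalStorage directly. The `withSpan` helper here is illustrative, not the SDK's internal implementation: a child reads its parent's context from the store, reuses the trace ID, and records the parent's span ID:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";
import { randomUUID } from "node:crypto";

interface SpanCtx { traceId: string; spanId: string }

const storage = new AsyncLocalStorage<SpanCtx>();

// Run fn inside a new span context. If a parent context exists on the
// store, the child inherits its traceId; otherwise a new trace starts.
function withSpan<T>(fn: (ctx: SpanCtx, parent?: SpanCtx) => T): T {
  const parent = storage.getStore();
  const ctx: SpanCtx = {
    traceId: parent?.traceId ?? randomUUID(),
    spanId: randomUUID(),
  };
  return storage.run(ctx, () => fn(ctx, parent));
}
```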

Provider Wrappers

OpenAI

import OpenAI from "openai";
import { wrapOpenAI } from "twosignal/openai";

const client = wrapOpenAI(new OpenAI());

// automatically traced — captures model, tokens, cost, input, output
const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
  temperature: 0.7,
  max_tokens: 100,
});

Anthropic

import Anthropic from "@anthropic-ai/sdk";
import { wrapAnthropic } from "twosignal/anthropic";

const client = wrapAnthropic(new Anthropic());

// automatically traced (non-streaming only)
const message = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello" }],
});

Vercel AI SDK

import { generateText, streamText } from "ai";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { traceGenerateText, traceStreamText } from "twosignal/vercel-ai";

const tracedGenerate = traceGenerateText(generateText);
const result = await tracedGenerate({
  model: openai("gpt-4o"),
  prompt: "Hello",
});

const tracedStream = traceStreamText(streamText);
const stream = await tracedStream({
  model: anthropic("claude-sonnet-4-20250514"),
  messages: [{ role: "user", content: "Hello" }],
});

Span Types

Type       Use for                   Example
AGENT      Top-level agent function  Your main agent entrypoint
LLM        LLM API calls             OpenAI, Anthropic, etc.
TOOL       Tool / function calls     Web search, calculator, API call
RETRIEVAL  RAG / vector search       Pinecone, Chroma, Weaviate
CHAIN      Pipeline steps            Prompt template, output parser
CUSTOM     Everything else           Default if not specified

Cost Calculation

import { calculateCost } from "twosignal";

const cost = calculateCost("gpt-4o", 100, 50);
// returns number | null (null if model not in pricing table)

Provider wrappers call this automatically. Supports OpenAI, Anthropic, Google Gemini, Mistral, Cohere, and Groq models.
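Under the hood this is per-token arithmetic against a pricing table. A minimal sketch, with a placeholder model name and placeholder rates (not the SDK's actual pricing data):

```typescript
// USD per 1M tokens; "example-model" and its rates are placeholders.
const pricing: Record<string, { input: number; output: number }> = {
  "example-model": { input: 2.5, output: 10 },
};

// Mirrors calculateCost's contract: null when the model is not priced.
function costFor(
  model: string,
  promptTokens: number,
  completionTokens: number,
): number | null {
  const p = pricing[model];
  if (!p) return null;
  return (promptTokens * p.input + completionTokens * p.output) / 1_000_000;
}
```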

Lifecycle

// force-flush all pending events
await ts.flush();

// graceful shutdown — flushes and stops the background interval
await ts.shutdown();

In long-running servers (Express, Fastify, Next.js API routes), you typically don't need to call shutdown(). For scripts and one-off jobs, call it before exit to ensure all events are sent.
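For scripts, a try/finally wrapper guarantees shutdown runs even when the job throws. The `withShutdown` helper here is an illustrative pattern, not part of the SDK; it works with any client exposing a `shutdown()` method:

```typescript
// Run a one-off job, then always flush and stop the client, even on error.
async function withShutdown<T>(
  client: { shutdown(): Promise<void> },
  job: () => Promise<T>,
): Promise<T> {
  try {
    return await job();
  } finally {
    await client.shutdown(); // flushes pending events, stops the interval
  }
}
```

Usage: `await withShutdown(ts, async () => runMyAgent())`.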

Have questions? Join our Discord to connect with other developers and the 2Signal team.