POST /api/v1/otel/traces
Accepts traces in OTLP/JSON format. Teams already using OpenTelemetry can send traces to 2Signal without changing their instrumentation — just point your OTel Collector or SDK at this endpoint.
Request
POST /api/v1/otel/traces
Authorization: Bearer ts_...
Content-Type: application/json
{
  "resourceSpans": [
    {
      "resource": {
        "attributes": [
          { "key": "service.name", "value": { "stringValue": "my-agent" } }
        ]
      },
      "scopeSpans": [
        {
          "scope": { "name": "openai-instrumentation", "version": "1.0.0" },
          "spans": [
            {
              "traceId": "0af7651916cd43dd8448eb211c80319c",
              "spanId": "b7ad6b7169203331",
              "parentSpanId": "",
              "name": "ChatCompletion",
              "kind": 3,
              "startTimeUnixNano": "1700000000000000000",
              "endTimeUnixNano": "1700000002000000000",
              "attributes": [
                { "key": "gen_ai.system", "value": { "stringValue": "openai" } },
                { "key": "gen_ai.request.model", "value": { "stringValue": "gpt-4o" } },
                { "key": "gen_ai.usage.input_tokens", "value": { "intValue": 150 } },
                { "key": "gen_ai.usage.output_tokens", "value": { "intValue": 50 } }
              ],
              "status": { "code": 1 }
            }
          ]
        }
      ]
    }
  ]
}
curl Example
curl -X POST https://api.2signal.dev/api/v1/otel/traces \
-H "Authorization: Bearer ts_your_api_key" \
-H "Content-Type: application/json" \
-d @otlp-export.json
Response (200)
{ "accepted": 1 }
The accepted count reflects how many OTel spans were queued for processing.
How It Works
The endpoint converts OTLP spans into 2Signal's internal trace and span format, then feeds them into the same processing pipeline as the native POST /api/v1/traces endpoint. This means evaluators, alerts, and dashboards work identically for OTel-sourced traces.
- OTel spans sharing a traceId are grouped into a single 2Signal Trace
- The root span name becomes the trace name
- Resource attributes (e.g. service.name) are preserved as trace metadata and tags
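The grouping rules above can be sketched roughly as follows. This is a simplified illustration using the OTLP/JSON field names from the request example; the function name and the output dict shape are hypothetical, not 2Signal's internal types:

```python
from collections import defaultdict

def group_spans_by_trace(resource_spans):
    """Group OTLP spans into one trace per traceId (illustrative sketch)."""
    traces = defaultdict(lambda: {"name": None, "spans": [], "tags": {}})
    for rs in resource_spans:
        # Resource attributes (e.g. service.name) become trace metadata/tags.
        resource_attrs = {
            a["key"]: next(iter(a["value"].values()))
            for a in rs.get("resource", {}).get("attributes", [])
        }
        for ss in rs.get("scopeSpans", []):
            for span in ss.get("spans", []):
                trace = traces[span["traceId"]]
                trace["spans"].append(span)
                trace["tags"].update(resource_attrs)
                # The root span (no parentSpanId) names the trace.
                if not span.get("parentSpanId"):
                    trace["name"] = span["name"]
    return dict(traces)
```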
Attribute Mapping
2Signal automatically extracts structured data from OpenTelemetry semantic convention attributes:
| OTel Attribute | 2Signal Field |
|---|---|
| gen_ai.request.model / gen_ai.response.model | Span model |
| gen_ai.usage.input_tokens | Prompt tokens |
| gen_ai.usage.output_tokens | Completion tokens |
| gen_ai.usage.cost | Span cost |
| gen_ai.prompt | Span input |
| gen_ai.completion | Span output |
| service.name (resource) | Trace tag (service:name) |
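A minimal sketch of this extraction, assuming the OTLP/JSON attribute encoding shown in the request example; the output field names here are illustrative, not 2Signal's actual schema:

```python
def otlp_attrs_to_dict(attributes):
    """Flatten OTLP [{key, value: {stringValue|intValue|...}}] pairs."""
    return {a["key"]: next(iter(a["value"].values())) for a in attributes}

def extract_llm_fields(span):
    """Pull gen_ai.* semantic-convention values out of an OTLP span."""
    attrs = otlp_attrs_to_dict(span.get("attributes", []))
    return {
        # Prefer the response model when both are present.
        "model": attrs.get("gen_ai.response.model") or attrs.get("gen_ai.request.model"),
        "prompt_tokens": attrs.get("gen_ai.usage.input_tokens"),
        "completion_tokens": attrs.get("gen_ai.usage.output_tokens"),
        "cost": attrs.get("gen_ai.usage.cost"),
        "input": attrs.get("gen_ai.prompt"),
        "output": attrs.get("gen_ai.completion"),
    }
```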
Span Type Inference
2Signal infers the span type from OTel attributes. You can also set it explicitly:
| Condition | 2Signal Span Type |
|---|---|
| 2signal.span.type attribute set | Uses the explicit value (AGENT, TOOL, LLM, etc.) |
| gen_ai.* or llm.* attributes present | LLM |
| tool.name or rpc.method attribute present | TOOL |
| db.system or db.statement attribute present | RETRIEVAL |
| None of the above | CUSTOM |
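The table reads top to bottom as a precedence chain, which can be expressed as a small sketch (the function name is hypothetical; the attribute names and order come from the table above):

```python
def infer_span_type(attrs: dict) -> str:
    """Infer a 2Signal span type from flattened OTel attributes."""
    # An explicit override always wins.
    if "2signal.span.type" in attrs:
        return attrs["2signal.span.type"]
    if any(k.startswith(("gen_ai.", "llm.")) for k in attrs):
        return "LLM"
    if "tool.name" in attrs or "rpc.method" in attrs:
        return "TOOL"
    if "db.system" in attrs or "db.statement" in attrs:
        return "RETRIEVAL"
    return "CUSTOM"
```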
ID Conversion
OTel uses hex-encoded trace IDs (32 chars) and span IDs (16 chars). 2Signal stores IDs as UUIDs. The conversion is deterministic — the same OTel ID always maps to the same UUID.
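One deterministic scheme consistent with this description (illustrative only; 2Signal's actual conversion may differ): a 32-character hex trace ID is exactly 16 bytes, so it maps directly onto a UUID, while an 8-byte span ID is too short and must be expanded, for example via a name-based UUIDv5 hash:

```python
import uuid

def trace_id_to_uuid(trace_id_hex: str) -> uuid.UUID:
    # 32 hex chars = 16 bytes, the exact size of a UUID.
    return uuid.UUID(hex=trace_id_hex)

def span_id_to_uuid(span_id_hex: str) -> uuid.UUID:
    # 16 hex chars = 8 bytes, too short for a UUID; derive one
    # deterministically with a name-based (v5) hash.
    return uuid.uuid5(uuid.NAMESPACE_OID, span_id_hex)
```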
OTel Collector Configuration
To route your OTel Collector to 2Signal, add an OTLP/HTTP exporter:
# otel-collector-config.yaml
exporters:
  otlphttp/2signal:
    endpoint: https://api.2signal.dev/api/v1/otel
    headers:
      authorization: "Bearer ts_your_api_key"
service:
  pipelines:
    traces:
      exporters: [otlphttp/2signal]
OTel SDK Configuration (Python)
You can also send traces directly from the OpenTelemetry Python SDK:
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
exporter = OTLPSpanExporter(
    endpoint="https://api.2signal.dev/api/v1/otel/traces",
    headers={"authorization": "Bearer ts_your_api_key"},
)
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
OTel SDK Configuration (Node.js)
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";

const exporter = new OTLPTraceExporter({
  url: "https://api.2signal.dev/api/v1/otel/traces",
  headers: { authorization: "Bearer ts_your_api_key" },
});
const provider = new NodeTracerProvider();
provider.addSpanProcessor(new BatchSpanProcessor(exporter));
provider.register();
Encoding
This endpoint accepts OTLP/JSON only (application/json). Protobuf encoding (application/x-protobuf) is not currently supported. If you're using the OTel Collector, use the otlphttp exporter, not the gRPC-based otlp exporter.
Limits
- Max 5 MB total payload
- Same rate limits and monthly usage quotas as the native traces endpoint
- Auth, billing, and evaluators all work identically
Error Responses
| Status | When |
|---|---|
| 400 | Invalid OTLP JSON structure |
| 401 | Invalid or missing API key |
| 413 | Payload exceeds 5 MB |
| 415 | Protobuf content type (use JSON instead) |
| 429 | Rate limit exceeded |
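Client-side, 429 is the only status worth retrying; the other 4xx errors above indicate a request that must be fixed before resending. A sketch of such a retry policy (the function name and the (status, body) callable shape are assumptions for illustration, not part of the API):

```python
import time

RETRYABLE = {429}  # rate limit; everything else in the table is a client error

def post_with_retry(post, payload, max_attempts=4, base_delay=0.5):
    """Call `post(payload) -> (status, body)`, retrying 429 with backoff."""
    for attempt in range(max_attempts):
        status, body = post(payload)
        if status < 400:
            return body
        if status not in RETRYABLE or attempt == max_attempts - 1:
            raise RuntimeError(f"OTLP export failed with HTTP {status}")
        # Exponential backoff: base_delay, 2x, 4x, ...
        time.sleep(base_delay * 2 ** attempt)
```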