Self-Hosting & Deployment
2Signal can be self-hosted on your own infrastructure. This guide covers the architecture, Docker setup, environment configuration, and operational considerations.
Architecture Overview
```
┌──────────────┐     ┌──────────────────────┐
│  Python SDK  │────▶│  Next.js App (Web)   │
│ (your agent) │     │  - Dashboard (tRPC)  │
└──────────────┘     │  - REST API (/v1)    │
                     │  - Auth middleware   │
                     └──────────┬───────────┘
                                │
                 ┌──────────────┼────────────┐
                 ▼              ▼            ▼
           ┌────────────┐ ┌───────────┐ ┌──────────┐
           │ PostgreSQL │ │   Redis   │ │    S3    │
           │ (Supabase) │ │  (BullMQ) │ │   (raw   │
           │            │ │           │ │  events) │
           └────────────┘ └─────┬─────┘ └──────────┘
                                │
                         ┌──────▼──────┐
                         │   Workers   │
                         │  - trace    │
                         │  - score    │
                         │  - eval     │
                         │  - batch    │
                         │  - alert    │
                         │  - replay   │
                         │  - cleanup  │
                         └─────────────┘
```

Prerequisites
- PostgreSQL 14+ — primary data store (Supabase or standalone)
- Redis 6+ — BullMQ job queues and rate limiting
- Node.js 18+ — for the Next.js app and workers
- S3-compatible storage — raw event persistence (AWS S3, MinIO, R2)
- Supabase project — for authentication (Auth + anon key)
Docker Deployment
The project includes a multi-stage Dockerfile that builds a standalone Next.js output:
```bash
# Build and run the web app
docker build -t 2signal-web .
docker run -p 3000:3000 --env-file .env 2signal-web

# Run workers in a separate container
docker run --env-file .env 2signal-web node workers/index.js
```

Docker Compose
```yaml
version: "3.8"
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: twosignal
      POSTGRES_PASSWORD: your_password
      POSTGRES_DB: twosignal
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
  web:
    build: .
    ports:
      - "3000:3000"
    env_file: .env
    depends_on:
      - postgres
      - redis
  workers:
    build: .
    command: node workers/index.js
    env_file: .env
    depends_on:
      - postgres
      - redis
volumes:
  pgdata:
```

Environment Variables
See the full Environment Variables reference for all required and optional variables. At minimum you need:
```bash
# Database
DATABASE_URL=postgresql://twosignal:password@localhost:5432/twosignal

# Redis
REDIS_URL=redis://localhost:6379

# Supabase Auth
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJ...

# S3 (raw event storage)
AWS_ACCESS_KEY_ID=your_key
AWS_SECRET_ACCESS_KEY=your_secret
S3_BUCKET_NAME=twosignal-traces

# LLM Judge (optional, for LLM evaluator)
OPENAI_API_KEY=sk-...
```

Database Setup
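Missing environment variables usually surface later as runtime crashes in the web app or workers, so a quick pre-flight check before booting can save a debugging round-trip. A minimal sketch — the variable list is taken from the minimum set above, and the `missing_vars` helper is illustrative, not part of the project:

```python
import os

# Minimum variables from the example .env above (list taken from this guide)
REQUIRED_VARS = [
    "DATABASE_URL",
    "REDIS_URL",
    "NEXT_PUBLIC_SUPABASE_URL",
    "NEXT_PUBLIC_SUPABASE_ANON_KEY",
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
    "S3_BUCKET_NAME",
]

def missing_vars(env, required=REQUIRED_VARS):
    """Return the required variables that are absent or empty in `env`."""
    return [name for name in required if not env.get(name)]

# Usage: check the real process environment before starting the stack
missing = missing_vars(os.environ)
```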
```bash
# Install dependencies
pnpm install

# Push Prisma schema to your database
pnpm --filter db db:push

# Or run migrations for production
pnpm --filter db prisma migrate deploy
```

Running Workers
Workers must run as a separate process. They handle async trace writing, score creation, evaluation execution, and data retention cleanup.
```bash
# Development
npx tsx apps/web/workers/index.ts

# Production (Docker)
node workers/index.js
```

Worker Types
| Worker | Queue | Purpose |
|---|---|---|
| Trace Writer | trace-writer | Writes traces/spans to PostgreSQL, computes cost rollups, increments usage |
| Score Writer | score-writer | Writes evaluation scores to PostgreSQL |
| Eval Runner | eval-runner | Runs enabled evaluators against new traces, enqueues alert-checker |
| Batch Eval Runner | batch-eval-runner | Runs evaluators across dataset items for batch evaluation (concurrency: 3) |
| Alert Checker | alert-checker | Evaluates alert rules after eval completion, delivers via email/Slack/webhook |
| Trace Replay | trace-replay | Re-executes LLM spans with model/prompt overrides (concurrency: 3) |
| Retention Cleanup | retention-cleanup | Deletes data older than plan retention period (scheduled daily) |
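Conceptually, the worker process is a dispatch table from queue name to handler. The real workers run in Node on BullMQ, but the routing idea can be sketched in Python — queue names come from the table above, while the handler bodies and `dispatch` helper are placeholders, not the actual implementation:

```python
# Illustrative simulation only: maps queue names to handler functions,
# mirroring how one BullMQ Worker is registered per queue.
HANDLERS = {}

def worker(queue_name):
    """Register a handler for a queue (analogue of instantiating a Worker)."""
    def register(fn):
        HANDLERS[queue_name] = fn
        return fn
    return register

@worker("trace-writer")
def write_trace(job):
    # Placeholder: the real worker writes traces/spans and cost rollups
    return f"wrote trace {job['trace_id']}"

@worker("score-writer")
def write_score(job):
    # Placeholder: the real worker persists evaluation scores
    return f"wrote score for {job['trace_id']}"

def dispatch(queue_name, job):
    """Route a dequeued job to its registered handler."""
    return HANDLERS[queue_name](job)
```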
Data Flow
- SDK sends events to `POST /api/v1/traces`
- API persists raw events to S3 (durable storage)
- API enqueues events to Redis (BullMQ)
- Trace Writer dequeues and writes to PostgreSQL
- Trace Writer triggers Eval Runner for new traces
- Eval Runner loads project evaluators and creates scores
- Alert Checker evaluates alert rules and fires notifications if thresholds are crossed
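The steps above can be simulated end-to-end with in-memory stand-ins — dicts for S3 and PostgreSQL, a list for the Redis queue. Everything here is illustrative (including the toy evaluator), not the real implementation:

```python
# In-memory stand-ins for the real stores
s3, queue, db, scores = {}, [], {}, []

def ingest(event):
    """API: persist the raw event durably, then enqueue for async processing."""
    s3[event["trace_id"]] = event      # step 2: raw event to S3
    queue.append(event)                # step 3: enqueue to Redis (BullMQ)

def trace_writer():
    """Worker: dequeue, write to the primary store, trigger evals."""
    while queue:
        event = queue.pop(0)           # step 4: dequeue
        db[event["trace_id"]] = event  # step 4: write to PostgreSQL
        eval_runner(event["trace_id"]) # step 5: trigger Eval Runner

def eval_runner(trace_id):
    """Worker: run evaluators and record scores (toy non-empty-output check)."""
    passed = len(db[trace_id].get("output", "")) > 0
    scores.append({"trace_id": trace_id, "passed": passed})  # step 6

ingest({"trace_id": "t1", "output": "hello"})
trace_writer()
```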
Production Considerations
Scaling
- Web app — stateless, scale horizontally behind a load balancer
- Workers — scale by running multiple worker containers. BullMQ handles job distribution.
- Redis — single instance is sufficient for most workloads. Use Redis Cluster or Upstash for high availability.
- PostgreSQL — add read replicas if dashboard queries become slow. Write path goes through workers.
Monitoring
- Check `GET /api/v1/health` for app liveness
- Monitor Redis queue depths for worker backpressure
- Track PostgreSQL connection pool utilization
- Set up alerts on the retention cleanup cron to ensure it runs daily
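The queue-depth check reduces to a threshold decision per queue; the depth readings themselves would come from BullMQ (e.g. a queue's waiting-job count) or from Redis directly. The function and threshold below are an illustrative sketch, not shipped tooling:

```python
def backpressure_alerts(depths, threshold=1000):
    """Return the queues whose waiting-job count exceeds the threshold."""
    return [name for name, depth in depths.items() if depth > threshold]

# Example snapshot (in production, poll BullMQ/Redis for these numbers)
snapshot = {"trace-writer": 12, "eval-runner": 4500, "score-writer": 3}
hot_queues = backpressure_alerts(snapshot)  # candidates for adding workers
```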
Backups
- PostgreSQL: regular pg_dump or managed provider snapshots
- S3: enable versioning for raw event durability
- Redis: persistence not required (BullMQ jobs are transient; data of record is in PostgreSQL)
Pointing the SDK at Your Instance
```python
from twosignal import TwoSignal

client = TwoSignal(
    api_key="2s_live_your_key",
    base_url="https://your-2signal-instance.com",
)
```

Or set the environment variable:
```bash
export TWOSIGNAL_BASE_URL=https://your-2signal-instance.com
```
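When both mechanisms are available, the usual precedence is: explicit `base_url` argument, then `TWOSIGNAL_BASE_URL`, then a default. The sketch below shows how such fallback logic might look; it is not the SDK's actual internals, and the default URL is a placeholder:

```python
import os

# Placeholder default; substitute your own instance URL
DEFAULT_BASE_URL = "https://your-2signal-instance.com"

def resolve_base_url(explicit=None, env=None):
    """Precedence: explicit argument > TWOSIGNAL_BASE_URL > default."""
    env = os.environ if env is None else env
    return explicit or env.get("TWOSIGNAL_BASE_URL") or DEFAULT_BASE_URL
```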