POST /api/v1/route-model
Select the optimal model for a given input based on your routing rules. Routes by complexity, token count, or keywords — no LLM calls needed, so it adds near-zero latency.
Request

```http
POST /api/v1/route-model
Authorization: Bearer ts_...
Content-Type: application/json

{
  "input": "What is your refund policy?",
  "configName": "production-router"
}
```

curl Example
```bash
curl -X POST https://api.2signal.dev/api/v1/route-model \
  -H "Authorization: Bearer ts_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"input": "What is your refund policy?"}'
```

Parameters
| Field | Type | Required | Description |
|---|---|---|---|
| input | string | Yes | Text to analyze for routing |
| configName | string | No | Routing config to use (default config if omitted) |
Response (200)

```json
{
  "model": "gpt-4o-mini",
  "configName": "production-router",
  "complexity": 0.23
}
```

| Field | Description |
|---|---|
| model | The selected model name |
| configName | Which routing config was used |
| complexity | Computed complexity score (0–1) for the input |
Routing Rules
Configure routing rules in the dashboard or via the CLI. Each rule has a condition type:
| Condition | Description | Example |
|---|---|---|
| complexity | Routes based on a 0–1 complexity score (heuristic) | If complexity > 0.7 → use gpt-4o |
| token_count | Routes based on estimated input token count | If tokens > 2000 → use gpt-4o |
| keyword | Routes if input contains specific keywords | If contains "code" → use gpt-4o |
| always | Always matches — use as a fallback rule | Default → gpt-4o-mini |
Rule Evaluation
- Rules are evaluated in priority order (highest priority first)
- The first matching rule determines the model
- If no rule matches, the config's default model is used
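The evaluation order above can be sketched in Python. This is an illustrative model only — the rule shapes and field names (`condition`, `threshold`, `priority`, `keywords`) are assumptions, not the actual config schema:

```python
# Illustrative sketch of first-match rule evaluation, assuming a simple
# dict-based rule shape (not the actual 2Signal config schema).

def evaluate_rules(rules, complexity, token_count, text, default_model):
    """Return the model chosen by the first matching rule, highest priority first."""
    for rule in sorted(rules, key=lambda r: r["priority"], reverse=True):
        cond = rule["condition"]
        if cond == "always":
            return rule["model"]
        if cond == "complexity" and complexity > rule["threshold"]:
            return rule["model"]
        if cond == "token_count" and token_count > rule["threshold"]:
            return rule["model"]
        if cond == "keyword" and any(k in text for k in rule["keywords"]):
            return rule["model"]
    return default_model  # no rule matched: fall back to the config default

rules = [
    {"condition": "complexity", "threshold": 0.7, "model": "gpt-4o", "priority": 10},
    {"condition": "keyword", "keywords": ["code"], "model": "gpt-4o", "priority": 5},
    {"condition": "always", "model": "gpt-4o-mini", "priority": 0},
]
```

With these sample rules, a high-complexity input routes to gpt-4o, a low-complexity input mentioning "code" also routes to gpt-4o via the keyword rule, and everything else falls through to the always rule.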
Complexity Scoring
The complexity score is computed heuristically based on:
- Input length
- Vocabulary diversity
- Sentence structure
- Presence of technical terms, code, or multi-step instructions
No LLM calls are involved — the scoring is deterministic and takes under 1ms.
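As a rough illustration of how such a heuristic could combine those signals — the actual 2Signal formula is not public, so the weights and regexes below are invented — a deterministic 0–1 scorer might look like:

```python
import re

# Hypothetical complexity heuristic combining the signals listed above.
# The actual 2Signal scoring formula is not public; this only shows that
# a deterministic 0-1 score needs no LLM call.

def complexity_score(text: str) -> float:
    words = text.split()
    if not words:
        return 0.0
    length = min(len(words) / 500, 1.0)                          # input length
    diversity = len({w.lower() for w in words}) / len(words)     # vocabulary diversity
    sentences = max(len(re.split(r"[.!?]+", text.strip())), 1)
    structure = min(len(words) / sentences / 40, 1.0)            # avg sentence length
    # crude proxy for code / multi-step instructions
    technical = 1.0 if re.search(r"```|\bdef\b|\bclass\b|\bstep \d", text) else 0.0
    return round(0.3 * length + 0.2 * diversity + 0.2 * structure + 0.3 * technical, 2)
```

Because every term is bounded and no randomness is involved, the same input always yields the same score, which is what makes sub-millisecond routing possible.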
Example: Using in Your Agent

```python
import httpx
from openai import OpenAI

openai_client = OpenAI()

def route_and_call(query: str) -> str:
    # 1. Get the optimal model
    route = httpx.post(
        "https://api.2signal.dev/api/v1/route-model",
        headers={"Authorization": "Bearer ts_..."},
        json={"input": query},
    ).json()

    # 2. Call the selected model
    response = openai_client.chat.completions.create(
        model=route["model"],
        messages=[{"role": "user", "content": query}],
    )
    return response.choices[0].message.content
```

Error Responses
| Status | When |
|---|---|
| 400 | Missing `input` field |
| 401 | Invalid or missing API key |
| 404 | Routing config not found |