# LangChain Integration
Trace LangChain chains, LLM calls, tool invocations, and retriever queries using a callback handler. Every LangChain event becomes a 2Signal span.
## Installation

```bash
pip install "twosignal[langchain]"
```

## Usage
```python
from twosignal import TwoSignal
from twosignal.wrappers.langchain import TwoSignalCallbackHandler
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

ts = TwoSignal()
handler = TwoSignalCallbackHandler()

llm = ChatOpenAI(model="gpt-4o", callbacks=[handler])

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}"),
])

chain = prompt | llm
result = chain.invoke(
    {"input": "Hello"},
    config={"callbacks": [handler]},
)
```

## What Gets Traced
| LangChain Event | Span Type | Details |
|---|---|---|
| LLM / Chat Model call | LLM | Model, messages, tokens, cost |
| Chain run | CHAIN | Chain name, input, output |
| Tool invocation | TOOL | Tool name, input, output |
| Retriever query | RETRIEVAL | Query, retrieved documents |
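The mapping above can be pictured as a simple dispatch from LangChain callback events to span types. The sketch below is illustrative only: the callback method names are LangChain's real hooks, but the dispatch table and `span_type_for` helper are assumptions, not 2Signal's actual code.

```python
# Illustrative dispatch from LangChain callback events to span types.
# The event names are real LangChain callback hooks; the table itself
# is a sketch, not 2Signal's implementation.
EVENT_TO_SPAN_TYPE = {
    "on_chat_model_start": "LLM",
    "on_llm_start": "LLM",
    "on_chain_start": "CHAIN",
    "on_tool_start": "TOOL",
    "on_retriever_start": "RETRIEVAL",
}

def span_type_for(event_name: str) -> str:
    # default to CHAIN for events not listed above (an assumption)
    return EVENT_TO_SPAN_TYPE.get(event_name, "CHAIN")
```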
## With Agents
```python
from langchain.agents import create_openai_tools_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool

@tool
def search(query: str) -> str:
    """Search the web."""
    return "Results for: " + query

# the agent prompt must include an agent_scratchpad placeholder
agent_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}"),
    MessagesPlaceholder("agent_scratchpad"),
])

agent = create_openai_tools_agent(llm, [search], agent_prompt)
executor = AgentExecutor(agent=agent, tools=[search])

# all agent steps are traced
result = executor.invoke(
    {"input": "Search for AI testing tools"},
    config={"callbacks": [handler]},
)
```

## With LCEL Chains
```python
from langchain_core.output_parsers import StrOutputParser

output_parser = StrOutputParser()

# callbacks work with any LCEL chain
chain = prompt | llm | output_parser
result = chain.invoke(
    {"input": "Hello"},
    config={"callbacks": [handler]},
)
```

## How It Works
`TwoSignalCallbackHandler` implements LangChain's callback interface. It listens for start, end, and error events and creates a corresponding 2Signal span for each. Spans are automatically nested: a tool call inside an agent run becomes a child span of the agent's span.
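The nesting falls out of LangChain's callback protocol: every callback receives a `run_id` and a `parent_run_id`, so each new span can be attached to the span opened for its parent run. A minimal sketch of that bookkeeping, assuming nothing about 2Signal's internals (the `Span` and `SpanRecorder` names are illustrative):

```python
import uuid
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Span:
    name: str
    run_id: uuid.UUID
    parent_run_id: Optional[uuid.UUID] = None
    children: list = field(default_factory=list)

class SpanRecorder:
    """Illustrative recorder: builds a span tree from the
    run_id / parent_run_id pairs LangChain passes to callbacks."""
    def __init__(self):
        self.spans = {}   # run_id -> Span
        self.roots = []   # spans with no parent

    def on_start(self, name, run_id, parent_run_id=None):
        span = Span(name, run_id, parent_run_id)
        self.spans[run_id] = span
        parent = self.spans.get(parent_run_id)
        if parent is not None:
            parent.children.append(span)  # nested under the enclosing run
        else:
            self.roots.append(span)       # top-level span

rec = SpanRecorder()
agent_id, tool_id = uuid.uuid4(), uuid.uuid4()
rec.on_start("agent_run", agent_id)
rec.on_start("search_tool", tool_id, parent_run_id=agent_id)
# the tool span is now a child of the agent span
```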