
LangChain

Add observability to your LangChain application and see chain runs, LLM calls, and tool executions in Kopai. LangChain routes traces through LangSmith's OpenTelemetry integration: the tracing layer built into LangSmith can export directly to any OTLP endpoint, so you neither have to create OTel spans for LangChain by hand nor send anything to LangSmith's cloud backend.

  • Python 3.11+
  • API key from a model provider (optional — this example falls back to a fake chat model)
  • Kopai running locally:
npx @kopai/app start

Install LangChain, the langsmith[otel] extra, and the Python OpenTelemetry SDK:

pip install \
  "langchain" \
  "langchain-community" \
  "langchain-openai" \
  "langsmith[otel]" \
  "opentelemetry-api" \
  "opentelemetry-sdk" \
  "opentelemetry-exporter-otlp-proto-http"

The langsmith[otel] extra pulls in the OpenTelemetry bridge that converts LangSmith runs into OTel spans.

Create app.py. Before importing any LangChain module, set the LangSmith environment variables and initialize the OTel SDK so the exporter is ready when the first span is created:

import os

# Route LangChain traces through LangSmith's OpenTelemetry bridge,
# bypassing the LangSmith cloud backend entirely.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_OTEL_ENABLED"] = "true"
os.environ["LANGSMITH_OTEL_ONLY"] = "true"
os.environ.setdefault("LANGSMITH_API_KEY", "dummy")

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

resource = Resource.create({"service.name": "my-langchain-app"})
provider = TracerProvider(resource=resource)
provider.add_span_processor(
    SimpleSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
)
trace.set_tracer_provider(provider)

The four LANGSMITH_* variables are the non-obvious part:

  • LANGSMITH_TRACING=true turns LangSmith tracing on
  • LANGSMITH_OTEL_ENABLED=true enables the OTel bridge
  • LANGSMITH_OTEL_ONLY=true skips the LangSmith cloud backend — traces go only to your OTel exporter
  • LANGSMITH_API_KEY must be set to any non-empty string (LangSmith validates its presence even in OTel-only mode)
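If you prefer to keep configuration out of the code, the same four variables can be exported in the shell before the script starts (values copied from the snippet above; this is an equivalent alternative, not an extra step):

```shell
export LANGSMITH_TRACING=true
export LANGSMITH_OTEL_ENABLED=true
export LANGSMITH_OTEL_ONLY=true
export LANGSMITH_API_KEY=dummy   # any non-empty string works in OTel-only mode

python app.py
```

Setting them in the shell means they are already in place before any LangChain import runs, which is exactly what the in-code `os.environ` assignments are arranging.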

Add a minimal chat loop at the bottom of app.py. This uses a fake chat model so you don’t need an API key, but you can swap it for ChatOpenAI whenever you want:

from langchain_community.chat_models.fake import FakeListChatModel
from langchain_core.messages import HumanMessage, SystemMessage

llm = FakeListChatModel(responses=[
    "Hello! How can I help you today?",
    "That's an interesting question.",
])

messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Tell me about OpenTelemetry."),
]
response = llm.invoke(messages)
print(response.content)

# LangSmith batches runs on its own background thread before handing
# them to the OTel exporter. Give it a moment to drain before shutdown,
# otherwise runs get dropped when the exporter closes.
import time
time.sleep(2)
provider.force_flush()
provider.shutdown()
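If several scripts share this shutdown sequence, the sleep-then-flush dance can be wrapped in a small helper. A minimal sketch; `drain_traces` is a name of our own, not a LangSmith or OpenTelemetry API:

```python
import time

def drain_traces(provider, delay: float = 2.0) -> None:
    """Wait for LangSmith's background batcher to hand runs to the
    exporter, then flush and shut down the OTel provider.

    Call this once, at the very end of the script."""
    time.sleep(delay)
    provider.force_flush()
    provider.shutdown()
```

At the bottom of app.py, the last four lines then collapse to a single `drain_traces(provider)` call.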

To use a real model, set your provider key and swap the model:

# export OPENAI_API_KEY=sk-...
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o-mini")

The exporter in app.py points at http://localhost:4318 — the OTLP endpoint exposed by the local Kopai instance you started with npx @kopai/app start. Run the script:

python app.py

LangSmith captures the chain run, converts it to OTel spans, and the SDK flushes them to your local Kopai on shutdown. Verify they arrived using the Kopai CLI:

# List recent traces from your app
npx @kopai/cli traces search --service my-langchain-app --json
# Inspect a specific trace (copy a traceId from above)
npx @kopai/cli traces get <traceId> --json

Each trace contains one span per LLM invocation, with the prompt and completion captured under the gen_ai.prompt and gen_ai.completion attributes. Wrap your calls in an LCEL chain (e.g. prompt | llm | parser) to produce multi-span traces with chain and tool spans as children.

Once it works locally, sending to Kopai Cloud is a one-line change. Go back to the OTLPSpanExporter you created in app.py and swap the local endpoint for the cloud endpoint plus an Authorization header:

# Replace the local exporter with this:
OTLPSpanExporter(
    endpoint="https://otlp-http.kopai.app/v1/traces",
    headers={"Authorization": "Bearer YOUR_BACKEND_TOKEN"},
)

Nothing else changes — the LangSmith env vars, the TracerProvider, the chain code, and the verification commands all stay the same. Re-run python app.py and your traces flow to the cloud instead of localhost.
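If you want one script that targets localhost by default and the cloud when credentials are present, the exporter arguments can be derived from the environment. A sketch; the variable names `KOPAI_OTLP_ENDPOINT` and `KOPAI_TOKEN` are our own invention, so pick whatever suits your deployment:

```python
import os

# Hypothetical env var names for this sketch -- not read by Kopai itself.
endpoint = os.environ.get("KOPAI_OTLP_ENDPOINT", "http://localhost:4318/v1/traces")
headers = {}
token = os.environ.get("KOPAI_TOKEN")
if token:
    # Only attach the Authorization header when a token is configured,
    # i.e. when pointing at the cloud endpoint rather than localhost.
    headers["Authorization"] = f"Bearer {token}"

# Then construct the exporter once, for either target:
# exporter = OTLPSpanExporter(endpoint=endpoint, headers=headers or None)
print(endpoint)
```

With nothing set, this resolves to the local endpoint used throughout this guide, so local development needs no configuration at all.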

For a complete working example with an interactive chat REPL:

LangChain Example