LangChain
Integration
Add observability to your LangChain application and see chain runs, LLM calls, and tool executions in Kopai. LangChain routes traces through LangSmith’s OpenTelemetry integration: the tracing layer built into LangSmith can export directly to any OTLP endpoint, so you do not need to instrument LangChain with the OpenTelemetry SDK yourself or run LangSmith’s cloud backend.
Prerequisites
- Python 3.11+
- API key from a model provider (optional — this example falls back to a fake chat model)
- Kopai running locally:
```sh
npx @kopai/app start
```
Install dependencies
Install LangChain, the `langsmith[otel]` extra, and the Python OpenTelemetry SDK:
```sh
pip install \
  "langchain" \
  "langchain-community" \
  "langchain-openai" \
  "langsmith[otel]" \
  "opentelemetry-api" \
  "opentelemetry-sdk" \
  "opentelemetry-exporter-otlp-proto-http"
```
The `langsmith[otel]` extra pulls in the OpenTelemetry bridge that converts LangSmith runs into OTel spans.
Configure LangSmith and OpenTelemetry
Create `app.py`. Before importing any LangChain module, set the LangSmith environment variables and initialize the OTel SDK so the exporter is ready when the first span is created:
```python
import os

# Route LangChain traces through LangSmith's OpenTelemetry bridge,
# bypassing the LangSmith cloud backend entirely.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_OTEL_ENABLED"] = "true"
os.environ["LANGSMITH_OTEL_ONLY"] = "true"
os.environ.setdefault("LANGSMITH_API_KEY", "dummy")

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

resource = Resource.create({"service.name": "my-langchain-app"})
provider = TracerProvider(resource=resource)
provider.add_span_processor(
    SimpleSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
)
trace.set_tracer_provider(provider)
```
The three `LANGSMITH_OTEL_*` variables are the non-obvious part:
- `LANGSMITH_TRACING=true` turns LangSmith tracing on
- `LANGSMITH_OTEL_ENABLED=true` enables the OTel bridge
- `LANGSMITH_OTEL_ONLY=true` skips the LangSmith cloud backend; traces go only to your OTel exporter
- `LANGSMITH_API_KEY` must be set to any non-empty string (LangSmith validates its presence even in OTel-only mode)
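The same variables can also be set in the shell instead of in code, as long as they are in the environment before Python starts. A minimal sketch for a POSIX shell:
```sh
export LANGSMITH_TRACING=true
export LANGSMITH_OTEL_ENABLED=true
export LANGSMITH_OTEL_ONLY=true
export LANGSMITH_API_KEY=dummy
```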
Run a chain
Add a minimal chat exchange at the bottom of `app.py`. It uses a fake chat model so you don’t need an API key, but you can swap in `ChatOpenAI` whenever you want:
```python
from langchain_community.chat_models.fake import FakeListChatModel
from langchain_core.messages import HumanMessage, SystemMessage

llm = FakeListChatModel(responses=[
    "Hello! How can I help you today?",
    "That's an interesting question.",
])

messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Tell me about OpenTelemetry."),
]
response = llm.invoke(messages)
print(response.content)

# LangSmith batches runs on its own background thread before handing
# them to the OTel exporter. Give it a moment to drain before shutdown,
# otherwise runs get dropped when the exporter closes.
import time
time.sleep(2)
provider.force_flush()
provider.shutdown()
```
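If your script can exit from more than one place, you could register the drain-and-flush sequence once instead of calling it inline. A sketch using the standard library’s `atexit` (not part of the original example):
```python
import atexit
import time

def _shutdown_tracing() -> None:
    # Same drain-then-flush sequence as above, run once at interpreter exit.
    time.sleep(2)
    provider.force_flush()
    provider.shutdown()

atexit.register(_shutdown_tracing)
```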
To use a real model, set your provider key and swap the model:
```python
# export OPENAI_API_KEY=sk-...
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
```
Run and verify locally
The exporter in `app.py` points at `http://localhost:4318`, the OTLP endpoint exposed by the local Kopai instance you started with `npx @kopai/app start`. Run the script:
```sh
python app.py
```
LangSmith captures the chain run, converts it to OTel spans, and the SDK flushes them to your local Kopai on shutdown. Verify they arrived using the Kopai CLI:
```sh
# List recent traces from your app
npx @kopai/cli traces search --service my-langchain-app --json

# Inspect a specific trace (copy a traceId from above)
npx @kopai/cli traces get <traceId> --json
```
Each trace contains one span per LLM invocation, with the prompt and completion captured under the `gen_ai.prompt` and `gen_ai.completion` attributes. Wrap your calls in an LCEL chain (e.g. `prompt | llm | parser`) to produce multi-span traces with chain and tool spans as children, as sketched below.
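For instance, a minimal LCEL chain reusing the `llm` from above (a sketch; the prompt wording and the `{question}` variable are illustrative):
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{question}"),
])

# Each component in the chain shows up as its own child span in the trace.
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"question": "Tell me about OpenTelemetry."}))
```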
Sending to Kopai.app in the cloud
Once everything works locally, pointing at Kopai Cloud is a small change to the exporter. Go back to the `OTLPSpanExporter` you created in `app.py` and swap the local endpoint for the cloud endpoint plus an `Authorization` header:
```python
# Replace the local exporter with this:
OTLPSpanExporter(
    endpoint="https://otlp-http.kopai.app/v1/traces",
    headers={"Authorization": "Bearer YOUR_BACKEND_TOKEN"},
)
```
Nothing else changes: the LangSmith env vars, the TracerProvider, the chain code, and the verification commands all stay the same. Re-run `python app.py` and your traces flow to the cloud instead of localhost.
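Rather than hardcoding the token, you could read it from the environment. A sketch, where `KOPAI_API_TOKEN` is just an illustrative variable name:
```python
import os

# KOPAI_API_TOKEN is a hypothetical name; use whatever your deployment
# already provides for secrets.
OTLPSpanExporter(
    endpoint="https://otlp-http.kopai.app/v1/traces",
    headers={"Authorization": f"Bearer {os.environ['KOPAI_API_TOKEN']}"},
)
```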
Working Example
For a complete working example with an interactive chat REPL: