Send LangChain traces and logs to Bronto via OpenTelemetry
Instrument a LangChain Python application to send traces and logs to Bronto using OpenTelemetry auto-instrumentation and a local OTel Collector.
This page covers trace and log instrumentation for LangChain applications using the OpenTelemetry Python SDK. The opentelemetry-instrument wrapper automatically patches LangChain with zero code changes, exporting spans and logs to a local OTel Collector over OTLP/gRPC, which forwards them to Bronto.
If you don’t have a Collector and want to export directly from your application to Bronto, see Direct export to Bronto at the bottom of this page.
Bronto requires a dedicated exporter for traces (`otlphttp/brontotraces`) separate from the logs exporter (`otlphttp/bronto`). Create an `otel-config.yaml` file in the same directory as the Collector binary:
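A minimal sketch of such a config is shown below. The endpoint URLs and the API-key header name are placeholders, not real Bronto values — substitute the ingestion endpoints and credentials from your Bronto account:

```yaml
# Sketch only: <bronto-traces-endpoint>, <bronto-logs-endpoint>, and the
# x-bronto-api-key header are placeholders for your account's real values.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  otlphttp/brontotraces:
    endpoint: https://<bronto-traces-endpoint>
    headers:
      x-bronto-api-key: <YOUR_API_KEY>
  otlphttp/bronto:
    endpoint: https://<bronto-logs-endpoint>
    headers:
      x-bronto-api-key: <YOUR_API_KEY>

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp/brontotraces]
    logs:
      receivers: [otlp]
      exporters: [otlphttp/bronto]
```

Note that each exporter must be wired into its own pipeline: spans flow through the `traces` pipeline to `otlphttp/brontotraces`, and log records through the `logs` pipeline to `otlphttp/bronto`.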
Create a virtual environment and install the OpenTelemetry distribution alongside your LangChain provider package:
```shell
python3 -m venv .venv
source .venv/bin/activate
pip install opentelemetry-distro opentelemetry-exporter-otlp
pip install opentelemetry-instrumentation-langchain
# Install your LLM provider
pip install langchain-openai  # or langchain-anthropic
# Auto-install instrumentation packages for all detected libraries
opentelemetry-bootstrap -a install
```
| Package | Purpose |
| --- | --- |
| `opentelemetry-distro` | OTel distribution: includes the SDK and the auto-instrumentation entry point |
| `opentelemetry-exporter-otlp` | OTLP gRPC/HTTP exporter |
| `opentelemetry-instrumentation-langchain` | Auto-patches LangChain with OTel spans |
Always run opentelemetry-bootstrap -a install after adding new packages. It detects installed libraries and installs the matching OTel instrumentation packages automatically.
The example below creates a simple prompt → LLM → parser chain. No OTel imports are needed — the opentelemetry-instrument wrapper handles everything at runtime.
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{question}"),
])
llm = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

chain = prompt | llm | parser

response = chain.invoke({"question": "What is OpenTelemetry in one sentence?"})
print(response)
```
Set the required environment variables and run your app through the opentelemetry-instrument wrapper. This requires no changes to your application code:
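For example, assuming a Collector listening on the default local gRPC port and an application entry point named `app.py` (both are assumptions — adjust for your setup):

```shell
# Service name is your choice; it is how you will find this app in Bronto
export OTEL_SERVICE_NAME=langchain-demo
# Point the SDK at the local Collector's OTLP/gRPC receiver
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
# Forward Python logging records as OTel log records
export OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true

# Run the app under the auto-instrumentation wrapper — no code changes needed
opentelemetry-instrument python app.py
```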
The LangChain instrumentation automatically captures spans for every step in the chain, following the OpenTelemetry Semantic Conventions for GenAI:
| Span name | Description |
| --- | --- |
| `ChatOpenAI.invoke` / `ChatAnthropic.invoke` | LLM call duration, model name, token counts |
| `ChatPromptTemplate.invoke` | Prompt formatting step |
| `RunnableSequence.invoke` | Full chain execution |
| `StrOutputParser.parse` | Output parsing step |
Captured attributes include gen_ai.usage.input_tokens, gen_ai.usage.output_tokens, gen_ai.request.model, and the full prompt and completion content for debugging.
After running your instrumented application, open the Search page in Bronto. Filter by the service name you set in `OTEL_SERVICE_NAME`; traces and logs should appear within a few seconds.

If no data appears, check that:

- The OTel Collector is running and reachable on ports 4317 (gRPC) and 4318 (HTTP).
- Your `otel-config.yaml` has separate exporters for traces (`otlphttp/brontotraces`) and logs (`otlphttp/bronto`), and both are wired into their respective pipelines.
- `opentelemetry-bootstrap -a install` was run after installing LangChain; without this step, the LangChain instrumentation package may not be installed.
For short-lived scripts, the batch exporter may not flush before the process exits. Add a shutdown call to force a final flush:
```python
from opentelemetry import trace

# Shut down the tracer provider to flush any spans still buffered
# by the batch processor before the process exits
provider = trace.get_tracer_provider()
if hasattr(provider, "shutdown"):
    provider.shutdown()
```
If you are not using an OTel Collector, export traces and logs directly from your application to Bronto by setting the environment variables to point at the Bronto ingestion endpoints:
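A sketch of the direct-export setup is below. The endpoint URL and header name are placeholders — use the ingestion endpoint and API key from your Bronto account:

```shell
# Placeholders: substitute your real Bronto ingestion endpoint and API key
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_ENDPOINT="https://<your-bronto-ingestion-endpoint>"
export OTEL_EXPORTER_OTLP_HEADERS="x-bronto-api-key=<YOUR_API_KEY>"

opentelemetry-instrument python app.py
```

With no Collector in the path, the SDK's OTLP exporter sends spans and logs straight from the application process, so the shutdown-flush advice above matters even more for short-lived scripts.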