This page covers trace and log instrumentation for LangChain applications using the OpenTelemetry Python SDK. The opentelemetry-instrument wrapper automatically patches LangChain with zero code changes, exporting spans and logs to a local OTel Collector over OTLP/gRPC, which forwards them to Bronto.
If you don’t have a Collector and want to export directly from your application to Bronto, see Direct export to Bronto at the bottom of this page.

Prerequisites

  • Python 3.8 or later
  • pip
  • An LLM provider API key (e.g. OpenAI or Anthropic)
  • A running OTel Collector configured to forward traces and logs to Bronto — see Connect OpenTelemetry to Bronto

Install the OTel Collector (macOS)

Download and unpack the Collector binary for your chip (the commands below target Apple Silicon; on Intel Macs, substitute darwin_amd64 in both commands):
curl --proto '=https' --tlsv1.2 -fOL https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.147.0/otelcol_0.147.0_darwin_arm64.tar.gz
tar -xvf otelcol_0.147.0_darwin_arm64.tar.gz
On macOS, Gatekeeper may block the binary on first run. If so, go to System Settings → Privacy & Security and click Allow Anyway.

Configure the OTel Collector

Bronto requires a dedicated exporter for traces (otlphttp/brontotraces) separate from the logs exporter. Create an otel-config.yaml file in the same directory as the binary:
otel-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:

exporters:
  otlphttp/brontotraces:
    traces_endpoint: "https://ingestion.<REGION>.bronto.io/v1/traces"
    compression: none
    headers:
      x-bronto-api-key: <YOUR_API_KEY>

  otlphttp/bronto:
    logs_endpoint: "https://ingestion.<REGION>.bronto.io/v1/logs"
    compression: none
    headers:
      x-bronto-api-key: <YOUR_API_KEY>
      x-bronto-service-name: langchain-sample-app
      x-bronto-collection: local-dev

  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/brontotraces, debug]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/bronto, debug]
Replace <REGION> with your Bronto region (e.g. eu or us) and <YOUR_API_KEY> with a key from Account Management → API Keys.
  • x-bronto-api-key: your Bronto ingestion API key
  • x-bronto-dataset: maps to the dataset name in Bronto
  • x-bronto-collection: groups related services, e.g. local-dev, production
Start the Collector and leave it running in its own terminal:
./otelcol --config otel-config.yaml

Install dependencies

Create a virtual environment and install the OpenTelemetry distribution alongside your LangChain provider package:
python3 -m venv .venv
source .venv/bin/activate

pip install opentelemetry-distro opentelemetry-exporter-otlp
pip install opentelemetry-instrumentation-langchain

# Install your LLM provider
pip install langchain-openai      # or langchain-anthropic

# Auto-install instrumentation packages for all detected libraries
opentelemetry-bootstrap -a install
  • opentelemetry-distro: OTel distribution — includes the SDK and the auto-instrumentation entry point
  • opentelemetry-exporter-otlp: OTLP gRPC/HTTP exporter
  • opentelemetry-instrumentation-langchain: auto-patches LangChain with OTel spans
Always run opentelemetry-bootstrap -a install after adding new packages. It detects installed libraries and installs the matching OTel instrumentation packages automatically.
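
To confirm the instrumentation packages actually landed in your virtual environment, you can run a quick stdlib-only check. The helper below is illustrative, not part of any OTel tooling:

```python
from importlib import metadata

def check_packages(names):
    """Map each distribution name to its installed version, or None if missing."""
    versions = {}
    for name in names:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = None
    return versions

if __name__ == "__main__":
    required = [
        "opentelemetry-distro",
        "opentelemetry-exporter-otlp",
        "opentelemetry-instrumentation-langchain",
    ]
    for name, version in check_packages(required).items():
        print(f"{name}: {version or 'MISSING - rerun opentelemetry-bootstrap -a install'}")
```

If any package prints as missing, rerun the pip and bootstrap steps above inside the activated virtual environment.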

Create a LangChain application

The example below creates a simple prompt → LLM → parser chain. No OTel imports are needed — the opentelemetry-instrument wrapper handles everything at runtime.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{question}")
])

llm = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

chain = prompt | llm | parser

response = chain.invoke({"question": "What is OpenTelemetry in one sentence?"})
print(response)

Run with auto-instrumentation

Set the required environment variables and run your app through the opentelemetry-instrument wrapper. This requires no changes to your application code:
export OTEL_SERVICE_NAME="langchain-sample-app"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"
export OTEL_TRACES_EXPORTER="otlp"
export OTEL_LOGS_EXPORTER="otlp"
export OPENAI_API_KEY="<YOUR_OPENAI_API_KEY>"   # or ANTHROPIC_API_KEY

opentelemetry-instrument python app.py
  • OTEL_SERVICE_NAME=langchain-sample-app: identifies your service in traces
  • OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317: local Collector gRPC endpoint
  • OTEL_EXPORTER_OTLP_PROTOCOL=grpc: transport protocol
  • OTEL_TRACES_EXPORTER=otlp: enables trace export
  • OTEL_LOGS_EXPORTER=otlp: enables log export
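
If you launch the app from Python (for example, from a test harness or wrapper script) rather than a shell, the same variables can be set with os.environ before the SDK initializes. A minimal sketch, mirroring the shell exports above:

```python
import os

# Mirror of the shell exports above; setdefault leaves any value
# already exported in the surrounding shell untouched.
otel_env = {
    "OTEL_SERVICE_NAME": "langchain-sample-app",
    "OTEL_EXPORTER_OTLP_ENDPOINT": "http://localhost:4317",
    "OTEL_EXPORTER_OTLP_PROTOCOL": "grpc",
    "OTEL_TRACES_EXPORTER": "otlp",
    "OTEL_LOGS_EXPORTER": "otlp",
}
for key, value in otel_env.items():
    os.environ.setdefault(key, value)
```

Note these must be set before the OpenTelemetry SDK starts, which is why the opentelemetry-instrument wrapper (which reads them at process startup) is the simpler option.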

What gets captured

The LangChain instrumentation automatically captures spans for every step in the chain, following the OpenTelemetry Semantic Conventions for GenAI:
  • ChatOpenAI.invoke / ChatAnthropic.invoke: LLM call duration, model name, token counts
  • ChatPromptTemplate.invoke: prompt formatting step
  • RunnableSequence.invoke: full chain execution
  • StrOutputParser.parse: output parsing step
Captured attributes include gen_ai.usage.input_tokens, gen_ai.usage.output_tokens, gen_ai.request.model, and the full prompt and completion content for debugging.
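
As a sketch of what those attributes look like on an LLM span (the attribute names follow the GenAI semantic conventions; the values and the helper function are illustrative):

```python
# Illustrative attribute set for a single ChatOpenAI.invoke span.
example_attributes = {
    "gen_ai.request.model": "gpt-4o-mini",
    "gen_ai.usage.input_tokens": 27,
    "gen_ai.usage.output_tokens": 42,
}

def total_tokens(attrs):
    """Total token usage for one LLM call, read from its span attributes."""
    return (attrs.get("gen_ai.usage.input_tokens", 0)
            + attrs.get("gen_ai.usage.output_tokens", 0))

print(total_tokens(example_attributes))  # 69
```

Because the names are standardized, queries you write against these attributes in Bronto work the same way for OpenAI, Anthropic, or any other instrumented provider.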

Add custom spans (optional)

Auto-instrumentation does not prevent you from adding your own spans. Reference the global tracer to wrap business logic with custom context:
from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode

tracer = trace.get_tracer(__name__)

def run_chain(question: str):
    with tracer.start_as_current_span("my_app.run_chain") as span:
        span.set_attribute("app.question", question)
        try:
            result = chain.invoke({"question": question})
            span.set_attribute("app.answer_length", len(result))
            span.set_status(Status(StatusCode.OK))
            return result
        except Exception as e:
            span.set_status(Status(StatusCode.ERROR, str(e)))
            span.record_exception(e)
            raise

Verify in Bronto

After running your instrumented application, open the Search page in Bronto. Filter by the service name you set in OTEL_SERVICE_NAME — traces and logs should appear within a few seconds. If no data appears, check:
  • The OTel Collector is running and reachable on ports 4317 (gRPC) and 4318 (HTTP).
  • Your otel-config.yaml has separate exporters for traces (otlphttp/brontotraces) and logs (otlphttp/bronto) — both must be wired into their respective pipelines.
  • opentelemetry-bootstrap -a install was run after installing LangChain — without this step, the LangChain instrumentation package may not be installed.
  • For short-lived scripts, the batch exporter may not flush before the process exits. Add a shutdown call to force a final flush:
from opentelemetry import trace

provider = trace.get_tracer_provider()
if hasattr(provider, "shutdown"):
    provider.shutdown()

Direct export to Bronto

If you are not using an OTel Collector, export traces and logs directly from your application to Bronto by setting the environment variables to point at the Bronto ingestion endpoints:
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://ingestion.eu.bronto.io/v1/traces"
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="https://ingestion.eu.bronto.io/v1/logs"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
export OTEL_EXPORTER_OTLP_HEADERS="x-bronto-api-key=<YOUR_API_KEY>"

opentelemetry-instrument python app.py
  • EU traces: https://ingestion.eu.bronto.io/v1/traces
  • EU logs: https://ingestion.eu.bronto.io/v1/logs
  • US traces: https://ingestion.us.bronto.io/v1/traces
  • US logs: https://ingestion.us.bronto.io/v1/logs
See API Keys for how to create a key with ingestion permissions. No other changes to the setup are required.
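
The regional endpoints follow a single pattern, so a small helper (hypothetical, shown for illustration only) can derive both URLs from the region name:

```python
def bronto_endpoints(region: str) -> dict:
    """Build the Bronto OTLP ingestion endpoint variables for a region ('eu' or 'us')."""
    base = f"https://ingestion.{region}.bronto.io"
    return {
        "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT": f"{base}/v1/traces",
        "OTEL_EXPORTER_OTLP_LOGS_ENDPOINT": f"{base}/v1/logs",
    }

print(bronto_endpoints("eu")["OTEL_EXPORTER_OTLP_TRACES_ENDPOINT"])
# https://ingestion.eu.bronto.io/v1/traces
```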