This page covers instrumenting a Python application with the OpenTelemetry SDK to send logs and traces to Bronto over OTLP/HTTP via a local OTel Collector. Logs are bridged from Python’s standard logging module — no log statement changes needed. Traces are added by configuring a tracer provider alongside the log provider.
If you don’t have a Collector and want to export directly from your application to Bronto, see Direct export to Bronto at the bottom of this page.

Prerequisites

Install dependencies

pip install \
  opentelemetry-api \
  opentelemetry-sdk \
  opentelemetry-exporter-otlp-proto-http
Package                                   Purpose
opentelemetry-api                         Core OTel API
opentelemetry-sdk                         SDK — LoggerProvider, LoggingHandler, processors
opentelemetry-exporter-otlp-proto-http    OTLP/HTTP exporter

Configure the log bridge

The OTel Python SDK ships a LoggingHandler that bridges Python’s standard logging module into OTel. Attach it to the root logger (or any specific logger) and all log records flow through the OTel pipeline.
import logging
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor

logger_provider = LoggerProvider()

# Attach to the root logger — all loggers in the process inherit this handler
handler = LoggingHandler(level=logging.NOTSET, logger_provider=logger_provider)
logging.getLogger().addHandler(handler)
logging.getLogger().setLevel(logging.INFO)
To instrument only part of your application, attach the handler to a named logger instead, e.g. logging.getLogger("my_app"); child loggers such as my_app.db inherit the handler.
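Scoping works because stdlib logging propagates records up the logger hierarchy: a handler on "my_app" also receives records from "my_app.db", while unrelated loggers are untouched. A stdlib-only sketch of that behavior (CaptureHandler and the my_app/other_lib names are illustrative, not part of the OTel SDK):

```python
import logging

records = []

class CaptureHandler(logging.Handler):
    def emit(self, record):
        records.append(record.name)

# Attach the handler to "my_app" only; the root logger stays untouched.
app_logger = logging.getLogger("my_app")
app_logger.setLevel(logging.INFO)
app_logger.addHandler(CaptureHandler())

logging.getLogger("my_app.db").info("query ran")     # child logger: captured
logging.getLogger("other_lib").info("not captured")  # outside the "my_app" tree

print(records)  # ['my_app.db']
```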

Configure the OTLP exporter

Wire a BatchLogRecordProcessor and OTLPLogExporter into the LoggerProvider. By default the OTel Collector listens for OTLP/HTTP on port 4318 — no authentication is needed here since the Collector handles the connection to Bronto.
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter

exporter = OTLPLogExporter(
    endpoint="http://localhost:4318/v1/logs",
)

logger_provider.add_log_record_processor(BatchLogRecordProcessor(exporter))
If your Collector runs on a different host or port, update the endpoint accordingly.
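One common pattern is to derive the endpoint from the environment so the same code runs against a local or remote Collector. COLLECTOR_HOST and COLLECTOR_PORT below are hypothetical variable names, not read by the OTel SDK itself (the SDK also honors the standard OTEL_EXPORTER_OTLP_ENDPOINT variables):

```python
import os

# Fall back to the Collector's default OTLP/HTTP address when unset.
host = os.environ.get("COLLECTOR_HOST", "localhost")
port = os.environ.get("COLLECTOR_PORT", "4318")
endpoint = f"http://{host}:{port}/v1/logs"
print(endpoint)
```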

Set resource attributes

Resource attributes are attached to every log record exported from this process. Two attributes drive how Bronto organises incoming logs:
OTel attribute       Bronto concept    Description
service.name         Dataset           Groups logs from one service
service.namespace    Collection        Groups related services or a team's services
Pass a Resource when constructing the LoggerProvider:
from opentelemetry.sdk.resources import Resource

resource = Resource.create({
    "service.name": "my-service",           # → Bronto dataset
    "service.namespace": "my-team",         # → Bronto collection
    "deployment.environment": "production",
})

logger_provider = LoggerProvider(resource=resource)

Complete example

The snippet below puts all the pieces together. Copy it into your application’s startup code and call it once before your first log statement.
configure_logging.py
import logging
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter

def configure_otel_logging():
    resource = Resource.create({
        "service.name": "my-service",
        "service.namespace": "my-team",
        "deployment.environment": "production",
    })

    exporter = OTLPLogExporter(
        endpoint="http://localhost:4318/v1/logs",
    )

    logger_provider = LoggerProvider(resource=resource)
    logger_provider.add_log_record_processor(BatchLogRecordProcessor(exporter))

    handler = LoggingHandler(level=logging.NOTSET, logger_provider=logger_provider)
    logging.getLogger().addHandler(handler)
    logging.getLogger().setLevel(logging.INFO)


if __name__ == "__main__":
    configure_otel_logging()

    logger = logging.getLogger(__name__)
    logger.info("Application started")
    logger.warning("Low disk space", extra={"disk_free_gb": 2.1})
    logger.error("Database connection failed", exc_info=True)
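The extra mapping on the warning call deserves a note: stdlib logging copies those keys onto the LogRecord, which is what the OTel LoggingHandler reads when building log attributes. A stdlib-only sketch of where the fields end up (CaptureHandler and extra_demo are illustrative names):

```python
import logging

captured = {}

class CaptureHandler(logging.Handler):
    def emit(self, record):
        # `extra` keys appear as plain attributes on the LogRecord.
        captured["disk_free_gb"] = record.disk_free_gb

demo = logging.getLogger("extra_demo")
demo.setLevel(logging.INFO)
demo.addHandler(CaptureHandler())
demo.warning("Low disk space", extra={"disk_free_gb": 2.1})

print(captured)  # {'disk_free_gb': 2.1}
```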

Verify log collection

After running your application, open the Search page in Bronto. Filter by the dataset name you set in service.name — your log records should appear within a few seconds. If no logs appear, check:
  • The OTel Collector is running and reachable at the configured endpoint.
  • The Collector’s configuration includes a logs pipeline with an otlp receiver and the Bronto exporter — see Connect OpenTelemetry to Bronto.
  • BatchLogRecordProcessor exports on a background thread — make sure your process does not exit before the first flush. For short-lived scripts, replace it with SimpleLogRecordProcessor to export synchronously.
Synchronous export for scripts
from opentelemetry.sdk._logs.export import SimpleLogRecordProcessor

# Replace BatchLogRecordProcessor with SimpleLogRecordProcessor
logger_provider.add_log_record_processor(SimpleLogRecordProcessor(exporter))

Traces

For Django, Flask, FastAPI, SQLAlchemy, requests, and other popular libraries, the opentelemetry-instrument CLI wrapper auto-instruments your app at startup with no code changes — install opentelemetry-instrumentation and opentelemetry-instrumentation-<framework> (e.g. opentelemetry-instrumentation-flask), then run your app via opentelemetry-instrument python app.py. See Python auto-instrumentation for setup and the full list of supported libraries.

Install tracing dependencies

No additional packages are needed — opentelemetry-sdk and opentelemetry-exporter-otlp-proto-http already include tracing support.

Configure the tracer provider

Create a TracerProvider using the same Resource you built for logging, then register it as the global tracer provider.
configure_tracing.py
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

def configure_otel_tracing(resource):
    exporter = OTLPSpanExporter(
        endpoint="http://localhost:4318/v1/traces",
    )
    provider = TracerProvider(resource=resource)
    provider.add_span_processor(BatchSpanProcessor(exporter))
    trace.set_tracer_provider(provider)
Sharing the Resource ensures service.name and service.namespace are identical on both logs and traces.

Creating spans

Get a tracer from the global provider and wrap operations in spans:
tracer = trace.get_tracer("my-service")

with tracer.start_as_current_span("process-payment") as span:
    span.set_attribute("payment.amount", 99.99)
    span.set_attribute("payment.currency", "USD")
    logger.info("Payment processed")  # trace_id and span_id injected automatically
Any log emitted inside an active span automatically receives the trace_id and span_id — no manual propagation needed.

Complete example

app.py
import logging
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

resource = Resource.create({
    "service.name": "my-service",
    "service.namespace": "my-team",
    "deployment.environment": "production",
})

# Logs
log_exporter = OTLPLogExporter(endpoint="http://localhost:4318/v1/logs")
logger_provider = LoggerProvider(resource=resource)
logger_provider.add_log_record_processor(BatchLogRecordProcessor(log_exporter))
handler = LoggingHandler(level=logging.NOTSET, logger_provider=logger_provider)
logging.getLogger().addHandler(handler)
logging.getLogger().setLevel(logging.INFO)

# Traces
trace_exporter = OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces")
tracer_provider = TracerProvider(resource=resource)
tracer_provider.add_span_processor(BatchSpanProcessor(trace_exporter))
trace.set_tracer_provider(tracer_provider)

logger = logging.getLogger(__name__)
tracer = trace.get_tracer("my-service")

with tracer.start_as_current_span("handle-request") as span:
    span.set_attribute("http.method", "GET")
    logger.info("Handling request")  # trace_id + span_id attached automatically

Direct export to Bronto

If you are not using an OTel Collector, export directly to Bronto by replacing the exporter endpoints and adding your API key:
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

log_exporter = OTLPLogExporter(
    endpoint="https://ingestion.eu.bronto.io/v1/logs",  # or ingestion.us.bronto.io
    headers={"x-bronto-api-key": "<YOUR_API_KEY>"},
)

trace_exporter = OTLPSpanExporter(
    endpoint="https://ingestion.eu.bronto.io/v1/traces",  # or ingestion.us.bronto.io
    headers={"x-bronto-api-key": "<YOUR_API_KEY>"},
)
Region    Logs endpoint                             Traces endpoint
EU        https://ingestion.eu.bronto.io/v1/logs    https://ingestion.eu.bronto.io/v1/traces
US        https://ingestion.us.bronto.io/v1/logs    https://ingestion.us.bronto.io/v1/traces
See API Keys for how to create a key with ingestion permissions. No other changes to the setup are required.
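Since the regional endpoints follow one pattern, a small helper can build them and keep the API key out of source code. This is a sketch: bronto_endpoint and the BRONTO_API_KEY environment variable are illustrative names, not part of any SDK.

```python
import os

def bronto_endpoint(region: str, signal: str) -> str:
    """Build a Bronto ingestion endpoint, e.g. region='eu', signal='logs'."""
    return f"https://ingestion.{region}.bronto.io/v1/{signal}"

# Read the key from the environment rather than hardcoding it in source.
headers = {"x-bronto-api-key": os.environ.get("BRONTO_API_KEY", "")}

print(bronto_endpoint("eu", "logs"))    # https://ingestion.eu.bronto.io/v1/logs
print(bronto_endpoint("us", "traces"))  # https://ingestion.us.bronto.io/v1/traces
```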