

When to Use a Self-Managed Collector

A self-managed OpenTelemetry Collector is a good fit when you need:
  • Full configuration control — custom processors, attribute transformation, or sampling rules
  • Fan-out — forwarding the same telemetry to multiple backends simultaneously
  • Existing OTel infrastructure — you already operate a collector fleet and want to add Bronto as an exporter
  • Advanced routing — sending different log types or services to different Bronto datasets or collections
If you want a simpler, AWS-managed setup without manual configuration, consider ADOT instead.
Application logs and traces can be sent directly to Bronto’s ingestion endpoints, but routing through a collector is strongly recommended. The collector handles retries on failure and batches and compresses payloads, improving reliability and reducing cost.
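The retry and buffering behaviour mentioned above can also be tuned explicitly. The otlphttp exporter supports the collector's standard retry and queue settings; the sketch below shows them with illustrative values (tune for your own volume and outage tolerance):

```yaml
exporters:
  otlphttp/bronto:
    logs_endpoint: "https://ingestion.<REGION>.bronto.io/v1/logs"
    headers:
      x-bronto-api-key: <YOUR_API_KEY>
    # Retry failed exports with exponential backoff.
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s
    # Buffer telemetry in memory while the endpoint is unreachable.
    sending_queue:
      enabled: true
      num_consumers: 10
      queue_size: 1000
```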

Supported AWS Services

A self-managed Collector can run anywhere you control compute, and supports any source that has an OpenTelemetry receiver:
| Service / Source | Data types |
| --- | --- |
| Amazon EC2 | Application logs, traces, metrics, and host metrics |
| Amazon ECS / Fargate | Application logs and traces (sidecar or dedicated task) |
| Amazon EKS | Application logs and traces (DaemonSet or Deployment) |
| Any AWS workload reachable from the Collector | Logs / traces via OTLP, file, Kafka, Kinesis, Pub/Sub, and other OTel receivers |
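As an example of a non-OTLP source, tailing an application log file on any of these hosts only requires adding the contrib filelog receiver to your config (the path below is illustrative):

```yaml
receivers:
  filelog:
    include: [/var/log/myapp/*.log]  # illustrative path; adjust to your application
    start_at: end                    # only tail new entries after startup
```

Remember to also list the receiver in the relevant pipeline under `service.pipelines`.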
Use this method when ADOT or the Bronto-managed forwarders don’t cover your scenario. For Lambda traces specifically, the ADOT Lambda Layer is simpler.

Bronto Ingestion Endpoints

| Signal | EU Region | US Region |
| --- | --- | --- |
| Logs | https://ingestion.eu.bronto.io/v1/logs | https://ingestion.us.bronto.io/v1/logs |
| Traces | https://ingestion.eu.bronto.io/v1/traces | https://ingestion.us.bronto.io/v1/traces |
The /v1/logs and /v1/traces endpoints accept OTLP protobuf only. The OTel Collector serialises to protobuf automatically, so you do not need to configure this yourself. Do not send plain JSON payloads or hand-rolled HTTP requests directly to these paths.
All requests require the header:
x-bronto-api-key: <YOUR_API_KEY>
See API Keys for how to generate a key.
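To avoid hard-coding the key in a config file checked into source control, you can reference an environment variable instead, assuming a collector release that supports `${env:}` expansion (recent versions do):

```yaml
exporters:
  otlphttp/bronto:
    headers:
      # Expanded at startup from the BRONTO_API_KEY environment variable
      x-bronto-api-key: ${env:BRONTO_API_KEY}
```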

Collector Configuration

The core Bronto exporter configuration is the same regardless of how you deploy the collector on AWS. For full installation instructions, platform-specific configuration, and parameter reference, see the OpenTelemetry Collector guide. The minimal configuration to export both logs and traces to Bronto:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:

exporters:
  otlphttp/bronto:
    logs_endpoint: "https://ingestion.<REGION>.bronto.io/v1/logs"
    traces_endpoint: "https://ingestion.<REGION>.bronto.io/v1/traces"
    compression: none
    headers:
      x-bronto-api-key: <YOUR_API_KEY>

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/bronto]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/bronto]
Replace <REGION> with eu or us and <YOUR_API_KEY> with your Bronto API key.

Deployment Options on AWS

EC2

Install the OpenTelemetry Collector directly on your EC2 instances following the official installation guide. Place your config at /etc/otel/config.yaml and run the collector as a systemd service.
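Note that the official deb/rpm packages ship their own unit file and default config path; if you instead install from a tarball, a minimal unit might look like the following sketch (binary path and user are assumptions, adjust to your install):

```ini
# /etc/systemd/system/otelcol.service (sketch)
[Unit]
Description=OpenTelemetry Collector
After=network.target

[Service]
ExecStart=/usr/local/bin/otelcol-contrib --config=/etc/otel/config.yaml
Restart=on-failure
User=otel

[Install]
WantedBy=multi-user.target
```

Reload systemd and enable the service with `systemctl daemon-reload && systemctl enable --now otelcol`.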

ECS (Sidecar or Daemon)

Run the collector as a sidecar container alongside each application task, or as a daemon service on each ECS instance. Use the standard otel/opentelemetry-collector-contrib image and mount your config via S3 or SSM Parameter Store.
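One way to inject the config via SSM Parameter Store is to surface the parameter as a container secret and point the collector at it with its `env:` config provider. A container-definition sketch (parameter name, env var, and ARN placeholders are illustrative):

```json
{
  "name": "otel-collector",
  "image": "otel/opentelemetry-collector-contrib:latest",
  "command": ["--config", "env:OTEL_CONFIG_CONTENT"],
  "secrets": [
    {
      "name": "OTEL_CONFIG_CONTENT",
      "valueFrom": "arn:aws:ssm:<region>:<account>:parameter/otel-collector-config"
    }
  ],
  "portMappings": [
    { "containerPort": 4317 },
    { "containerPort": 4318 }
  ]
}
```

ECS resolves the SSM parameter into the environment variable at task start, and the collector reads its full YAML config from that variable.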

EKS (DaemonSet or Deployment)

Deploy as a DaemonSet (one collector per node) for node-level log collection, or as a Deployment behind a ClusterIP service for centralised collection. Store your config in a ConfigMap and reference it as a volume mount.
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-config
  namespace: monitoring
data:
  config.yaml: |
    # paste your collector config here
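A DaemonSet that mounts that ConfigMap might look like the following sketch (labels and image tag are illustrative; pin a specific image version in production):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otel-collector
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: otel-collector
  template:
    metadata:
      labels:
        app: otel-collector
    spec:
      containers:
        - name: otel-collector
          image: otel/opentelemetry-collector-contrib:latest
          args: ["--config=/etc/otel/config.yaml"]
          ports:
            - containerPort: 4317  # OTLP gRPC
            - containerPort: 4318  # OTLP HTTP
          volumeMounts:
            - name: otel-config
              mountPath: /etc/otel
      volumes:
        - name: otel-config
          configMap:
            name: otel-config
```

The ConfigMap's `config.yaml` key is mounted as `/etc/otel/config.yaml`, matching the `--config` argument.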

Data Organization

Bronto maps service.name to a Dataset and service.namespace to a Collection — see Data Organization for how datasets, collections, and tags work. Set these as resource attributes in your SDK or collector config:
processors:
  resource:
    attributes:
      - key: service.name
        value: <YOUR_DATASET_NAME>
        action: upsert
      - key: service.namespace
        value: <YOUR_COLLECTION_NAME>
        action: upsert
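A processor only takes effect once it is listed in a pipeline. With the minimal config above, that means adding `resource` ahead of `batch`:

```yaml
service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [resource, batch]
      exporters: [otlphttp/bronto]
```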
You can also override routing per-exporter using HTTP headers, which is useful when one collector ships to multiple datasets:
| Header | Description |
| --- | --- |
| x-bronto-dataset | Overrides service.name |
| x-bronto-collection | Overrides service.namespace |
| x-bronto-tags | Comma-separated tags to attach to events |
exporters:
  otlphttp/bronto:
    logs_endpoint: https://ingestion.<REGION>.bronto.io/v1/logs
    headers:
      x-bronto-api-key: <YOUR_API_KEY>
      x-bronto-dataset: <YOUR_DATASET_NAME>
      x-bronto-collection: <YOUR_COLLECTION_NAME>
      x-bronto-tags: env=prod,team=platform
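To ship to multiple datasets from one collector, you can define one otlphttp exporter per dataset, each with its own headers, and attach each to its own pipeline. A sketch with hypothetical dataset names (note that as written, both pipelines receive every log from the shared otlp receiver; to split a single stream into subsets you would add filter processors or the routing connector):

```yaml
exporters:
  otlphttp/bronto-api:
    logs_endpoint: https://ingestion.<REGION>.bronto.io/v1/logs
    headers:
      x-bronto-api-key: <YOUR_API_KEY>
      x-bronto-dataset: api-logs      # hypothetical dataset name
  otlphttp/bronto-worker:
    logs_endpoint: https://ingestion.<REGION>.bronto.io/v1/logs
    headers:
      x-bronto-api-key: <YOUR_API_KEY>
      x-bronto-dataset: worker-logs   # hypothetical dataset name

service:
  pipelines:
    logs/api:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/bronto-api]
    logs/worker:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/bronto-worker]
```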
For Kubernetes, set x-bronto-collection to your cluster name (e.g. cluster1-prod-eu-west-1) to identify the source cluster in Bronto.
For assistance or questions, contact support@bronto.io.