
When to Use ADOT

ADOT is a good fit when you are running containerised workloads on ECS or EKS and want to:
  • Send application logs and traces to Bronto in a single pipeline
  • Avoid CloudWatch log ingestion fees entirely
  • Use an AWS-supported, pre-built distribution of the OpenTelemetry Collector without managing it yourself
If you need full control over collector configuration — custom processors, fan-out to multiple backends, or advanced transformation — consider the Self-Managed OTel Collector instead.

Supported AWS Services

ADOT collects application logs and traces from containerised workloads on:
| Service | Data types |
| --- | --- |
| Amazon ECS (EC2 launch type) | Application logs and traces (sidecar) |
| AWS Fargate (ECS) | Application logs and traces (sidecar) |
| Amazon EKS (EC2 node groups) | Application logs and traces (DaemonSet) |
| Amazon EKS (Fargate) | Application logs and traces (sidecar) |
For Lambda traces, use the ADOT Lambda Layer. For non-containerised AWS services, see the overview.

What is ADOT?

The AWS Distro for OpenTelemetry (ADOT) is an AWS-supported build of the OpenTelemetry Collector. It is available as a Docker image and can be deployed as a sidecar container (ECS) or a DaemonSet (EKS). It collects logs and traces from your application via OTLP and forwards them to any OTLP-compatible backend, including Bronto. ADOT itself is free; you pay only for the compute running the container.

Bronto Ingestion Endpoints

| Signal | EU Region | US Region |
| --- | --- | --- |
| Logs | https://ingestion.eu.bronto.io/v1/logs | https://ingestion.us.bronto.io/v1/logs |
| Traces | https://ingestion.eu.bronto.io/v1/traces | https://ingestion.us.bronto.io/v1/traces |
The /v1/logs and /v1/traces endpoints accept OTLP protobuf only. The OTel Collector (including ADOT) handles protobuf serialisation automatically — you do not need to configure this manually. Do not send plain JSON or HTTP requests directly to these paths.
All requests require the header:
x-bronto-api-key: <YOUR_API_KEY>
See API Keys for how to generate a key.

Setup on ECS (Sidecar)

Add the ADOT container as a sidecar to your ECS task definition. Your application sends telemetry to the sidecar over OTLP/gRPC (localhost:4317) or OTLP/HTTP (localhost:4318), and the sidecar forwards it to Bronto.

ADOT Collector Configuration

Create a collector config file and make it available to the container (via S3, SSM Parameter Store, or a custom image).
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:

exporters:
  otlphttp/bronto:
    logs_endpoint: "https://ingestion.<REGION>.bronto.io/v1/logs"
    traces_endpoint: "https://ingestion.<REGION>.bronto.io/v1/traces"
    compression: none
    headers:
      x-bronto-api-key: <YOUR_API_KEY>

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/bronto]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/bronto]
Replace <REGION> with eu or us and <YOUR_API_KEY> with your Bronto API key.

ECS Task Definition (sidecar container)

{
  "name": "adot-collector",
  "image": "public.ecr.aws/aws-observability/aws-otel-collector:latest",
  "essential": false,
  "command": ["--config", "/etc/otel/config.yaml"],
  "portMappings": [
    { "containerPort": 4317, "protocol": "tcp" },
    { "containerPort": 4318, "protocol": "tcp" }
  ]
}
Your application container should export to http://localhost:4317 (gRPC) or http://localhost:4318 (HTTP).
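For example, the application container's task-definition entry can set the standard OTel SDK environment variables so the SDK exports to the sidecar without code changes. A sketch, where the container name and image are illustrative placeholders:

```json
{
  "name": "my-app",
  "image": "<YOUR_APP_IMAGE>",
  "essential": true,
  "environment": [
    { "name": "OTEL_EXPORTER_OTLP_ENDPOINT", "value": "http://localhost:4318" },
    { "name": "OTEL_EXPORTER_OTLP_PROTOCOL", "value": "http/protobuf" },
    { "name": "OTEL_SERVICE_NAME", "value": "my-app" }
  ]
}
```

Because both containers share the task's network namespace (awsvpc mode) or localhost (Fargate), `localhost` resolves to the sidecar.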

Setup on EKS (DaemonSet)

Deploy ADOT as a DaemonSet so one collector pod runs per node. Application pods export telemetry to the node’s collector via the node IP or a local service.
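Application pods can discover the node-local collector through the Kubernetes downward API, assuming the collector's ports are exposed on the node via hostPort or hostNetwork. A sketch of the pod-side environment, using the standard OTel SDK variable names:

```yaml
env:
  - name: NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://$(NODE_IP):4318"
```

The `$(NODE_IP)` reference relies on Kubernetes dependent environment variable expansion, so `NODE_IP` must be declared before the variable that uses it.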
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: adot-collector
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: adot-collector
  template:
    metadata:
      labels:
        app: adot-collector
    spec:
      containers:
        - name: adot-collector
          image: public.ecr.aws/aws-observability/aws-otel-collector:latest
          args: ["--config", "/etc/otel/config.yaml"]
          ports:
            - containerPort: 4317
              hostPort: 4317
            - containerPort: 4318
              hostPort: 4318
          volumeMounts:
            - name: otel-config
              mountPath: /etc/otel
      volumes:
        - name: otel-config
          configMap:
            name: adot-config
Store the collector configuration above in a ConfigMap named adot-config.
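One way to do this is a ConfigMap manifest; the data key must match the path passed in the container args (`/etc/otel/config.yaml`, mounted from `/etc/otel`). A sketch:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: adot-config
  namespace: monitoring
data:
  config.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
    # ...remainder of the collector configuration shown in the ECS section
```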

Data Organization

ADOT uses OpenTelemetry resource attributes for routing. Bronto maps service.name to a Dataset and service.namespace to a Collection — see Data Organization for how datasets, collections, and tags work. Set these with the resource processor in your collector config:
processors:
  resource:
    attributes:
      - key: service.name
        value: <YOUR_DATASET_NAME>
        action: upsert
      - key: service.namespace
        value: <YOUR_COLLECTION_NAME>
        action: upsert
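Declaring the resource processor is not enough on its own; like any processor, it only takes effect once it is listed in each pipeline. For example, the logs pipeline from the collector configuration above would become:

```yaml
service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [resource, batch]
      exporters: [otlphttp/bronto]
```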
You can also override routing per-exporter using HTTP headers, which is useful when one collector ships to multiple datasets:
| Header | Description |
| --- | --- |
| x-bronto-dataset | Overrides service.name |
| x-bronto-collection | Overrides service.namespace |
| x-bronto-tags | Comma-separated tags to attach to events |
exporters:
  otlphttp/bronto:
    logs_endpoint: https://ingestion.<REGION>.bronto.io/v1/logs
    headers:
      x-bronto-api-key: <YOUR_API_KEY>
      x-bronto-dataset: <YOUR_DATASET_NAME>
      x-bronto-collection: <YOUR_COLLECTION_NAME>
      x-bronto-tags: env=prod,team=platform

Cost Notes

  • No CloudWatch ingestion fees — logs and traces go directly from ADOT to Bronto over OTLP/HTTP.
  • Compute cost only — you pay for the ECS task or EKS pod running the collector, which is typically small relative to CloudWatch PUT and ingestion charges at scale.
  • Compare with the CloudWatch Log Forwarder if your services already write to CloudWatch and the migration cost outweighs the savings.

For assistance, contact support@bronto.io.