
When to Use Fluent Bit on EKS

Fluent Bit as an EKS DaemonSet is a good fit when you want to:
  • Collect container logs from all pods on every node without modifying application code
  • Keep the setup simple — Fluent Bit is lighter and easier to configure than a full OTel Collector when you only need logs
  • Avoid CloudWatch log ingestion fees
If you also need traces from your Kubernetes workloads, use the Self-Managed OTel Collector or ADOT instead, as Fluent Bit does not handle trace data.

Supported AWS Services

Fluent Bit on EKS collects logs from containers running in:
Service                        Log type
Amazon EKS (EC2 node groups)   Pod stdout / stderr, enriched with Kubernetes metadata
Amazon EKS (Fargate)           Pod stdout / stderr (via the Fargate built-in log router)
For ECS container logs, use ECS FireLens. For non-Kubernetes AWS services, see the overview.

How it Works

A Fluent Bit DaemonSet runs one pod per node. Each pod reads container logs from the node’s /var/log/containers/ path, enriches them with Kubernetes metadata (pod name, namespace, labels), and forwards them to Bronto using JSON Lines over HTTPS. For the full Fluent Bit configuration reference and output options, see the Fluent Bit setup guide.
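To make the transport concrete, here is a minimal sketch of what a JSON Lines request body looks like. The record fields below are illustrative of Fluent Bit's kubernetes-enriched output, not a required Bronto schema:

```python
import json

# Two illustrative records shaped like Fluent Bit's kubernetes-enriched
# output (field names chosen for illustration, not a fixed schema).
records = [
    {"log": "GET /healthz 200",
     "kubernetes": {"namespace_name": "default", "pod_name": "web-7f9c"}},
    {"log": "connection reset by peer",
     "kubernetes": {"namespace_name": "default", "pod_name": "api-5d2b"}},
]

# JSON Lines (Fluent Bit's "Format json_lines"): one JSON object per line,
# separated by newlines -- no enclosing array, no trailing comma.
payload = "\n".join(json.dumps(record) for record in records)
print(payload)
```

Each line is an independent JSON object, which is what lets Fluent Bit stream and batch records without framing overhead.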

Bronto Ingestion Endpoint

Fluent Bit’s http output sends JSON Lines to the Bronto base endpoint (no path):
Region   Endpoint
EU       ingestion.eu.bronto.io
US       ingestion.us.bronto.io
This is different from the OTLP endpoints (/v1/logs, /v1/traces), which accept only protobuf and require an OTel-compatible agent. Fluent Bit’s http output uses JSON Lines and must target the base endpoint.
All requests require the header:
x-bronto-api-key: <YOUR_API_KEY>
See API Keys for how to generate a key.

Setup

Step 1 — Deploy Fluent Bit via Helm

The easiest way to deploy Fluent Bit on EKS is via the official Helm chart.
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update

helm install fluent-bit fluent/fluent-bit \
  --namespace monitoring \
  --create-namespace \
  --values values.yaml

Step 2 — Configure the Bronto output

Create a values.yaml file that configures Fluent Bit to tail container logs and forward them to Bronto:
config:
  inputs: |
    [INPUT]
        Name              tail
        Path              /var/log/containers/*.log
        multiline.parser  docker, cri
        Tag               kube.*
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On

  filters: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Merge_Log           On
        Keep_Log            Off
        K8S-Logging.Parser  On
        K8S-Logging.Exclude On

  outputs: |
    [OUTPUT]
        Name              http
        Match             kube.*
        Host              ingestion.<REGION>.bronto.io
        Port              443
        Format            json_lines
        Compress          gzip
        tls               On
        tls.verify        On
        Header            x-bronto-api-key <YOUR_API_KEY>
        Header            x-bronto-collection <YOUR_CLUSTER_NAME>

tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
  # Older clusters still use the deprecated "master" taint key:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
Replace <REGION> with eu or us, <YOUR_API_KEY> with your Bronto API key, and <YOUR_CLUSTER_NAME> with a name that identifies this cluster in Bronto (e.g. cluster1-prod-eu-west-1).

Step 3 — Verify

After deployment, verify the DaemonSet pods are running:
kubectl get pods -n monitoring -l app.kubernetes.io/name=fluent-bit
All pods should show Running status. Logs should then appear in Bronto Search, where you can filter by collection to isolate logs from this cluster.

Data Organization

Set the recommended headers in your Fluent Bit [OUTPUT] block to control how data lands in Bronto — see Data Organization for how datasets, collections, and tags work.
Header               Description
x-bronto-dataset     Dataset to ingest into
x-bronto-collection  Collection name (typically your cluster name)
x-bronto-tags        Comma-separated tags to attach to events
[OUTPUT]
    Name              http
    Match             kube.*
    Host              ingestion.<REGION>.bronto.io
    Port              443
    Format            json_lines
    Compress          gzip
    tls               On
    tls.verify        On
    Header            x-bronto-api-key <YOUR_API_KEY>
    Header            x-bronto-dataset <YOUR_DATASET_NAME>
    Header            x-bronto-collection <YOUR_CLUSTER_NAME>
    Header            x-bronto-tags env=prod,team=platform
By default, Bronto can also infer the dataset from Kubernetes metadata (container name, pod labels). To route logs from specific namespaces or pods to different datasets, add a rewrite_tag filter and configure multiple [OUTPUT] blocks with different Match patterns and x-bronto-dataset headers. See the Fluent Bit setup guide for full routing examples.
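As a sketch of that routing pattern (the namespace and dataset names here are hypothetical examples), a rewrite_tag filter re-emits matching records under a new tag, which a dedicated [OUTPUT] block then matches:

```ini
[FILTER]
    Name          rewrite_tag
    Match         kube.*
    # If the record's namespace is "payments", re-emit it under a new tag.
    # The final "false" drops the record from the original kube.* tag,
    # so it is not also sent by the default [OUTPUT] block.
    Rule          $kubernetes['namespace_name'] ^payments$ payments.$TAG false

[OUTPUT]
    Name              http
    Match             payments.*
    Host              ingestion.<REGION>.bronto.io
    Port              443
    Format            json_lines
    Compress          gzip
    tls               On
    tls.verify        On
    Header            x-bronto-api-key <YOUR_API_KEY>
    Header            x-bronto-dataset payments-logs
    Header            x-bronto-collection <YOUR_CLUSTER_NAME>
```

Repeat the filter/output pair for each namespace or dataset you want to route separately.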

Cost Notes

  • No CloudWatch ingestion fees — logs go directly from Fluent Bit to Bronto.
  • Fluent Bit is lightweight; the DaemonSet pods typically consume less than 50 MB of memory per node.

EKS Fargate

EKS Fargate works differently from EKS on EC2 node groups. There are no customer-managed nodes, and the node filesystem isn’t exposed to pods, so the DaemonSet pattern described above doesn’t apply. AWS instead provides a built-in Fluent Bit log router, configured via a ConfigMap in the aws-observability namespace, that captures stdout / stderr from every Fargate pod cluster-wide.

The key constraint: the Fargate built-in Fluent Bit accepts only a fixed allowlist of output plugins:
  • cloudwatch / cloudwatch_logs
  • firehose / kinesis_firehose
  • kinesis
  • es (Elasticsearch / OpenSearch)
The http plugin — which would let Fluent Bit POST directly to Bronto’s ingestion endpoint — is not on this allowlist, and any ConfigMap referencing it is rejected at admission time. This is a known AWS platform limitation tracked by containers-roadmap#1242, open since 2021. It affects every third-party log destination equally, not just Bronto.

The restriction applies only to the AWS-managed Fluent Bit router. Your own OpenTelemetry Collector — running as an ordinary Kubernetes Deployment in the cluster — has no such restriction and can send to any HTTP endpoint, including Bronto.

There are three viable paths for getting Fargate pod logs into Bronto. Most teams use a combination during and after migration.

Path A — Fluent Bit → Kinesis Firehose → Bronto (Preview)

The Fargate built-in Fluent Bit writes to a Kinesis Firehose delivery stream, which forwards records to Bronto.
EKS Fargate Pods → Built-in Fluent Bit (kinesis_firehose) → Kinesis Firehose → Bronto
This is the most direct path available given Fargate’s constraints — a single intermediate hop, with managed services at both ends. It uses Bronto’s Kinesis Firehose integration, which is currently in preview; contact your Bronto representative for onboarding. Example ConfigMap output block:
[OUTPUT]
    Name              kinesis_firehose
    Match             kube.*
    region            <AWS_REGION>
    delivery_stream   <BRONTO_FIREHOSE_STREAM>
Customers already running a Firehose pipeline (for example, currently writing logs to S3 or OpenSearch) can typically reuse the same Firehose stream by reconfiguring its destination, with no changes to the in-cluster Fluent Bit configuration.
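For reference, the output block above lives inside the aws-logging ConfigMap that configures the Fargate log router. A minimal sketch (the ConfigMap name and namespace are fixed by AWS; the stream name is a placeholder):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-logging             # name required by EKS Fargate
  namespace: aws-observability  # namespace required by EKS Fargate
data:
  output.conf: |
    [OUTPUT]
        Name              kinesis_firehose
        Match             kube.*
        region            <AWS_REGION>
        delivery_stream   <BRONTO_FIREHOSE_STREAM>
```

The router also accepts optional filters.conf and parsers.conf keys in the same ConfigMap.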

Path B — Fluent Bit → CloudWatch Logs → Bronto

The Fargate built-in Fluent Bit writes to a CloudWatch log group, and the Bronto CloudWatch Log Forwarder ingests events from that group.
EKS Fargate Pods → Built-in Fluent Bit (cloudwatch_logs) → CloudWatch Logs → Bronto Forwarder → Bronto
This option is generally available today and requires no preview enrollment. The trade-off is the additional CloudWatch ingestion cost, which can be significant at high log volume. Path A is generally preferred once Firehose preview access is available. Example ConfigMap output block:
[OUTPUT]
    Name              cloudwatch_logs
    Match             kube.*
    region            <AWS_REGION>
    log_group_name    /aws/eks/<CLUSTER_NAME>/fargate
    log_stream_prefix from-fluent-bit-
    auto_create_group true

Path C — Application → OTel Collector → Bronto

Applications emit log records over OTLP using the OpenTelemetry logs SDK, sending them directly to your existing OTel Collector. The collector then forwards to Bronto via OTLP/HTTP — the same exporter pattern used for traces.
Application (OTel logs SDK) → OTel Collector → Bronto
This path is the recommended direction for new services and for existing services where instrumentation is feasible. It bypasses the Fluent Bit allowlist entirely, gives trace-to-log correlation automatically, and uses the same service.name / service.namespace routing as traces. See Self-Managed OTel Collector for collector configuration and the OpenTelemetry SDK guides for per-language instrumentation. Example collector logs pipeline (append otlphttp/bronto-logs to any existing logs pipeline):
exporters:
  otlphttp/bronto-logs:
    logs_endpoint: "https://ingestion.<REGION>.bronto.io/v1/logs"
    headers:
      x-bronto-api-key: "${env:BRONTO_API_KEY}"
      x-bronto-collection: "<COLLECTION_NAME>"

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlphttp/<existing>, otlphttp/bronto-logs]
Path C is not a fit for third-party container images you can’t modify, short-lived init containers and jobs, runtimes where the OTel logs SDK is still maturing, or applications under heavy change-control. Workloads in those categories continue on Path A or Path B indefinitely — both remain fully supported.

Running multiple destinations during migration

The Fargate ConfigMap accepts multiple [OUTPUT] blocks, so logs can be sent to Bronto and an existing destination in parallel during cutover:
output.conf: |
    # Existing destination — kept active during migration
    [OUTPUT]
        Name              kinesis_firehose
        Match             kube.*
        region            <AWS_REGION>
        delivery_stream   <EXISTING_FIREHOSE_STREAM>

    # New destination — Bronto via a separate Firehose stream
    [OUTPUT]
        Name              kinesis_firehose
        Match             kube.*
        region            <AWS_REGION>
        delivery_stream   <BRONTO_FIREHOSE_STREAM>
For Path C, the equivalent is listing multiple exporters in the OTel Collector’s logs pipeline.

Traces and metrics on Fargate

Traces from Fargate workloads go through your in-cluster OTel Collector exactly as on EC2 node groups — add Bronto as an OTLP/HTTP exporter on the traces pipeline. See ADOT or Self-Managed OTel Collector for collector deployment guidance. OpenTelemetry metrics support is currently in customer preview. Once available, metrics can be added to your existing collector pipeline using the same exporter pattern as traces. Contact your Bronto representative for early access.
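Sketched as a collector fragment (the exporter name otlphttp/bronto-traces is a placeholder; the endpoint mirrors the logs exporter shown earlier):

```yaml
exporters:
  otlphttp/bronto-traces:
    traces_endpoint: "https://ingestion.<REGION>.bronto.io/v1/traces"
    headers:
      x-bronto-api-key: "${env:BRONTO_API_KEY}"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlphttp/bronto-traces]
```

If you already run a traces pipeline, append otlphttp/bronto-traces to its existing exporters list rather than replacing them.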

Summary

Signal      Path                                                     Status
Logs (A)    Fluent Bit (built-in) → Kinesis Firehose → Bronto        Preview
Logs (B)    Fluent Bit (built-in) → CloudWatch Logs → Bronto         GA
Logs (C)    Application (OTel logs SDK) → OTel Collector → Bronto    GA where applications can be instrumented
Traces      Application → OTel Collector → Bronto                    GA
Metrics     Application → OTel Collector → Bronto                    Preview
Recommended starting point on EKS Fargate:
  1. Add Bronto as an additional exporter on your existing OTel Collector (Path C) for traces, and for logs from services already using the OTel logs SDK.
  2. Stand up a Fluent Bit log path in parallel — Path A if Firehose preview is available, otherwise Path B — and send to both your existing destination and Bronto until you’ve validated coverage.
  3. Adopt Path C incrementally for new services and high-value existing services. Workloads that can’t be instrumented stay on Path A or Path B.

For assistance, contact support@bronto.io.