Who This Guide Is For

This guide is for engineers and observability teams who rely on Datadog and want to get more out of their existing setup — without adding operational overhead, complexity, or risk.

You might be here because:

  • You’re happy with Datadog’s real-time workflows, but want a more cost-effective way to retain logs for weeks, months, or even years
  • Your team needs long-term access to business-critical log data without dealing with cold storage or rehydration delays
  • You’re forced to drop or exclude valuable logs — not because they’re low priority, but because you’re working within budget constraints that limit visibility and slow down troubleshooting
  • You’re looking to reduce redundant tooling by complementing Datadog with a purpose-built system for deep search and long-term log access

Bronto is designed to work alongside Datadog. With a small config change, you can route a copy of your logs to Bronto and keep everything else exactly the same — alerts, dashboards, workflows, team habits. There’s no re-architecture, no surprises. Just more visibility for less effort.


1. Getting Logs into Bronto

Bronto is designed to complement your existing Datadog setup, allowing you to forward logs to both platforms seamlessly. By configuring the Datadog Agent to send a copy of your logs to Bronto, you can maintain your current workflows while extending your log retention and search capabilities.

Step 1: Generate a Bronto API Key

  1. Log in to your Bronto account.
  2. Navigate to Settings > API Keys.
  3. Click on Create API Key, provide a name for the key, and copy it to a secure location.

Step 2: Configure the Datadog Agent

In your Datadog Agent configuration file, add:

logs_config:
  use_http: true
  additional_endpoints:
  - api_key: "YOUR-BRONTO-API-KEY"
    host: "ingestion.<REGION>.bronto.io"
    port: 443

Replace YOUR-BRONTO-API-KEY with your key and <REGION> with us or eu.

Step 3: Restart the Agent and Confirm Logs Are Arriving

Restart the Datadog Agent so the updated configuration takes effect, then visit your Bronto workspace and go to the Search page to verify that logs are being ingested.
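On a systemd-based Linux host, for example, the restart and a quick sanity check look like this (service names and commands vary by platform):

```shell
# Restart the Datadog Agent so the updated logs_config takes effect
sudo systemctl restart datadog-agent

# Check Agent status; the Logs Agent section should list the additional endpoint
sudo datadog-agent status
```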

Other Integration Options

If you’re not using the Datadog Agent — or you’re planning to send data directly to Bronto — we support a number of other ingestion methods.

A common alternative is Fluent Bit, which offers flexible configuration and wide plugin support.
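As a rough sketch, a Fluent Bit pipeline could forward logs over HTTPS using its standard http output plugin. The endpoint host mirrors the Agent config above, but the header name here is an assumption — check the Agent Setup documentation for the exact values:

```ini
[OUTPUT]
    Name    http
    Match   *
    Host    ingestion.<REGION>.bronto.io
    Port    443
    TLS     On
    Format  json
    # Header name below is an assumption; confirm against the Bronto docs
    Header  x-bronto-api-key YOUR-BRONTO-API-KEY
```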

For a full list of supported agents and integrations, visit our Agent Setup documentation.


2. Bronto’s Data Model: Collections, Datasets, and Metadata

Bronto uses a simple and flexible model to organize your logs:

  • Datasets represent individual log streams, typically aligned to a service or log type (e.g., payments-api, nginx)
  • Collections group related datasets together — such as by environment, team, or function (e.g., production, web-tier)

You define both using standard log metadata:

  • service.name → Dataset
  • service.namespace → Collection
  • environment, version → searchable context

If metadata is missing, logs are assigned to default buckets — but can be reclassified later.
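To make the mapping concrete, a single log event carrying these fields might look like the following (field values are illustrative). Bronto would file this event in the payments-api dataset within the production collection, with environment and version available as searchable context:

```json
{
  "timestamp": "2024-05-01T12:00:00Z",
  "message": "payment authorized",
  "service": { "name": "payments-api", "namespace": "production" },
  "environment": "production",
  "version": "1.4.2"
}
```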

Bronto’s data model maps naturally to how you already tag data in Datadog, making organization and query scoping easy.


3. Search That Just Works

Bronto’s search is built to feel familiar. If you’ve used Datadog, Kibana, or Splunk, you’ll be right at home.

Just type a keyword or key-value pair. Bronto autocompletes based on your existing metadata like service.name, env, and version. There’s no need to preselect a dataset or learn a new syntax. It just works.

One thing that might feel unfamiliar is that you don’t have to choose which logs to index. In Bronto, all logs are indexed and searchable by default.

For advanced use cases, Bronto supports SQL-like queries, but most users won’t need them to get started.
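For instance, assuming the familiar key:value style described above, a few illustrative searches might look like this (exact operators may differ — see the search docs):

```
timeout service.name:payments-api env:production
status:500 service.name:nginx
```

The first combines a free-text keyword with key-value filters; the second narrows to 5xx responses from the nginx dataset.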

See full search syntax and tips in our docs


4. Tagging and Metadata in Bronto

Bronto automatically picks up the tags and metadata you’re already using in Datadog. If your logs include fields like service, env, or version, they’re available in Bronto out of the box and ready to use in search.

You can also add or adjust tags via UI or API as needed.


5. Which Workflows to Mirror — and Which to Leave in Datadog

You don’t need to move everything to Bronto all at once. Most teams start gradually by focusing on the logs that benefit most from longer retention and deep search — and expand as they see value.

Mirror in Bronto:

  • Long-term investigations
  • Security and compliance logs
  • Logs you currently drop for cost reasons
  • Anything useful beyond Datadog’s retention window

Leave in Datadog:

  • Real-time alerts and monitors
  • SLO dashboards
  • Live tail and active triage tools

Bronto is your long-term log platform, optimized for cost-effective storage and high-volume ingest. Datadog remains the real-time nervous system. Together, they give you flexibility across the entire lifecycle of your observability data without duplicating effort or increasing complexity.


6. Correlating Metrics in Datadog with Logs in Bronto

If your teams rely on Datadog dashboards to monitor metrics or traces, Bronto can serve as the next step in your investigation workflow — giving you fast, searchable access to logs even after they’ve aged out of Datadog’s retention window.

The easiest way to connect Datadog to Bronto is with context links. These let you add a custom link to Datadog graphs, monitors, or dashboards that opens a Bronto search page filtered to the relevant service, environment, and time range.

How to set it up:

  1. In Datadog, go to Dashboards > Context Links and create a new link.
  2. Set the display name to something like “View logs in Bronto.”
  3. Use the following URL format (adjusting for your Bronto region — us, eu, etc.):

https://app.<REGION>.bronto.io/search?query=[...]

  4. Map Datadog template variables:

    • $service to your service tag
    • $env to your environment tag
    • $fromTs and $toTs to timestamps provided by the Datadog time window
  5. Save and apply the link to relevant dashboards or monitors.

When someone clicks the link in Datadog, Bronto will open to the correct time range and service context — no manual filtering required.
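Put together, a filled-in context link might look like the following. The query-parameter names here are hypothetical — match them to the URL your Bronto Search page actually produces when you apply the same filters by hand:

```
https://app.us.bronto.io/search?query=service.name:$service+env:$env&from=$fromTs&to=$toTs
```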

Datadog context link docs
Bronto search syntax


7. Retention Strategy: Keep 3 Days in Datadog, 1 Year in Bronto

The most popular Bronto usage pattern is splitting log retention across tools:

  • Datadog: 3–5 days of logs for alerts and real-time workflows
  • Bronto: 90 days, 1 year, or more for deep investigations, audits, and searchability

You still use Datadog as you do today, but now you can:

  • Retain more logs overall
  • Reduce gaps and blind spots
  • Avoid expensive cold storage and rehydration delays

Bronto also makes an excellent destination for high-volume logs that aren’t enriched at the source, such as CDN logs, edge access logs, and other external-facing data.

These logs are typically used not for proactive monitoring, but for retrospective investigations, like:

  • Reconstructing user behavior
  • Debugging edge-case failures
  • Verifying the impact of a regional outage

By routing those logs directly to Bronto, you free up budget and capacity in Datadog for the parts of your stack that truly benefit from its real-time capabilities — such as application metrics, alerts, and dashboards.

Bronto extends your visibility, simplifies your retention strategy, and helps you get more out of the tools you already love.

If you’re not sure where to start, begin by routing one high-volume log type — such as CDN access logs — into Bronto and see what it unlocks.


Appendix: Quick Reference Mappings

  Concept               | In Datadog                  | In Bronto
  ----------------------|-----------------------------|-------------------------
  Log grouping          | Index                       | Collection + Dataset
  Log source            | service                     | service.name
  Environment           | env                         | env
  Version               | version                     | version
  Search/filter syntax  | key:value                   | key:value
  Autocomplete          | Tag-based                   | Attribute and tag-based
  Retention extension   | Cold storage or rehydration | Native, hot, searchable