
Documentation Index

Fetch the complete documentation index at: https://docs.bronto.io/llms.txt

Use this file to discover all available pages before exploring further.

Overview

Bronto exposes two distinct endpoint types. Using the wrong one is the most common cause of 400 errors — confirm which applies to your agent before configuring.
We recommend routing logs and traces through a log forwarder (Fluent Bit, Logstash, Vector) or the OpenTelemetry Collector rather than sending HTTP requests directly. Forwarders and the collector handle batching, compression, and retries on failure — improving reliability and reducing cost — and can parse or transform common log formats before they reach Bronto. Direct HTTP is supported for cases where running an agent isn’t practical.
| Endpoint | Accepts | Typical agents |
| --- | --- | --- |
| `ingestion.eu.bronto.io` / `ingestion.us.bronto.io` | Any format: JSON, syslog, logfmt, CEF, raw text | Fluent Bit, Logstash, FireLens, EventBridge, custom HTTP |
| `ingestion.eu.bronto.io/v1/logs` / `ingestion.us.bronto.io/v1/logs` | OTLP protobuf only | OpenTelemetry Collector |
| `ingestion.eu.bronto.io/v1/traces` / `ingestion.us.bronto.io/v1/traces` | OTLP protobuf only | OpenTelemetry Collector |
All endpoints authenticate with the same header:

```
x-bronto-api-key: <YOUR_API_KEY>
```
See API Keys for how to generate a key.
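As a concrete example of the forwarder route recommended above, a minimal Fluent Bit output for the base endpoint could look like the sketch below. It uses Fluent Bit's standard `http` output plugin; the `Match` pattern and EU region are illustrative, so adjust them to your setup.

```ini
[OUTPUT]
    Name     http
    Match    *
    Host     ingestion.eu.bronto.io
    Port     443
    URI      /
    tls      On
    Format   json_lines
    Header   x-bronto-api-key <YOUR_API_KEY>
    compress gzip
```

`Format json_lines` emits one JSON object per line, and `compress gzip` keeps transfer sizes down, matching the direct-HTTP guidance later on this page.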

Base Endpoint

The base endpoint (no path) accepts any payload format and performs no schema validation. It is the right choice for any agent that sends JSON, plain text, or structured log lines over HTTP.

Automatically parsed formats

For the following formats, Bronto detects and extracts fields automatically — no configuration required.
| Format | Fields extracted |
| --- | --- |
| Structured JSON (Logrus, Winston, etc.) | All top-level key-value fields |
| Nested JSON | Nested objects flattened to dot notation, e.g. `context.user`, `context.request_id` |
| RFC 5424 syslog | `pri`, `facility`, `severity`, `hostname`, `appname`, `procid`, `msgid`, `timestamp`, `version` |
| Logfmt | All `key=value` pairs, e.g. `level`, `msg`, `user`, `duration`; also parsed when logfmt appears as the value of a JSON `message` field |
| GELF | `version`, `host`, `short_message`, `level`, and underscore-prefixed custom fields |
| Log4j2 JSON | `level`, `loggerName`, `message`, `thread`, `instant.epochSecond`, `instant.nanoOfSecond`, nested `thrown` fields |
| CEF | CEF version and extension key-value pairs, e.g. `src`, `dst`, `suser` |
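For instance, logfmt carried inside a JSON `message` field is still extracted. The sketch below sends one such event to the base endpoint; Bronto should pull `level`, `msg`, and `duration` out of the embedded logfmt string. The API key is a placeholder and the field values are illustrative.

```shell
# One NDJSON event whose "message" field contains a logfmt string.
PAYLOAD='{"timestamp":"2024-01-15T10:30:00Z","message":"level=info msg=\"request completed\" duration=12ms"}'

# The key below is a placeholder, so don't fail the script on a rejected request.
curl -sS -X POST https://ingestion.eu.bronto.io \
  -H "x-bronto-api-key: <YOUR_API_KEY>" \
  -H "Content-Type: application/json" \
  --data-binary "$PAYLOAD" || true
```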

Formats stored verbatim

Other formats — RFC 3164 BSD syslog, Apache Common Log, Docker JSON, raw text — are accepted and stored as-is. Most log forwarders and the OpenTelemetry Collector can parse or transform these into structured fields before sending. If you are sending data directly, use the custom parser to define extraction rules.

OTLP Endpoints

The /v1/logs and /v1/traces endpoints accept only OTLP protobuf over HTTPS on port 443. For trace ingestion, the expected path is through the OpenTelemetry Collector, which handles OTLP protobuf serialization, batching, and retries automatically. Sending traces without a collector is unsupported.
Trace data must go to /v1/traces. Sending traces to the base endpoint stores them as log events — trace correlation and the traces UI will not work.
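For example, a minimal Collector configuration for shipping traces to Bronto might look like the sketch below. It assumes the Collector's standard `otlphttp` exporter, which appends `/v1/traces` and `/v1/logs` to the configured endpoint automatically.

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  otlphttp:
    endpoint: https://ingestion.eu.bronto.io
    headers:
      x-bronto-api-key: <YOUR_API_KEY>

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp]
```

Because the exporter adds the signal path itself, `endpoint` is the bare ingestion host, not `/v1/traces`.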

Direct HTTP Custom Ingestion

If running a log forwarder or collector isn’t practical, you can POST log events directly to the base endpoint from any HTTP client or script.

Request format

All requests must be POST with a JSON Lines (NDJSON) body: one JSON object per line.

Required headers

| Header | Value |
| --- | --- |
| `x-bronto-api-key` | Your Bronto API key |
| `Content-Type` | `application/json` |

Recommended headers

These headers control how your data is organized in Bronto. See Data Organization for how datasets, collections, and tags work.

| Header | Description |
| --- | --- |
| `x-bronto-dataset` | Dataset to ingest into |
| `x-bronto-collection` | Collection name |
| `x-bronto-tags` | Comma-separated tags to attach to events |

Other headers

| Header | Description |
| --- | --- |
| `Content-Encoding` | Compression format: `gzip`, `zstd`, or `deflate` |

Event fields

| Field | Required | Description |
| --- | --- | --- |
| `message` | Yes | The log content string |
| `timestamp` | Recommended | ISO 8601 timestamp, e.g. `2024-01-15T10:30:00Z` |

Examples

Uncompressed

```shell
curl -X POST https://ingestion.eu.bronto.io \
  -H "x-bronto-api-key: <YOUR_API_KEY>" \
  -H "Content-Type: application/json" \
  --data-binary '{"timestamp":"2024-01-15T10:30:00Z","message":"user login successful","user":"alice"}
{"timestamp":"2024-01-15T10:30:01Z","message":"request completed","status":200}'
```
gzip

```shell
echo '{"timestamp":"2024-01-15T10:30:00Z","message":"user login successful"}' | \
  gzip | \
  curl -X POST https://ingestion.eu.bronto.io \
    -H "x-bronto-api-key: <YOUR_API_KEY>" \
    -H "Content-Type: application/json" \
    -H "Content-Encoding: gzip" \
    --data-binary @-
```
Zstandard

```shell
echo '{"timestamp":"2024-01-15T10:30:00Z","message":"user login successful"}' | \
  zstd | \
  curl -X POST https://ingestion.eu.bronto.io \
    -H "x-bronto-api-key: <YOUR_API_KEY>" \
    -H "Content-Type: application/json" \
    -H "Content-Encoding: zstd" \
    --data-binary @-
```
Zstandard offers the best compression ratio and is recommended for high-volume pipelines.

Batch processing

For large files, split into chunks and send each one:
Batch ingest from file
```shell
#!/bin/bash
set -euo pipefail

FILE="logs.ndjson"
BATCH_SIZE=5000
API_KEY="<YOUR_API_KEY>"
ENDPOINT="https://ingestion.eu.bronto.io"

# Split into a private temp directory so concurrent runs don't collide,
# and clean up even when a request fails midway.
WORKDIR=$(mktemp -d)
trap 'rm -rf "$WORKDIR"' EXIT

split -l "$BATCH_SIZE" "$FILE" "$WORKDIR/batch_"

for batch in "$WORKDIR"/batch_*; do
  # -f makes curl exit non-zero on HTTP errors, so a failed batch stops the run.
  gzip -c "$batch" | curl -sf -X POST "$ENDPOINT" \
    -H "x-bronto-api-key: $API_KEY" \
    -H "Content-Type: application/json" \
    -H "Content-Encoding: gzip" \
    --data-binary @-
  echo "Sent $batch"
done
```

Compression ratios

Compressing payloads before sending reduces transfer size significantly. Typical ratios for CDN logs:
| Algorithm | `Content-Encoding` value | Typical ratio (compressed / original) |
| --- | --- | --- |
| gzip | `gzip` | ~7.8% |
| Zstandard | `zstd` | ~6.0% |
| Deflate | `deflate` | ~7.8% |
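To see the effect locally, you can compare raw and gzip sizes for a synthetic payload. This is an illustrative sketch: the generated events are far more repetitive than real logs, so the ratio here will look better than the figures above.

```shell
# Generate 1,000 similar NDJSON events and measure raw vs gzip size.
seq 1 1000 | sed 's/.*/{"message":"request completed","status":200,"id":&}/' > events.ndjson

RAW=$(wc -c < events.ndjson)
GZ=$(gzip -c events.ndjson | wc -c)
echo "raw=${RAW} bytes gzip=${GZ} bytes"
```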

Payload limits

| Payload type | Maximum size |
| --- | --- |
| Uncompressed | 10 MB |
| Compressed (wire size) | 10 MB |

Exceeding either limit returns an HTTP 413.
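When sending directly, it can help to check the compressed size against the limit before POSTing. A sketch follows; the file and variable names are our own, and the sample data only exists to make the snippet self-contained.

```shell
MAX_BYTES=$((10 * 1024 * 1024))  # 10 MB wire limit
FILE="logs.ndjson"

# Sample data so the snippet runs as-is; use your real file here.
seq 1 100 | sed 's/.*/{"message":"sample event &"}/' > "$FILE"

gzip -c "$FILE" > payload.gz
SIZE=$(wc -c < payload.gz)

if [ "$SIZE" -gt "$MAX_BYTES" ]; then
  echo "payload too large: ${SIZE} bytes compressed; split before sending" >&2
  exit 1
fi
echo "payload is ${SIZE} bytes compressed, within the 10 MB limit"
```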

Response codes

| Code | Meaning |
| --- | --- |
| 200 | Events ingested |
| 400 | Malformed request or invalid JSON |
| 401 | Invalid or missing API key |
| 413 | Payload exceeds size limit |
| 429 | Rate limit or quota exhausted |
| 500 | Server error |

A 200 response means your data has been durably stored and will not be lost. If the API returns an error, no data from that request is ingested.
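Because direct HTTP clients lack the automatic retries a forwarder provides, a retry wrapper is worth adding. The sketch below is our own (the function name and backoff values are not part of Bronto's tooling): it retries on 429, 5xx, and connection failures with exponential backoff, and gives up immediately on permanent errors such as 400 or 401.

```shell
# send_with_retry POSTs one NDJSON payload and retries transient failures.
# ENDPOINT and API_KEY are assumed to be set by the caller.
send_with_retry() {
  local payload="$1" attempt=0 max_attempts=5 delay=1 code
  while [ "$attempt" -lt "$max_attempts" ]; do
    code=$(curl -s -o /dev/null -w "%{http_code}" -X POST "$ENDPOINT" \
      -H "x-bronto-api-key: $API_KEY" \
      -H "Content-Type: application/json" \
      --data-binary "$payload")
    case "$code" in
      200) return 0 ;;                                      # ingested
      000|429|5??) sleep "$delay"; delay=$((delay * 2)) ;;  # 000 = no connection; back off and retry
      *) echo "permanent error: HTTP $code" >&2; return 1 ;;
    esac
    attempt=$((attempt + 1))
  done
  echo "giving up after $max_attempts attempts" >&2
  return 1
}
```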