Documentation Index
Fetch the complete documentation index at: https://docs.bronto.io/llms.txt
Use this file to discover all available pages before exploring further.
Overview
Bronto exposes two distinct endpoint types. Using the wrong one is the most common cause of 400 errors — confirm which applies to your agent before configuring.
We recommend routing logs and traces through a log forwarder (Fluent Bit, Logstash, Vector) or the OpenTelemetry Collector rather than sending HTTP requests directly. Forwarders and the collector handle batching, compression, and retries on failure — improving reliability and reducing cost — and can parse or transform common log formats before they reach Bronto. Direct HTTP is supported for cases where running an agent isn’t practical.
| Endpoint | Accepts | Typical agents |
|---|---|---|
| ingestion.eu.bronto.io / ingestion.us.bronto.io | Any format — JSON, syslog, logfmt, CEF, raw text | Fluent Bit, Logstash, FireLens, EventBridge, custom HTTP |
| ingestion.eu.bronto.io/v1/logs / ingestion.us.bronto.io/v1/logs | OTLP protobuf only | OpenTelemetry Collector |
| ingestion.eu.bronto.io/v1/traces / ingestion.us.bronto.io/v1/traces | OTLP protobuf only | OpenTelemetry Collector |
All endpoints authenticate with the same header:
x-bronto-api-key: <YOUR_API_KEY>
See API Keys for how to generate a key.
Base Endpoint
The base endpoint (no path) accepts any payload format and performs no schema validation. It is the right choice for any agent that sends JSON, plain text, or structured log lines over HTTP.
For the following formats, Bronto detects and extracts fields automatically — no configuration required.
| Format | Fields extracted |
|---|---|
| Structured JSON (Logrus, Winston, etc.) | All top-level key-value fields |
| Nested JSON | Nested objects flattened to dot notation — e.g. context.user, context.request_id |
| RFC 5424 Syslog | pri, facility, severity, hostname, appname, procid, msgid, timestamp, version |
| Logfmt | All key=value pairs — e.g. level, msg, user, duration. Also parsed when logfmt appears as the value of a JSON message field |
| GELF | version, host, short_message, level, and underscore-prefixed custom fields |
| Log4j2 JSON | level, loggerName, message, thread, instant.epochSecond, instant.nanoOfSecond, nested thrown fields |
| CEF | CEF version and extension key-value pairs — e.g. src, dst, suser |
Other formats — RFC 3164 BSD syslog, Apache Common Log, Docker JSON, raw text — are accepted and stored as-is. Most log forwarders and the OpenTelemetry Collector can parse or transform these into structured fields before sending. If you are sending data directly, use the custom parser to define extraction rules.
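As an illustration of the logfmt case above, the event below carries a logfmt line in the JSON message field; the field names and values are invented for the example, and <YOUR_API_KEY> is a placeholder:

```shell
# Hypothetical event: a logfmt line inside the JSON "message" field.
# Bronto would extract level, msg, user, and duration as separate fields.
payload='{"timestamp":"2024-01-15T10:30:00Z","message":"level=info msg=login user=alice duration=42ms"}'

# "|| echo 000" keeps the script going if there is no network (000 = transport error)
status=$(curl -s -o /dev/null -w '%{http_code}' -X POST https://ingestion.eu.bronto.io \
  -H "x-bronto-api-key: <YOUR_API_KEY>" \
  -H "Content-Type: application/json" \
  --data-binary "$payload" || echo "000")
echo "HTTP $status"
```

With a valid key, a 200 response indicates the event was ingested and the key=value pairs are queryable as fields.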
OTLP Endpoints
The /v1/logs and /v1/traces endpoints accept OTLP protobuf only over HTTPS on port 443. For trace ingestion, the expected path is through the OpenTelemetry Collector — it handles OTLP protobuf serialization, batching, and retry automatically. Sending traces without a collector is unsupported.
Trace data must go to /v1/traces. Sending traces to the base endpoint stores them as log events — trace correlation and the traces UI will not work.
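A minimal Collector configuration along these lines would route both signals to Bronto. It is a sketch, not an official Bronto config: the otlphttp exporter appends /v1/logs and /v1/traces to the endpoint per signal, so only the base URL is set, and the BRONTO_API_KEY environment-variable name is our assumption:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  otlphttp:
    # Base URL only; the exporter appends /v1/logs and /v1/traces per signal
    endpoint: https://ingestion.eu.bronto.io
    headers:
      x-bronto-api-key: ${env:BRONTO_API_KEY}

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [otlphttp]
    traces:
      receivers: [otlp]
      exporters: [otlphttp]
```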
Direct HTTP Custom Ingestion
If running a log forwarder or collector isn’t practical, you can POST log events directly to the base endpoint from any HTTP client or script.
All requests must be POST with a JSON Lines (NDJSON) body — one JSON object per line.
Required headers
| Header | Value |
|---|---|
| x-bronto-api-key | Your Bronto API key |
| Content-Type | application/json |
Recommended headers
These headers control how your data is organized in Bronto. See Data Organization for how datasets, collections, and tags work.
| Header | Description |
|---|---|
| x-bronto-dataset | Dataset to ingest into |
| x-bronto-collection | Collection name |
| x-bronto-tags | Comma-separated tags to attach to events |
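A direct POST using these headers might look like the following sketch; the dataset, collection, and tag values are placeholders, not reserved names:

```shell
# Placeholder organization values; substitute your own
DATASET="checkout-service"
COLLECTION="production"
TAGS="region:eu,team:payments"

# "|| echo 000" keeps the script going if there is no network (000 = transport error)
status=$(curl -s -o /dev/null -w '%{http_code}' -X POST https://ingestion.eu.bronto.io \
  -H "x-bronto-api-key: <YOUR_API_KEY>" \
  -H "Content-Type: application/json" \
  -H "x-bronto-dataset: $DATASET" \
  -H "x-bronto-collection: $COLLECTION" \
  -H "x-bronto-tags: $TAGS" \
  --data-binary '{"message":"payment authorized"}' || echo "000")
echo "HTTP $status"
```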
Other headers
| Header | Description |
|---|---|
| Content-Encoding | Compression format: gzip, zstd, or deflate |
Event fields
| Field | Required | Description |
|---|---|---|
| message | Yes | The log content string |
| timestamp | Recommended | ISO 8601 timestamp — e.g. 2024-01-15T10:30:00Z |
Examples

Send two events in one NDJSON request:

curl -X POST https://ingestion.eu.bronto.io \
-H "x-bronto-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
--data-binary '{"timestamp":"2024-01-15T10:30:00Z","message":"user login successful","user":"alice"}
{"timestamp":"2024-01-15T10:30:01Z","message":"request completed","status":200}'

Send a gzip-compressed event:

echo '{"timestamp":"2024-01-15T10:30:00Z","message":"user login successful"}' | \
gzip | \
curl -X POST https://ingestion.eu.bronto.io \
-H "x-bronto-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-H "Content-Encoding: gzip" \
--data-binary @-

Send a zstd-compressed event:

echo '{"timestamp":"2024-01-15T10:30:00Z","message":"user login successful"}' | \
zstd | \
curl -X POST https://ingestion.eu.bronto.io \
-H "x-bronto-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-H "Content-Encoding: zstd" \
--data-binary @-
Zstandard offers the best compression ratio and is recommended for high-volume pipelines.
Batch processing
For large files, split into chunks and send each one:
#!/bin/bash
set -euo pipefail

FILE="logs.ndjson"
BATCH_SIZE=5000
API_KEY="<YOUR_API_KEY>"
ENDPOINT="https://ingestion.eu.bronto.io"

# Split into NDJSON chunks of BATCH_SIZE lines each
split -l "$BATCH_SIZE" "$FILE" /tmp/batch_

for batch in /tmp/batch_*; do
  # --fail makes curl exit non-zero on HTTP errors, so set -e stops the loop
  gzip -c "$batch" | curl --fail -s -X POST "$ENDPOINT" \
    -H "x-bronto-api-key: $API_KEY" \
    -H "Content-Type: application/json" \
    -H "Content-Encoding: gzip" \
    --data-binary @-
  echo "Sent $batch"
done

rm /tmp/batch_*
Compression ratios
Compressing payloads before sending reduces transfer size significantly. Typical ratios for CDN logs:
| Algorithm | Content-Encoding value | Typical compressed size (% of original) |
|---|---|---|
| gzip | gzip | ~7.8% |
| Zstandard | zstd | ~6.0% |
| Deflate | deflate | ~7.8% |
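You can estimate the ratio for your own data before choosing an algorithm. The snippet below generates a synthetic sample, so its ratio will look much better than the real-world figures above (identical repeated lines compress extremely well):

```shell
# Build a small synthetic NDJSON sample (1000 identical events)
for i in $(seq 1 1000); do
  echo '{"timestamp":"2024-01-15T10:30:00Z","message":"request completed","status":200}'
done > /tmp/sample.ndjson

orig=$(wc -c < /tmp/sample.ndjson)
gz=$(gzip -c /tmp/sample.ndjson | wc -c)
# Ratio reported as compressed size in percent of original (lower is better)
echo "gzip: ${orig} -> ${gz} bytes ($((100 * gz / orig))% of original)"
```

Run the same pipeline with `zstd` or your production log file in place of the sample to compare candidates.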
Payload limits
| Payload type | Maximum size |
|---|---|
| Uncompressed | 10 MB |
| Compressed (wire size) | 10 MB |
Exceeding these limits returns an HTTP 413.
Response codes
| Code | Meaning |
|---|---|
| 200 | Events ingested |
| 400 | Malformed request or invalid JSON |
| 401 | Invalid or missing API key |
| 413 | Payload exceeds size limit |
| 429 | Rate limit or quota exhausted |
| 500 | Server error |
A 200 response means your data has been durably stored and will not be lost. If the API returns an error, no data from that request is ingested.
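Because an error response means nothing from the request was ingested, clients can safely retry the whole payload. A retry policy that separates transient from permanent codes can be sketched as below; `send_with_retry` is our name for this sketch, not part of the Bronto API:

```shell
# Sketch only: retry 429/5xx with exponential backoff; treat other
# non-200 codes (400/401/413) as permanent and stop immediately.
API_KEY="<YOUR_API_KEY>"

send_with_retry() {
  local payload="$1" attempt status
  for attempt in 1 2 3 4 5; do
    # "|| echo 000" maps transport failures to a retryable pseudo-code
    status=$(curl -s -o /dev/null -w '%{http_code}' -X POST https://ingestion.eu.bronto.io \
      -H "x-bronto-api-key: $API_KEY" \
      -H "Content-Type: application/json" \
      --data-binary "$payload" || echo "000")
    case "$status" in
      200) return 0 ;;                          # ingested
      000|429|5??) sleep $((2 ** attempt)) ;;   # transient: back off and retry
      *) echo "permanent error: HTTP $status" >&2; return 1 ;;
    esac
  done
  echo "giving up after 5 attempts" >&2
  return 1
}
```

Called as `send_with_retry '{"message":"hello"}'`, this waits 2, 4, 8, … seconds between attempts before giving up.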