For detailed installation instructions, check out the Fluentd documentation site.

Configure Fluentd

After installation, Fluentd requires an output configuration to communicate with the Bronto system:

<source>
  @type tail
  path /path/to/your/logs
  tag <OPTIONAL_LOG_TAG>
  refresh_interval 5s
  <parse>
    @type json
  </parse>
  pos_file /var/log/td-agent/buffer/fluentd.pos
</source>

<filter **>
  @type record_transformer
  <record>
    hostname "#{Socket.gethostname}"
  </record>
</filter>

<match **>
  @type http
  endpoint https://ingestion.<REGION>.bronto.io:443
  http_method post

  <buffer>
    @type file
    path /var/log/td-agent/buffer/http
    flush_interval 10s
    chunk_limit_size 5MB
    overflow_action block
  </buffer>

  <format>
    @type json
  </format>

  headers {"x-bronto-api-key": "<YOUR_API_KEY>","x-bronto-service-name": "<YOUR_SERVICE_NAME>","x-bronto-service-namespace": "<YOUR_SERVICE_NAMESPACE>"}
</match>
| Parameter | Value | Required | Description |
| --- | --- | --- | --- |
| source.@type | tail | Yes | Type of source plugin; here, it follows file contents and collects logs. |
| source.path | /path/to/your/logs | Yes | Path to the log files to be tailed. |
| source.tag | <OPTIONAL_LOG_TAG> | No | Optional tag for the log entries. |
| source.refresh_interval | 5s | No | Interval for refreshing the file list to check for new logs. |
| source.parse | { "@type": "json" } | Yes | Specifies the parser type; here, JSON format. |
| source.pos_file | /var/log/td-agent/buffer/fluentd.pos | Yes | Position file used to remember where the last read ended. |
| filter.@type | record_transformer | Yes | Type of filter plugin used to transform the log records. |
| filter.record | { "hostname": "#{Socket.gethostname}" } | Yes | Transformation record that adds the hostname to each log entry. |
| match.@type | http | Yes | Type of output plugin; here, logs are sent via HTTP. |
| match.endpoint | https://ingestion.<REGION>.bronto.io:443 | Yes | HTTP endpoint to send the log data to. |
| match.http_method | post | Yes | HTTP method to use when sending data. |
| match.buffer.@type | file | Yes | Buffer type used to stage data before sending; here, a file buffer. |
| match.buffer.path | /var/log/td-agent/buffer/http | Yes | Path to the buffer files on disk. |
| match.buffer.flush_interval | 10s | No | Interval for flushing data from the buffer. |
| match.buffer.chunk_limit_size | 5MB | No | Maximum size of a data chunk before it is flushed. |
| match.buffer.overflow_action | block | No | Action to take when the buffer queue is full. |
| match.format | { "@type": "json" } | Yes | Specifies the format of data when being sent; here, JSON format. |
| match.headers | {"x-bronto-api-key": "<YOUR_API_KEY>", "x-bronto-service-name": "<YOUR_SERVICE_NAME>", "x-bronto-service-namespace": "<YOUR_SERVICE_NAMESPACE>"} | Yes | Headers to be sent along with HTTP requests. |

Verify Log Collection

Once you have applied your configuration and restarted Fluentd, you can expect to see your log data ingested into Bronto and accessible via the Search page.
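If you need a quick way to generate a test entry, the hypothetical snippet below appends one JSON line to the path tailed by the source section above; Fluentd should pick it up on the next refresh and forward it to Bronto, after which it should appear in Search.

# Hypothetical test-log writer: appends one JSON record to the file that the
# <source> section tails. Replace the path with the actual file covered by
# source.path in your configuration.
import json
import time

LOG_PATH = "/path/to/your/logs"  # same path as source.path in the config

record = {
    "message": "fluentd-to-bronto verification event",
    "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
}

with open(LOG_PATH, "a", encoding="utf-8") as log_file:
    # The tail source with a JSON parser expects one JSON object per line.
    log_file.write(json.dumps(record) + "\n")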