Use Tracing to find slow or failing operations, compare services, and drill from an operation into the traces and logs behind it.

Open Tracing

In Bronto, select Tracing from the left navigation.

[Image: Bronto Tracing overview with filters, latency charts, error charts, request volume, and the operation table]

The Tracing page shows:
  • filters for services, operations, SQL conditions, duration, and time range
  • summary charts for latency, errors, and request volume
  • a table of operations with trace count, error rate, latency, and recent activity
If the page is empty, first confirm that your services are sending traces. See Send Traces to Bronto.

Filter traces

Use the filters at the top of the page to narrow the operations shown in the table.
  • Services: include one or more emitting services, such as frontend-web, cart, or recommendation
  • Operations: include one or more operation names, such as GET, POST, or an RPC method
  • SQL filter: filter by trace and span fields with expressions such as status=200 AND method='GET'
  • Duration: focus on traces in a latency band, such as < 10ms, 100ms - 1s, or > 10s
  • Time range: limit the view to a recent window, such as the last 10 minutes
The Services and Operations filters support multiple selections, so you can compare related services or focus on a single endpoint.
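The SQL filter combines field conditions with boolean operators, as in the status=200 AND method='GET' example above. As a rough sketch only (the duration_ms and service field names here are assumptions for illustration; check which fields are indexed in your account), a filter that isolates slow, successful GET requests from one service might look like:

```sql
-- Hypothetical field names; substitute the indexed fields available in your account.
status = 200
  AND method = 'GET'
  AND service = 'cart'
  AND duration_ms > 500
```

Combining a SQL filter with the Duration and Time range filters is often faster than expressing everything in one SQL condition.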

Read the overview charts

The charts above the table summarize the currently selected trace set.
  • Latency Over Time: p50, p95, and p99 latency for matching traces
  • Errors Over Time: error percentage over the selected time range
  • Requests: request volume over time
Use these charts to spot spikes before drilling into the operation table. For example, a latency spike with steady request volume usually points to a slow dependency, while an error spike may point to a failing route or backend service.

Use status filters

The Tracing page includes quick controls for common status views:
  • Error narrows the view to failing traces.
  • OK narrows the view to successful traces.
Use Error when investigating failures, then clear it to compare failing traces with normal traffic.

Search operations

Use Filter operations above the table to quickly find an operation by name. This is useful when a service emits many operations and you already know the route, RPC method, or job name you want to inspect.

Read the operation table

The operation table groups trace activity by operation.
  • Operation: the route, RPC method, job name, or manually named span operation
  • Services: services that emitted traces for the operation
  • Traces: the number of matching traces and the recent trace volume trend
  • Errors: error percentage and failing trace count
  • Latency: latency summary for matching traces, including percentile values
Sort the table by trace count, errors, or latency to find the busiest, most broken, or slowest operations first.

Drill into traces

Select an operation row to open the traces behind it. From a trace detail view, use the available trace and span context to inspect timing, errors, attributes, and related logs. This is where you move from “which operation is slow or failing?” to “which span or dependency caused it?”

Common investigation flows

Find a slow endpoint
  1. Set the time range to the period where users reported slowness.
  2. Use the Services filter to choose the affected service.
  3. Sort by latency.
  4. Open the slow operation and inspect the longest traces.
Find a failing operation
  1. Select the Error filter.
  2. Sort the table by error rate.
  3. Open an operation with a high error percentage.
  4. Inspect failing spans and related logs.
Compare related services
  1. Select multiple services from the Services filter.
  2. Use the operation table to compare trace volume, error rate, and latency.
  3. Use SQL filters to narrow the comparison to a method, status, customer, or other indexed field.
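As an illustration of the last step (hedged: method, status, and customer_id are hypothetical field names, not confirmed Bronto fields), a comparison narrowed to one method, status, and customer might use a filter like:

```sql
-- Hypothetical field names for illustration only.
method = 'GET' AND status = 200 AND customer_id = 'acme'
```

With this filter applied and several services selected, the operation table shows each service's volume, error rate, and latency for the same slice of traffic.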