[ GUIDE ]

Laravel API latency

What latency targets make sense for JSON APIs, per-endpoint tracking, and trace-based bottleneck hunting.

QUICK ANSWER

What's a good Laravel API latency target and how do I measure it?

For UI-blocking API endpoints, target p95 under 100ms (excellent) or 200ms (acceptable). Read endpoints: under 200ms; write endpoints: under 500ms. Measure by grouping requests by route pattern (/api/orders/{id}) and computing percentiles per endpoint. Use a per-request trace view to find which span (DB query, external HTTP, cache call) ate the budget on slow requests. The NightOwl agent captures all of this and surfaces it per endpoint.

Updated · 2026-04-13

Latency targets by endpoint class

Endpoint type                    Good p95    Acceptable    Problem
Auth (login, token refresh)      < 100ms     100-200ms     > 200ms
UI-blocking read                 < 100ms     100-200ms     > 300ms
Search / filter                  < 200ms     200-400ms     > 500ms
Write (create / update)          < 300ms     300-500ms     > 800ms
Background receipt (webhook)     < 500ms     500ms-1s      > 2s

Why API latency is tighter than page latency

Three reasons:

  1. No perceptual masking. A browser showing a loading spinner at 500ms feels fine. A mobile app waiting on your API at 500ms feels janky.
  2. Chained calls. A single screen in a client app often fires 3-5 API calls. Each at 200ms compounds to 600-1000ms perceived latency.
  3. Consumers can't batch for you. A web server can render server-side and send one response; a consumer SDK makes calls one at a time unless you offer batching endpoints.

Middleware for API-specific timing context

app/Http/Middleware/ApiTiming.php

php
namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;
use Symfony\Component\HttpFoundation\Response;

class ApiTiming
{
    public function handle(Request $request, Closure $next): Response
    {
        // hrtime(true) is monotonic (unlike microtime()) and returns nanoseconds
        $start = hrtime(true);

        $response = $next($request);

        $durationMs = (hrtime(true) - $start) / 1e6;

        // Server-Timing durations are in milliseconds; one decimal is plenty
        $response->headers->set('Server-Timing', sprintf('total;dur=%.1f', $durationMs));

        return $response;
    }
}
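To take effect, the middleware has to be registered on the API route group. A sketch assuming Laravel 11's bootstrap/app.php; on Laravel 10 and earlier you would append the class to the api group in app/Http/Kernel.php instead.

```php
<?php
// bootstrap/app.php (Laravel 11+)

use App\Http\Middleware\ApiTiming;
use Illuminate\Foundation\Application;
use Illuminate\Foundation\Configuration\Middleware;

return Application::configure(basePath: dirname(__DIR__))
    ->withRouting(api: __DIR__.'/../routes/api.php')
    ->withMiddleware(function (Middleware $middleware) {
        // Append ApiTiming to every route in the api group
        $middleware->api(append: [ApiTiming::class]);
    })
    ->create();
```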

The Server-Timing header is readable by consumer dev tools (Chrome DevTools renders it in the Network panel). Good for consumer-side debugging without exposing full trace data.

Per-endpoint trace drilldown

For a slow endpoint:

  1. Open the requests dashboard, filter to the route pattern
  2. Sort by duration descending — find representative slow requests
  3. Open a slow request's trace view
  4. Identify the dominant span — usually a DB query or outgoing HTTP call
  5. Fix that one thing (eager load, add index, move to async) and watch p95 drop
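Step 4 is mechanical once you have the spans: the dominant one is simply the longest. A minimal sketch; the ['name' => ..., 'ms' => ...] span shape here is an illustrative assumption, not any particular APM's wire format.

```php
<?php
// Find the span that ate the latency budget in a trace.
// Span shape is illustrative only; real traces carry richer structures.

function dominantSpan(array $spans): array
{
    usort($spans, fn (array $a, array $b) => $b['ms'] <=> $a['ms']);

    return $spans[0];
}

$trace = [
    ['name' => 'SELECT * FROM orders WHERE id = ?', 'ms' => 12.4],
    ['name' => 'GET https://payments.example.test/charge', 'ms' => 1480.0],
    ['name' => 'cache get orders:123', 'ms' => 0.8],
];

$top = dominantSpan($trace);
printf("dominant span: %s (%.0f ms)\n", $top['name'], $top['ms']);
// → dominant span: GET https://payments.example.test/charge (1480 ms)
```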

See our related guides: slow query monitoring, N+1 detection, outgoing HTTP tracking.

Budget for network

A consumer in Europe calling your US-East API has 80-120ms of fixed network RTT. That's budget you can't compress with code. If your end-to-end API SLO is 300ms and network is 100ms, your server-side budget is 200ms. Plan accordingly.
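The arithmetic above reduces to a one-liner; a trivial sketch, with the clamp guarding against an RTT that already exceeds the SLO.

```php
<?php
// Server-side latency budget = end-to-end SLO minus fixed network RTT.

function serverBudgetMs(float $sloMs, float $networkRttMs): float
{
    return max(0.0, $sloMs - $networkRttMs);
}

printf("server-side budget: %.0f ms\n", serverBudgetMs(300, 100));
// → server-side budget: 200 ms
```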

THE EASY WAY

Per-endpoint p95 with trace drilldown

NightOwl groups API requests by route pattern with p95 / p99 per endpoint. Click any endpoint to see its slowest requests; click a request to see its spans. Data in your PostgreSQL, from $5/month flat.

bash
composer require nightowl/agent
php artisan nightowl:install

Frequently asked questions

What's a good latency target for a Laravel JSON API?

Tighter than page renders because consumer SDKs expect snappy responses. UI-blocking API endpoints: p95 under 100ms is excellent, 200ms acceptable. Read-heavy endpoints: under 200ms p95. Write endpoints: under 500ms. Above 500ms for any read endpoint feels slow to consumer-side developers, especially in mobile and SPA clients with their own UX latency budgets.

Why is API latency different from web page latency?

Consumers are machines, not browsers. A machine doesn't render anything while waiting; it just blocks. API calls are often chained (one SDK call triggers three more), so latency compounds. And API latency passes straight through to consumer UX — a slow /api/orders lookup is a slow order page on their frontend.

How do I measure API latency per endpoint?

Your APM should group requests by route pattern (/api/orders/{id}) rather than raw URL. Track p50, p95, and p99 per endpoint over time. With the NightOwl agent, routes are grouped automatically and percentiles are computed from exact per-request durations. See our slow endpoints and p95 latency guides for specifics.
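The grouping-plus-percentile step is simple enough to sketch by hand. This assumes you already have raw durations keyed by route pattern; the nearest-rank method below is one common percentile definition (APMs may use interpolation instead), and the sample durations are made up.

```php
<?php
// Sketch: p50/p95 from raw per-request durations (ms), keyed by route pattern.
// Nearest-rank percentile: the smallest value covering p% of the samples.

function percentile(array $durations, float $p): float
{
    sort($durations);
    $index = (int) ceil(($p / 100) * count($durations)) - 1;

    return $durations[max(0, $index)];
}

$byRoute = [
    '/api/orders/{id}' => [42.0, 51.3, 48.9, 230.1, 47.5, 55.0, 49.2, 610.4, 44.8, 52.6],
];

foreach ($byRoute as $route => $durations) {
    printf(
        "%s p50=%.1fms p95=%.1fms\n",
        $route,
        percentile($durations, 50),
        percentile($durations, 95)
    );
}
// → /api/orders/{id} p50=49.2ms p95=610.4ms
```

Note how two outliers drag p95 to 610ms while p50 sits near 49ms — exactly the gap that per-endpoint percentile tracking is meant to expose.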

Should I include network latency in my API latency budget?

Depends on who owns the budget. For API SLO purposes — what you promise consumers — measure from the consumer's perspective, network included. For internal bottleneck hunting, measure server-side only: the part your code controls. A reasonable split: allocate about 100ms for network (variable), the rest for server-side work.

How do I find which query in a slow API endpoint is the culprit?

Per-request trace view. Any APM that records per-span detail (DB queries, cache calls, outgoing HTTP) shows exactly which span ate the latency budget. A /api/orders/{id} at a p95 of 2s with 1.8s spent in SELECT * FROM orders JOIN order_items is a clear N+1 or missing-index problem. NightOwl's request drilldown renders this waterfall.

Does API versioning affect latency monitoring?

Yes — version-prefixed routes (/v1/orders vs /v2/orders) group separately in APM dashboards. If you're rolling out a new version behind a flag, you can compare v1 vs v2 p95 directly to catch regressions before full cutover.

How do I monitor API latency from the consumer side?

Synthetic monitoring — a tool that hits your API on a schedule from multiple regions and records response times. Postman API Observability, Better Uptime (API checks), and UptimeRobot all do this. Server-side APM tells you how your code performs; synthetic monitoring tells you what consumers observe, network included.

PRICING

Flat pricing. No event caps. No per-seat fees.

14-day free trial, no credit card. Your PostgreSQL, your data.

HOBBY

$5 /month

1 app · 14 days lookback · all Laravel events

TEAM

$15 /month

Up to 3 connected apps · unlimited environments · all Laravel events

AGENCY

$69 /month

Unlimited apps · unlimited agent instances · same flat rate at any traffic
