[ GUIDE ]

How to find slow endpoints in Laravel production

From Nginx access logs to full per-request tracing — four layers, and how to choose.

QUICK ANSWER

How do I find the slowest endpoints in a Laravel app?

Enable request-time logging in your web server as a baseline, add a Laravel middleware that records controller action and duration, then install an APM that aggregates by route pattern with p95 trending. The critical move is grouping by route pattern (like /orders/{id}) rather than raw URL, and tracking p95 not average. NightOwl and Laravel Nightwatch Cloud do this aggregation automatically.

Updated · 2026-04-13

Layer 1 — Web server access logs

Nginx and Apache can record request duration. Free, always-on, zero Laravel code.

nginx.conf

nginx
log_format timing '$remote_addr - [$time_iso8601] '
                  '"$request" $status $body_bytes_sent '
                  'rt=$request_time uct="$upstream_connect_time" '
                  'uht="$upstream_header_time" urt="$upstream_response_time"';

access_log /var/log/nginx/access.log timing;

Sort the log by $request_time to find slow requests. What you're missing: controller name, user context, query breakdown, per-route aggregation. Fine for a quick baseline, useless for root cause.

Layer 2 — Middleware timing

A Laravel middleware records start/end time and the resolved route. You get controller-aware timing without a full APM.

app/Http/Middleware/MeasureRequestTime.php

php
<?php

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Log;

class MeasureRequestTime
{
    public function handle(Request $request, Closure $next)
    {
        $start = microtime(true);
        $response = $next($request);
        $duration = (microtime(true) - $start) * 1000;

        if ($duration > 500) {
            Log::warning('Slow request', [
                'route' => $request->route()?->uri(),
                'controller' => $request->route()?->getActionName(),
                'method' => $request->method(),
                'duration_ms' => round($duration, 2),
                'status' => $response->getStatusCode(),
                'user_id' => auth()->id(),
            ]);
        }

        return $response;
    }
}

Register globally in bootstrap/app.php. Good for logging outliers. Missing: aggregation. A single slow request doesn't tell you much — you want to know which route pattern is consistently slow at p95.
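Registration depends on your Laravel version. On Laravel 11+, a sketch of bootstrap/app.php using the default application builder (on Laravel 10 and earlier, add the class to the global middleware in app/Http/Kernel.php instead):

```php
<?php
// bootstrap/app.php (Laravel 11+)

use App\Http\Middleware\MeasureRequestTime;
use Illuminate\Foundation\Application;
use Illuminate\Foundation\Configuration\Middleware;

return Application::configure(basePath: dirname(__DIR__))
    ->withRouting(
        web: __DIR__.'/../routes/web.php',
    )
    ->withMiddleware(function (Middleware $middleware) {
        // Run the timer on every request.
        $middleware->append(MeasureRequestTime::class);
    })
    ->create();
```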

Layer 3 — Route aggregation with p95

The real unlock is aggregating by normalized route pattern. One record per request, grouped by the pattern from $request->route()?->uri(), with p95 and p99 percentiles computed over a time window.

Metrics to surface per route:

  • Request count
  • Error rate (5xx %)
  • p50 / p95 / p99 duration
  • Throughput (requests/min)
  • Average DB time / external HTTP time per request (requires an instrumentation layer)
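With one row per request stored in PostgreSQL (where NightOwl keeps its data), most of these metrics are a single query. A sketch, assuming a hypothetical requests table with route, status, duration_ms, and created_at columns:

```sql
-- Hypothetical schema: requests(route, method, status, duration_ms, created_at)
SELECT
  route,
  count(*)                                                  AS requests,
  round(100.0 * count(*) FILTER (WHERE status >= 500)
        / count(*), 2)                                      AS error_rate_pct,
  percentile_cont(0.50) WITHIN GROUP (ORDER BY duration_ms) AS p50_ms,
  percentile_cont(0.95) WITHIN GROUP (ORDER BY duration_ms) AS p95_ms,
  percentile_cont(0.99) WITHIN GROUP (ORDER BY duration_ms) AS p99_ms
FROM requests
WHERE created_at > now() - interval '24 hours'
GROUP BY route
ORDER BY p95_ms DESC;
```

percentile_cont is a PostgreSQL ordered-set aggregate; on MySQL you would approximate with window functions instead.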

Layer 4 — Per-request trace correlation

Once you've identified that /orders/{id} is slow at p95, the next question is why. You need per-request traces that break down the slow request into its component spans: DB queries, cache calls, external HTTP, view rendering.

The Laravel Nightwatch package records every span per request with a shared trace ID. NightOwl and Nightwatch Cloud both consume this data — you drill from slow route to slow request to slow query in three clicks.
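A minimal sketch of the underlying idea (not the Nightwatch implementation itself): tag each request with a trace ID via Laravel's shared log context, then record every query under that same ID with DB::listen.

```php
<?php
// Sketch only: correlate requests and queries with a shared trace ID.

use Illuminate\Database\Events\QueryExecuted;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Log;
use Illuminate\Support\Str;

// In a global middleware, before calling $next($request):
Log::withContext(['trace_id' => (string) Str::uuid()]);

// In AppServiceProvider::boot(): every query now logs under that trace_id.
DB::listen(function (QueryExecuted $query) {
    Log::debug('db.query', [
        'sql'     => $query->sql,
        'time_ms' => $query->time,
    ]);
});
```

Grep your logs for one trace_id and you get the full query list for that request; an APM does the same join for you, with aggregation on top.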

Fix priorities — what to tackle first

  1. High-traffic routes with high p95 — biggest total pain. Often N+1s or missing indexes.
  2. Low-traffic routes with catastrophic tail — /api/export, /admin/report. Users don't hit them often, but when they do they hurt.
  3. Routes trending slower over time — data growth outpacing indexes. Profile the plan.
  4. Routes blocked on external HTTP — move to async/queued work. See the outgoing HTTP guide.
  5. Routes with high error rate AND high p95 — failing slowly, worst possible combination.

THE EASY WAY

NightOwl aggregates every request by route with p95 and full trace drilldown

The requests dashboard groups by route pattern with count, p95 / p99, and error rate. Click a route to see its slowest requests. Click a request to see every DB query, cache call, and HTTP span. All built on the official laravel/nightwatch instrumentation, which adds under 1ms of overhead to the request path.

bash
composer require nightowl/agent
php artisan nightowl:install

From $5/month flat. Data in your PostgreSQL.

Frequently asked questions

How do I find the slowest routes in a Laravel app?

Three progressive options. (1) Enable access logs with request timing in your web server (Nginx's $request_time or Apache's %D) and grep for slow ones — free, coarse. (2) Use a middleware that records controller action + duration to your database or log channel — gives you per-route aggregation with a bit of code. (3) Install an APM that records every request with route, controller, duration, and status, grouped by pattern — this is what NightOwl and Laravel Nightwatch Cloud do. Grouping by route pattern (not raw URL) is the non-obvious bit.

What's the difference between a slow request and a slow endpoint?

A slow request is a single event — /api/orders/742 took 2.3s. A slow endpoint is an aggregate pattern — the /api/orders/{id} route sits at 1.8s p95 over the last 24 hours. Request-level tells you what happened; endpoint-level tells you what's broken. You need aggregation by route pattern, grouping /api/orders/742 and /api/orders/891 together.

Should I measure average or p95 latency for Laravel endpoints?

P95, always. Average is misleading — a route that's fast 99% of the time and catastrophically slow 1% of the time looks fine on average but causes the real user pain. P95 (95th percentile) tells you 'the slow 5% of requests take at least this long', which is what users actually perceive. P99 is stricter; useful for SLO tracking.

How do I monitor Laravel endpoint performance in production without affecting performance?

The instrumentation needs to be out-of-process and async. Writing timing data to your primary database from a middleware adds latency on every request. Writing to a separate log or a local socket that a worker reads is cheap. Laravel's nightwatch package uses an out-of-process TCP agent for exactly this reason — instrumentation overhead is under 1ms per request.

How can I correlate slow endpoints to slow database queries?

You need per-request trace context linking HTTP requests to DB queries. A middleware generates a trace ID; the query listener records it alongside SQL. When you drill into a slow /api/orders request, you see every query it fired. Laravel Telescope does this locally; NightOwl and Nightwatch Cloud do it in production with aggregation across traffic.

What counts as a 'slow' Laravel endpoint?

Depends on the endpoint. A page render at 500ms is OK, a webhook receiver at 500ms is awful, a background export at 500ms is excellent. As rough defaults: p95 under 200ms for UI render endpoints, under 100ms for API endpoints that block a UI, under 500ms for everything else. Set per-endpoint SLOs rather than a global threshold.

Do Nginx/Apache access logs help find slow endpoints?

They help — they're cheap and always on — but they miss context. You see that /orders/742 took 2s but not why. No controller name, no query breakdown, no user context. Use them as a baseline or for non-Laravel traffic (static assets, redirects), then layer APM on top for actionable drilling.

What are the most common causes of slow Laravel endpoints?

In descending frequency: N+1 queries (easily the #1 culprit), missing database indexes, synchronous external API calls in the request path, full-table scans on growing tables, and unbounded eager loads (with() everything). Less common but dramatic: cache misses on hot paths, over-eager Blade partials, and serialization of huge Eloquent collections.
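The N+1 shape, for reference, with a hypothetical Order model and customer relation:

```php
<?php
// Hypothetical Order model with a customer() relation.

// N+1: one query for the orders, then one more per order.
$orders = Order::all();
foreach ($orders as $order) {
    echo $order->customer->name; // lazy-loads customer on each iteration
}

// Fixed: two queries total, regardless of row count.
$orders = Order::with('customer')->get();
```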

PRICING

Flat pricing. No event caps. No per-seat fees.

14-day free trial, no credit card. Your PostgreSQL, your data.

HOBBY

$5 /month

1 app · 14 days lookback · all Laravel events

TEAM

$15 /month

Up to 3 connected apps · unlimited environments · all Laravel events

AGENCY

$69 /month

Unlimited apps · unlimited agent instances · same flat rate at any traffic

Related