[ GUIDE ]

How to Monitor Slow Queries in Laravel Production

Three layers — database slow logs, Laravel query hooks, and a proper APM — and when each is the right tool.

QUICK ANSWER

How do I find slow queries in Laravel?

Use Laravel's DB::whenQueryingForLongerThan() to catch requests whose cumulative query time runs long, enable your database's slow query log (MySQL's slow_query_log or Postgres's log_min_duration_statement) as a safety net, and install an APM that groups queries by normalized fingerprint and tracks p95 duration per pattern. NightOwl captures every query per request with full trace context.

Updated · 2026-04-13

Layer 1 — Database slow-query log

Your database already knows which queries were slow. Turn the log on as a baseline.

PostgreSQL — postgresql.conf

ini
log_min_duration_statement = 500  # log anything over 500ms
log_line_prefix = '%m [%p] %q%u@%d '
log_statement = 'none'

MySQL — my.cnf

ini
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 0.5
log_queries_not_using_indexes = 1

Cheap, reliable, catches everything. Missing: request context. You get the SQL but not which controller fired it or what the user was doing. Useful for capacity planning and index design; not actionable for "the /orders page is slow."
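The raw log file is hard to read at volume. MySQL ships mysqldumpslow to aggregate it by normalized statement, and on the Postgres side pgbadger does the same for log_min_duration_statement output. A quick sketch (paths assume the configs above):

```shell
# Top 10 statement patterns from the MySQL slow log, sorted by total time (-s t)
mysqldumpslow -s t -t 10 /var/log/mysql/slow.log

# Postgres: aggregate logged slow statements into an HTML report
pgbadger /var/log/postgresql/postgresql-*.log -o report.html
```

Percona Toolkit's pt-query-digest is a richer alternative to mysqldumpslow if you already have it installed.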

Layer 2 — Laravel query hooks

Laravel gives you two hooks in AppServiceProvider::boot().

Fire once per request when total query time exceeds 500ms

php
use Illuminate\Support\Facades\DB;

DB::whenQueryingForLongerThan(500, function ($connection, $event) {
    logger()->warning('Slow request — SQL queries exceeded 500ms', [
        'url' => request()->fullUrl(),
        'method' => request()->method(),
        'user_id' => auth()->id(),
    ]);
});

Fire on every query (expensive — sample in production)

php
DB::listen(function ($query) {
    if ($query->time < 100) return;

    logger()->warning('Slow query', [
        'sql' => $query->sql,
        'bindings' => $query->bindings,
        'time_ms' => $query->time,
        'connection' => $query->connectionName,
    ]);
});

DB::listen runs for every query — at 1,000 req/sec with 10 queries per request, that's 10,000 callback invocations per second. Keep the callback cheap (no DB writes, no network calls) and prefer the whenQueryingForLongerThan variant for production-facing logic.
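If you do run DB::listen in production, decide the sample once per request so a sampled request keeps all of its queries together. A sketch for AppServiceProvider::boot() (the 10% rate and 100ms floor are illustrative, not recommendations):

```php
use Illuminate\Support\Facades\DB;

// Sample ~10% of requests; within a sampled request, log only
// queries slower than 100ms. Keep the callback cheap: no DB writes,
// no network calls.
$sampled = random_int(1, 100) <= 10;

DB::listen(function ($query) use ($sampled) {
    if (! $sampled || $query->time < 100) {
        return;
    }

    logger()->warning('Slow query (sampled)', [
        'sql' => $query->sql,
        'bindings' => $query->bindings,
        'time_ms' => $query->time,
    ]);
});
```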

Layer 3 — Full APM with pattern grouping

The real unlock is grouping queries by fingerprint. A raw log says "this query took 800ms at 14:23." Pattern grouping says "this query pattern has fired 840,000 times today with a p95 of 640ms — up from 90ms yesterday."

Fingerprinting normalizes bindings:

sql
-- Raw SQL:
SELECT * FROM orders WHERE user_id = 742 AND status = 'paid' ORDER BY created_at DESC LIMIT 10;
SELECT * FROM orders WHERE user_id = 891 AND status = 'paid' ORDER BY created_at DESC LIMIT 10;

-- Fingerprint (both roll up into one):
SELECT * FROM orders WHERE user_id = ? AND status = ? ORDER BY created_at DESC LIMIT ?;
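A minimal version of this normalization fits in a few lines of PHP. This is a sketch only; production fingerprinters (Percona's pt-fingerprint, APM agents) also handle IN-lists, comments, and dialect-specific quoting:

```php
<?php
// Sketch: replace literal values with placeholders so structurally
// identical queries share one fingerprint key.
function fingerprint(string $sql): string
{
    // Replace single-quoted string literals first (handles '' escapes).
    $sql = preg_replace("/'(?:[^']|'')*'/", '?', $sql);
    // Then bare numeric literals.
    $sql = preg_replace('/\b\d+(?:\.\d+)?\b/', '?', $sql);
    // Collapse whitespace so formatting differences roll up too.
    return trim(preg_replace('/\s+/', ' ', $sql));
}
```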

Options that do this out of the box:

  • Laravel Nightwatch Cloud — official
  • NightOwl — BYOD Postgres dashboard on the same Nightwatch instrumentation
  • Sentry / Scout / New Relic — generic APMs, weaker at Laravel-specific views

What to actually fix first

Sorted by impact:

  1. Queries fired 1,000x+ per request — N+1s. See our N+1 guide.
  2. Patterns with high call count and mid-range p95 — 0.5M calls at 150ms p95 is 75,000 seconds of DB time per day. Missing index, usually.
  3. Individually slow queries (p95 > 1s) — often full table scans or large OFFSETs. Use EXPLAIN ANALYZE to see the plan.
  4. Queries with growing duration trend — healthy yesterday, slow today. Usually the table grew past an index's sweet spot.
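For item 3, the plan usually tells the story. An illustrative Postgres session against the orders pattern from earlier (the index name and column order are assumptions, not a prescription):

```sql
EXPLAIN ANALYZE
SELECT * FROM orders
WHERE user_id = 742 AND status = 'paid'
ORDER BY created_at DESC
LIMIT 10;

-- A "Seq Scan on orders" node with high actual time suggests a missing
-- index; a composite index matching the filter and sort often fixes it:
CREATE INDEX CONCURRENTLY idx_orders_user_status_created
    ON orders (user_id, status, created_at DESC);
```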

THE EASY WAY

NightOwl groups every query by pattern with p95 trending

NightOwl fingerprints every SQL statement, rolls up by pattern, and tracks count + p95 duration over any time range. Drill into a pattern to see the exact requests and bindings. All the instrumentation is the official laravel/nightwatch package — zero runtime impact on request path.

bash
composer require nightowl/agent
php artisan nightowl:install

Data stays in your PostgreSQL. From $5/month flat.

Frequently asked questions

How do I find slow queries in Laravel?

Three layers from cheapest to best: (1) your database's slow-query log (MySQL slow_query_log, Postgres log_min_duration_statement) catches everything at the DB level, (2) Laravel's DB::listen() or whenQueryingForLongerThan() hook lets you log slow queries in-app with request context, (3) an APM like NightOwl that records every query per request with grouping and p95 trending.

How do I log Laravel queries that take longer than N ms?

Use DB::whenQueryingForLongerThan(500, fn($connection, $event) => logger()->warning(...)) in AppServiceProvider. This fires once per request when the cumulative query time exceeds the threshold. For per-query logging, use DB::listen() but note it fires for every query and is expensive at high traffic.

What's the difference between the MySQL slow query log and Laravel-level monitoring?

The MySQL slow query log captures raw SQL with execution time — no request context, no controller, no user. Laravel-level monitoring (DB::listen, APMs) ties queries back to the request that fired them, which is what you need to actually fix the problem. Use both: MySQL's log as a safety net, Laravel-level for actionable insight.

How do I monitor query patterns, not just individual queries?

Group queries by their normalized fingerprint — the SQL text with parameter values replaced by placeholders. Ten thousand SELECT * FROM users WHERE id = ? executions share one fingerprint and roll up into aggregate metrics (count, p95 duration, error rate). NightOwl and Laravel Nightwatch Cloud both do this grouping automatically.

Can I use EXPLAIN on slow queries programmatically?

Yes — DB::select('EXPLAIN ANALYZE '.$sql, $bindings) runs Postgres's query planner (or MySQL's equivalent) and returns the plan. The hard part is capturing the slow query with its bindings in the first place. Slow-query logs capture SQL but not always bindings; DB::listen captures both. Store the plan alongside the slow-query record for later analysis.

What's a reasonable threshold for 'slow' in Laravel?

Anything over 100ms per query deserves attention; anything over 500ms is usually a real problem. But thresholds depend on the endpoint — a 200ms query is fine in a nightly export job and catastrophic in a page render. Measure p95 duration per query pattern, not single-query thresholds.

How do I monitor queries without hurting performance?

Sample. A 10% sample of queries still surfaces every recurring pattern at a tenth of the overhead. DB::listen runs on every query and is expensive — APMs that buffer and sample asynchronously (like NightOwl's ReactPHP agent) impose negligible overhead. Avoid writing to your primary DB from DB::listen callbacks.

PRICING

Flat pricing. No event caps. No per-seat fees.

14-day free trial, no credit card. Your PostgreSQL, your data.

HOBBY

$5 /month

1 app · 14 days lookback · all Laravel events

TEAM

$15 /month

Up to 3 connected apps · unlimited environments · all Laravel events

AGENCY

$69 /month

Unlimited apps · unlimited agent instances · same flat rate at any traffic
