[ GUIDE ]

Laravel memory usage per request

How to measure peak memory, find the routes that spike, and fix the patterns that burn your PHP memory_limit.

QUICK ANSWER

How do I measure memory usage per Laravel request?

Use memory_get_peak_usage(true) in a terminable middleware to capture the full request's peak in bytes. Record it alongside the route pattern, controller action, and response time, then aggregate by route to find hot spots. Typical Laravel API requests sit at 8-24 MB; page renders at 16-48 MB; routes consistently above 100 MB need attention. An APM agent like NightOwl records peak memory per request automatically.

Updated · 2026-04-13

Measuring peak memory per request

PHP exposes two functions:

  • memory_get_usage(true) — current memory allocated to the script, in bytes
  • memory_get_peak_usage(true) — high-water mark since script start, in bytes

Capture the peak in a terminable middleware, whose terminate method runs after the response has been sent to the client:

app/Http/Middleware/RecordMemoryUsage.php

php
namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Log;
use Symfony\Component\HttpFoundation\Response;

class RecordMemoryUsage
{
    public function handle(Request $request, Closure $next): Response
    {
        return $next($request);
    }

    public function terminate(Request $request, Response $response): void
    {
        $peakMb = round(memory_get_peak_usage(true) / 1024 / 1024, 2);

        if ($peakMb > 80) {
            Log::warning('High-memory request', [
                'route' => $request->route()?->uri(),
                'method' => $request->method(),
                'peak_mb' => $peakMb,
                'user_id' => auth()->id(),
            ]);
        }
    }
}

Common memory-heavy patterns

Unbounded eager loading

php
// Bad — materializes users × posts × comments × authors in memory
$users = User::with('posts.comments.author')->get();

// Better — paginate and process in chunks
User::with('posts.comments.author')->chunkById(100, function ($users) {
    foreach ($users as $user) {
        // ...
    }
});

// Best — one row at a time, when you don't need eager-loaded relations
// (cursor() runs a single query and cannot eager load relationships)
foreach (User::cursor() as $user) {
    // Loads one row at a time
}

Reading large files into memory

php
// Bad — whole file in memory
$content = file_get_contents(storage_path('big-export.csv'));

// Better — stream line by line
$handle = fopen(storage_path('big-export.csv'), 'r');
while (($line = fgets($handle)) !== false) {
    // ...
}
fclose($handle);

// Laravel wrapper — stream a response
return response()->stream(function () {
    foreach (User::cursor() as $user) {
        echo $user->id . "\n";
    }
}, 200, ['Content-Type' => 'text/plain']);

Blade rendering huge collections

@foreach ($rows as $row) in a Blade view with 50,000 rows renders every row to an in-memory string before sending the response. Paginate on the server side, not in Blade.
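A minimal sketch of the server-side fix, assuming a hypothetical Report model and a standard resource controller (the model name and page size are placeholders):

php
// Controller: let the database do the slicing. Only one page of
// rows is ever materialized in memory, however large the table.
public function index()
{
    return view('reports.index', [
        'rows' => \App\Models\Report::latest()->paginate(50),
    ]);
}

In the Blade view, iterate $rows exactly as before and render {{ $rows->links() }} for the pager.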

Finding your memory hot spots

Aggregate peak memory per route pattern and sort by p95. The routes at the top are where to focus. Typical targets to investigate:

  • Any route with p95 peak > 64 MB
  • Routes whose p95 peak has grown 2x+ over the last 30 days (data growth outpacing pagination)
  • Routes that hit memory_limit occasionally (shows up as 500 errors with Allowed memory size of X bytes exhausted)
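The aggregation itself is simple enough to sketch in plain PHP. Here $samples is assumed to be one entry per recorded request (route pattern plus peak in MB), pulled from wherever your middleware logged them:

```php
<?php
// Group recorded samples by route and compute the p95 peak for each,
// heaviest routes first. Uses the nearest-rank method for p95.
function p95PeakByRoute(array $samples): array
{
    $byRoute = [];
    foreach ($samples as $s) {
        $byRoute[$s['route']][] = $s['peak_mb'];
    }

    $result = [];
    foreach ($byRoute as $route => $peaks) {
        sort($peaks);
        // Nearest-rank p95: index ceil(0.95 * n) - 1
        $idx = (int) ceil(0.95 * count($peaks)) - 1;
        $result[$route] = $peaks[$idx];
    }

    arsort($result); // sort routes by p95 peak, descending
    return $result;
}
```

The top keys of the returned array are your hot spots; in production you'd run the same aggregation in SQL over a dedicated table rather than in PHP.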

Octane is different

If you're on Octane (Swoole, RoadRunner, FrankenPHP) the worker process persists across requests. Memory-per-request is less meaningful than memory growth over the worker's lifetime. See the Octane monitoring guide for the specifics.
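The lifetime-growth idea can be sketched framework-free: sample memory at each request boundary and watch the trend. In Octane you would call sample() from a middleware's terminate hook; the class name here is an illustration, not an Octane API:

```php
<?php
// Sketch: track a long-lived worker's memory across request
// boundaries. A steadily positive growth figure across many
// requests suggests state is accumulating somewhere.
final class WorkerMemoryTracker
{
    /** @var int[] bytes sampled after each request */
    private array $samples = [];

    public function sample(): void
    {
        $this->samples[] = memory_get_usage(true);
    }

    // Growth between the first and most recent sample, in bytes.
    public function growthBytes(): int
    {
        if (count($this->samples) < 2) {
            return 0;
        }
        return end($this->samples) - $this->samples[0];
    }
}
```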

THE EASY WAY

NightOwl records peak memory per request automatically

Every request recorded includes peak memory alongside route, duration, and queries. The requests dashboard aggregates by route with p95 memory per pattern — the hot spots surface themselves. No middleware to write.

bash
composer require nightowl/agent
php artisan nightowl:install

From $5/month flat. Data in your PostgreSQL.

Frequently asked questions

How do I measure memory usage per Laravel request?

PHP's memory_get_peak_usage(true) returns the high-water mark of allocated memory for the current process, in bytes. Record it in a middleware that runs after the response is produced to capture the full request's peak. The true argument reports real memory allocated from the system by PHP's memory manager, not just what your userland code is actively using — and that is the figure memory_limit is enforced against.

Why is Laravel memory usage high on some routes?

The top three culprits: (1) unbounded Eloquent eager loading — loading User::with('posts.comments.author') across 1,000 users materializes hundreds of thousands of objects, (2) large file uploads held in memory — use streaming uploads, (3) Blade template rendering with large collections — pagination or lazy loading fixes it. Profile with a per-request memory recorder before guessing.

What's a normal memory footprint for a Laravel request?

Rough ranges on PHP 8.2+: API endpoints 8-24 MB, page renders 16-48 MB, file-upload or export endpoints 32-256 MB depending on payload size. PHP's default memory_limit is 128M — requests consistently near or above 100M need optimization even if they complete. Octane changes these numbers (memory accumulates across requests in a worker).
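Comparing a recorded peak against memory_limit is easy to automate, but ini_get returns the limit in php.ini shorthand (128M, 1G), so you need a small converter first. A sketch; the 80% threshold is an arbitrary choice:

```php
<?php
// Convert php.ini shorthand ("128M", "1G", "-1", "524288") to bytes
// so a recorded peak can be compared against the configured limit.
function memoryLimitBytes(string $limit): int
{
    if ($limit === '-1') {
        return PHP_INT_MAX; // unlimited
    }
    $unit = strtoupper(substr($limit, -1));
    $value = (int) $limit;
    return match ($unit) {
        'G' => $value * 1024 ** 3,
        'M' => $value * 1024 ** 2,
        'K' => $value * 1024,
        default => $value, // plain byte count, no suffix
    };
}

// Example: flag a request that peaked above 80% of the limit
$limit = memoryLimitBytes(ini_get('memory_limit'));
$nearLimit = memory_get_peak_usage(true) > 0.8 * $limit;
```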

How do I find which Laravel route causes the most memory pressure?

Record peak memory per request with the route pattern. Aggregate by route and sort by p95 peak memory. The top of that list is where to focus. If you don't have an APM doing this, a middleware that writes route + memory_get_peak_usage to a dedicated table gets you there in an afternoon.

How do I fix Laravel memory leaks in long-running processes?

In traditional PHP-FPM setups each request gets a fresh process, so 'leaks' aren't usually a problem — the process dies. In Octane, Swoole, or queue workers, memory accumulates. Fixes: (1) avoid static caches that grow unbounded, (2) chunk large Eloquent queries with chunkById() instead of get()->each(), (3) disable the query log (DB::disableQueryLog()) in long-running workers so it doesn't retain every executed query, (4) restart workers periodically (Horizon does this for you).

Does Laravel Octane change memory monitoring?

Yes. Octane keeps workers alive across requests, so memory-per-request is less meaningful — you care about memory growth over a worker's lifetime. Monitor memory_get_usage() at request start and end; a persistently growing delta indicates a leak. Octane's max_requests setting recycles workers periodically as a safety net.

What's memory_get_peak_usage vs memory_get_usage?

memory_get_usage() returns current memory allocated; memory_get_peak_usage() returns the high-water mark since the script started. For per-request monitoring you want peak. Current is mainly useful for tracking growth in long-running workers, where the peak never resets between requests (unless you call memory_reset_peak_usage(), available since PHP 8.2).

Should I alert on Laravel memory spikes?

Yes — but at the right granularity. Don't alert on every 100MB request. Alert when p95 peak memory for a specific route pattern exceeds a baseline by 2x, sustained over 5+ minutes. Memory spikes are usually correlated with specific inputs (large exports, user-submitted files) and warrant investigation even when requests complete.
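The rule above reduces to a small predicate: fire only when every recent p95 window for a route is at least 2x its baseline. A sketch, where the baseline and window values are assumed to come from your own recorded data:

```php
<?php
// Sustained-spike check: alert only when *all* recent p95 windows
// for a route exceed the baseline by the given factor. One calm
// window resets the alert, so single large requests don't page you.
function shouldAlert(float $baselineP95Mb, array $recentP95Mb, float $factor = 2.0): bool
{
    if ($recentP95Mb === []) {
        return false; // no data, nothing to alert on
    }
    foreach ($recentP95Mb as $p95) {
        if ($p95 < $factor * $baselineP95Mb) {
            return false;
        }
    }
    return true;
}
```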

PRICING

Flat pricing. No event caps. No per-seat fees.

14-day free trial, no credit card. Your PostgreSQL, your data.

HOBBY

$5 /month

1 app · 14 days lookback · all Laravel events

TEAM

$15 /month

Up to 3 connected apps · unlimited environments · all Laravel events

AGENCY

$69 /month

Unlimited apps · unlimited agent instances · same flat rate at any traffic

Related