Why Reverb doesn't fit normal APM
Reverb is a long-running ReactPHP process serving WebSocket connections. It doesn't handle HTTP requests in the usual sense. APMs that record per-request telemetry see the HTTP-side broadcast dispatches (when your Laravel app calls broadcast()) but not the WebSocket-side message delivery, connections, or per-client state.
You need two layers:
- HTTP side (dispatch): your normal APM catches `broadcast()` calls and their context
- Reverb worker side (delivery): process-level monitoring of memory, connections, throughput, and errors
Reverb's built-in /stats endpoint
```php
// config/reverb.php

'apps' => [
    [
        'app_id' => env('REVERB_APP_ID'),
        'key' => env('REVERB_APP_KEY'),
        'secret' => env('REVERB_APP_SECRET'),

        // Enable the stats endpoint
        'path' => '',
    ],
],
```
Scrape the stats over HTTP:

```
GET /apps/{app_id}/stats?auth_key=...&auth_signature=...
```

The response is JSON with active connections, messages sent, memory usage, and subscribed channels. Scrape it with Prometheus or forward it to your APM periodically.
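Reverb speaks the Pusher protocol, so the `auth_signature` query parameter follows Pusher's HMAC signing scheme. A minimal sketch, assuming HMAC-SHA256 over the method, path, and alphabetically sorted query string (the app ID, key, and secret are placeholders):

```php
<?php
// Sign a Pusher-protocol HTTP request: HMAC-SHA256 over
// "METHOD\n/path\nsorted query string", hex-encoded.
function signRequest(string $method, string $path, array $params, string $secret): string
{
    ksort($params); // the protocol requires alphabetically sorted params
    $query = urldecode(http_build_query($params));

    return hash_hmac('sha256', "{$method}\n{$path}\n{$query}", $secret);
}

$params = [
    'auth_key'       => 'your-app-key',     // placeholder
    'auth_timestamp' => time(),
    'auth_version'   => '1.0',
];
$params['auth_signature'] = signRequest('GET', '/apps/12345/stats', $params, 'your-app-secret');

// Then: GET /apps/12345/stats?{http_build_query($params)}
```

Wire this into whatever scrapes the endpoint so the signature is regenerated per request (the timestamp is part of the signed string).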
Detecting memory leaks
Reverb workers are long-lived. Leaks accumulate slowly until the process OOMs or degrades:
Supervisor config with memory cap:

```ini
[program:reverb]
process_name=%(program_name)s
command=php /home/forge/app.com/artisan reverb:start
autostart=true
autorestart=true
user=forge
stdout_logfile=/home/forge/.forge/reverb.log
stopwaitsecs=3600
; Kill + restart if memory exceeds 512MB.
; Supervisor doesn't enforce memory limits directly; use a wrapper
; script or a Prometheus alert + manual restart.
```
Alternative: schedule a nightly `php artisan reverb:restart` as a leak safety net; Supervisor's `autorestart` brings the worker back up with a fresh heap.
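The wrapper-script approach can be sketched as a PHP script run from cron every few minutes. The 512 MB cap, the `pgrep` pattern, and the Supervisor program name `reverb` are assumptions to adapt to your setup:

```php
#!/usr/bin/env php
<?php
// Watchdog: restart Reverb via Supervisor when resident memory exceeds a cap.

function exceedsCap(int $rssKb, int $capMb): bool
{
    // ps reports RSS in kilobytes
    return $rssKb > $capMb * 1024;
}

$capMb = 512; // assumed cap; keep in sync with your alert threshold

// Find Reverb worker PIDs (pattern is an assumption about your command line)
$pids = array_filter(explode("\n", trim((string) shell_exec("pgrep -f 'reverb:start'"))));

foreach ($pids as $pid) {
    $rssKb = (int) trim((string) shell_exec('ps -o rss= -p ' . (int) $pid));

    if (exceedsCap($rssKb, $capMb)) {
        // Supervisor respawns the process after the restart
        shell_exec('supervisorctl restart reverb');
        break;
    }
}
```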
Tracking broadcast dispatch from Laravel
The HTTP-side of broadcasting fires Laravel events you can hook:
```php
// app/Providers/EventServiceProvider.php

use Illuminate\Broadcasting\BroadcastEvent;
use Illuminate\Support\Facades\Event;

public function boot(): void
{
    Event::listen(function (BroadcastEvent $event) {
        logger()->info('Broadcasting', [
            'event'    => get_class($event->event),
            'channels' => $event->event->broadcastOn(),
        ]);
    });
}
```

Scaling Reverb horizontally
Multiple Reverb processes coordinate via Redis pub/sub. Monitor both:
- Redis pub/sub channel subscribers (should match Reverb process count)
- Redis memory (Reverb's in-flight message buffer lives here)
- Per-process connection distribution (load balancer should spread evenly)
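The first bullet can be checked directly against Redis. A minimal sketch: a helper that flags a subscriber deficit, with hypothetical usage via phpredis's `rawCommand` (the channel name `reverb` is an assumption; inspect `PUBSUB CHANNELS` on your server to find the real one):

```php
<?php
// A deficit means at least one Reverb process has lost its
// Redis pub/sub subscription and will miss broadcasts.
function subscriberDeficit(int $subscribers, int $expectedProcesses): int
{
    return max(0, $expectedProcesses - $subscribers);
}

// Hypothetical usage (requires ext-redis; channel name is an assumption):
// $redis = new Redis();
// $redis->connect('127.0.0.1', 6379);
// [, $subscribers] = $redis->rawCommand('PUBSUB', 'NUMSUB', 'reverb');
// if (subscriberDeficit((int) $subscribers, 4) > 0) {
//     // alert: a Reverb worker is not subscribed
// }
```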
Alerting thresholds
| Signal | Alert threshold |
|---|---|
| Active connections per process | > 80% of max_connections |
| Memory per process | > 2x baseline, sustained 1h |
| Broadcast error rate | > 1% |
| Worker crashes | any, investigate |
| Connection reset spikes | > 3x baseline |
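If you export the `/stats` payload as Prometheus gauges, the first two rows might translate into rules like these. The metric names (`reverb_connections_active`, `reverb_connections_max`, `reverb_memory_bytes`) are assumptions: use whatever your exporter actually emits.

```yaml
groups:
  - name: reverb
    rules:
      - alert: ReverbConnectionsNearCap
        expr: reverb_connections_active / reverb_connections_max > 0.8
        for: 5m
      - alert: ReverbMemoryHigh
        # threshold assumes a ~512MB baseline, so 2x baseline = 1GiB
        expr: avg_over_time(reverb_memory_bytes[5m]) > 1073741824
        for: 1h
```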
RELATED
- Laravel memory usage per request
- Laravel Octane monitoring — similar persistent-worker concerns