[ GLOSSARY ]

What is the Apdex Score?

QUICK ANSWER

What is the Apdex score?

Apdex (Application Performance Index) is a 0 to 1 score summarizing user satisfaction with application responsiveness. You pick a threshold T — the latency under which users are 'satisfied.' Requests faster than T count fully, requests between T and 4T count as half, and requests slower than 4T count as zero. The formula is (satisfied + tolerating/2) / total.

Updated · 2026-04-13

The formula

text
         satisfied + (tolerating / 2)
Apdex = ──────────────────────────────
                  total

where:
  satisfied  = requests that completed in ≤ T
  tolerating = requests that completed in ≤ 4T but > T
  frustrated = requests that completed in > 4T (counted as 0)

If you pick T = 500ms, then a request that finishes in 300ms is satisfied, 1,500ms is tolerating, and 5,000ms is frustrated.
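The bucketing rule is easy to encode. A minimal sketch in Python, using the T = 500ms example above (the function name is illustrative):

```python
def apdex_bucket(latency_ms: float, t_ms: float) -> str:
    """Classify one request against the Apdex threshold T (milliseconds)."""
    if latency_ms <= t_ms:
        return "satisfied"
    if latency_ms <= 4 * t_ms:
        return "tolerating"
    return "frustrated"

# With T = 500ms, as in the example above:
print(apdex_bucket(300, 500))   # satisfied
print(apdex_bucket(1500, 500))  # tolerating
print(apdex_bucket(5000, 500))  # frustrated
```

Note the boundaries: a request at exactly T is satisfied, and one at exactly 4T is still tolerating.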

A worked example

10,000 requests over an hour with T = 500ms:

text
Satisfied  (≤ 500ms):    8,200 requests
Tolerating (500-2000ms): 1,500 requests
Frustrated (> 2000ms):     300 requests
Total:                  10,000 requests

Apdex = (8200 + 1500/2) / 10000
      = (8200 + 750) / 10000
      = 0.895

Verdict: "Good" (0.85 - 0.93 range)
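The same arithmetic, checked in Python with the counts from the worked example:

```python
# Counts from the worked example (T = 500ms)
satisfied, tolerating, frustrated = 8200, 1500, 300
total = satisfied + tolerating + frustrated

apdex = (satisfied + tolerating / 2) / total
print(apdex)  # 0.895
```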

Conventional Apdex ratings

Apdex score    Rating
0.94 - 1.00    Excellent
0.85 - 0.93    Good
0.70 - 0.84    Fair
0.50 - 0.69    Poor
< 0.50         Unacceptable

How to pick T

T should reflect what users find responsive for this class of request:

  • Plain API endpoints: 200-500ms
  • Web pages with render: 1-2s
  • Dashboard-style interactive views: 1-3s
  • Report generation, large data exports: 5-10s
  • Background async jobs: Apdex usually doesn't apply — use duration percentiles

Pick T per endpoint class. A single global T gives you a number but destroys the signal.
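One way to wire that up is a per-class threshold map. A sketch in Python; the class names and T values here are illustrative, not a fixed API:

```python
# Hypothetical per-endpoint-class thresholds (values are illustrative)
APDEX_T_MS = {
    "api": 300,        # plain API endpoints
    "page": 1500,      # web pages with render
    "dashboard": 2000, # interactive dashboard views
    "report": 8000,    # report generation / large exports
}

def apdex_for(latencies_ms, endpoint_class):
    """Compute Apdex for a batch of latencies using the class's own T."""
    t = APDEX_T_MS[endpoint_class]
    satisfied = sum(1 for l in latencies_ms if l <= t)
    tolerating = sum(1 for l in latencies_ms if t < l <= 4 * t)
    return (satisfied + tolerating / 2) / len(latencies_ms)
```

The same batch of latencies scores very differently under "api" and "report" thresholds, which is exactly why a single global T muddies comparisons.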

Why percentiles often beat Apdex

Apdex compresses two numbers (how many were fast, how many were frustrated) into one. That's useful for execs — "site health is 0.91" — but bad for debugging. An Apdex of 0.85 could mean "most requests are fast, a few are catastrophic" or "most requests are mediocre, none are catastrophic." These need different fixes.

Raw percentiles (p50, p95, p99) expose the shape of the distribution. See our p95 vs p99 explainer.
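The ambiguity is easy to demonstrate. The two synthetic latency sets below share the same Apdex at T = 500ms but have very different tails (the data is contrived for illustration):

```python
import math

def apdex(latencies_ms, t_ms):
    satisfied = sum(1 for l in latencies_ms if l <= t_ms)
    tolerating = sum(1 for l in latencies_ms if t_ms < l <= 4 * t_ms)
    return (satisfied + tolerating / 2) / len(latencies_ms)

def percentile(latencies_ms, p):
    # Nearest-rank percentile over the sorted sample
    s = sorted(latencies_ms)
    return s[max(0, math.ceil(p / 100 * len(s)) - 1)]

fast_with_outliers = [100] * 95 + [5000] * 5   # most fast, a few catastrophic
uniformly_mediocre = [400] * 90 + [1500] * 10  # most mediocre, none catastrophic

print(apdex(fast_with_outliers, 500))      # 0.95
print(apdex(uniformly_mediocre, 500))      # 0.95, identical score
print(percentile(fast_with_outliers, 99))  # 5000
print(percentile(uniformly_mediocre, 99))  # 1500, very different tail
```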

Apdex in a Laravel APM

Most APMs compute Apdex per route over a rolling window. NightOwl shows both Apdex and p50/p95/p99 side by side per route, so you can use Apdex for executive summaries and percentiles for debugging.

Frequently asked questions

What is a good Apdex score?

Industry convention: 1.0 is perfect, 0.94+ excellent, 0.85-0.93 good, 0.70-0.84 fair, 0.50-0.69 poor, below 0.50 unacceptable. Scores depend entirely on the threshold you pick for T — a 500ms T on an API and a 2s T on a dashboard produce very different numbers for the same underlying latency.

How do I pick the Apdex threshold T?

T should represent the latency under which users feel the app is responsive. For web APIs, 200-500ms. For interactive dashboards, 1-2s. For report generation, 5-10s. Pick per endpoint class, not globally — a single T across all routes muddies the signal.

Is Apdex still relevant in 2026?

It's fallen out of favor compared to raw percentiles (p95, p99) because Apdex compresses nuance into one number. It's still useful as a single-digit executive summary ('site health = 0.91') and for historical comparison. Most modern APMs show both Apdex and percentiles side by side.

How is Apdex different from p95 latency?

p95 tells you the latency for the 95th-percentile request — an absolute number in milliseconds. Apdex tells you the fraction of requests that were acceptable, weighted by a tolerance threshold. p95 gives you precision; Apdex gives you a comparable score across services. They answer different questions.

Did NewRelic invent Apdex?

No. Apdex was defined in 2004 by the Apdex Alliance, an industry group led by Peter Sevcik of NetForecast, to give executives a simple 0-1 score for application quality. New Relic, founded in 2008, didn't invent it but popularized it by making Apdex a headline metric in its APM. Before Apdex, APMs showed raw latency histograms — accurate but hard to compare across services or communicate to non-engineers.
