Harshit Purwar

Lighthouse Performance Metrics to Improve User Experience

Performance is key to successful user retention: even a one-second delay in page load diminishes user satisfaction and brand perception. With Google’s growing emphasis on Core Web Vitals, we decided to prioritize Lighthouse optimization across our front-end portfolio: 40+ applications, including 10+ monolithic apps and 30+ single-page applications, serving millions of concurrent users.

Our landing route alone handled 0.7 million visits per week (roughly 3 million per month). Any regression could directly impact business metrics, so we needed a careful, incremental approach.

The Four Metrics That Matter

We identified the four metrics that together account for 80% of the Lighthouse v8 scoring weight:

| Metric | Weight | Target | Purpose |
| --- | --- | --- | --- |
| First Contentful Paint (FCP) | 10% | ≤ 1.8 s | Initial content visibility |
| Largest Contentful Paint (LCP) | 25% | ≤ 2.5 s | Main content load completion |
| Total Blocking Time (TBT) | 30% | ≤ 200 ms | Main thread responsiveness |
| Cumulative Layout Shift (CLS) | 15% | ≤ 0.1 | Visual stability |

We set a target Lighthouse score of 90+ across all pages and prioritized routes based on two factors: high traffic volume and low performance scores.
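As a rough sketch of how these weights roll up into a single score (assuming each metric has already been mapped to a 0–100 sub-score, which Lighthouse does internally with log-normal scoring curves; the remaining 20% comes from Speed Index and Time to Interactive):

```javascript
// Lighthouse v8 category weights. The four metrics we targeted carry 80%;
// Speed Index and Time to Interactive make up the remaining 20%.
const WEIGHTS = { fcp: 0.10, lcp: 0.25, tbt: 0.30, cls: 0.15, si: 0.10, tti: 0.10 };

// subScores: each metric already converted to a 0-100 sub-score.
// (Lighthouse derives these from log-normal curves; here they are inputs.)
function lighthouseScore(subScores) {
  let total = 0;
  for (const [metric, weight] of Object.entries(WEIGHTS)) {
    total += weight * subScores[metric];
  }
  return Math.round(total);
}

// Example: strong paint metrics, but a blocked main thread (low TBT
// sub-score) drags the overall score well below the 90+ target.
const score = lighthouseScore({ fcp: 95, lcp: 90, tbt: 40, cls: 100, si: 85, tti: 80 });
```

This is also why TBT dominated our prioritization: at 30% weight, a poor TBT sub-score caps the overall score harder than any other single metric.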

UX Benchmarks

Before diving into optimizations, we established clear benchmarks for each metric: a Lighthouse score of 90+ per page, and the per-metric targets listed in the table above.

Optimizing First Contentful Paint (FCP)

FCP measures when the first piece of content becomes visible. The primary bottleneck here is render-blocking resources: stylesheets and synchronous scripts that must download and execute before the browser can paint anything.
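The standard fix is to inline critical CSS and load everything else without blocking the first paint. A minimal sketch (the file paths and the `media`-swap trick are illustrative, not our exact setup):

```html
<head>
  <!-- Critical above-the-fold rules inlined: first paint needs no extra request -->
  <style>/* critical CSS here */</style>

  <!-- Full stylesheet loads without blocking render: it applies only to
       "print" media until it finishes, then onload flips it to all media -->
  <link rel="stylesheet" href="/styles.css" media="print" onload="this.media='all'">
  <noscript><link rel="stylesheet" href="/styles.css"></noscript>

  <!-- Deferred scripts download in parallel but execute after parsing -->
  <script src="/app.js" defer></script>
</head>
```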

Optimizing Largest Contentful Paint (LCP)

LCP measures when the largest visible element finishes loading, typically a hero image or heading block. We focused on reducing resource load times for that element.
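For an image-based LCP element, the biggest wins usually come from letting the browser discover and prioritize it early. A sketch (paths are placeholders):

```html
<!-- Let the preload scanner fetch the hero image before the parser reaches it -->
<link rel="preload" as="image" href="/hero.webp" fetchpriority="high">

<!-- The LCP element itself: high fetch priority, and never lazy-loaded -->
<img src="/hero.webp" fetchpriority="high" width="1200" height="600"
     alt="Landing page hero banner">
```

Explicit `width`/`height` here also helps CLS, since the browser can reserve the slot before the image arrives.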

Optimizing Total Blocking Time (TBT)

TBT measures how long the main thread is blocked and unable to respond to user input. Think of it like a grocery store — if the cashier (main thread) is busy when a customer arrives (user interaction), that customer leaves (user abandons the page).
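The main remedy is breaking long tasks into small chunks and yielding the main thread between them, so the cashier is free between customers. A minimal sketch (helper names are illustrative; in newer browsers, `scheduler.yield()` is the more direct API):

```javascript
// Split a long list into fixed-size chunks (pure helper).
function chunkArray(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Process one chunk per macrotask, yielding the main thread between
// chunks so input events can be handled instead of piling up.
async function processWithoutBlocking(items, handle, size = 50) {
  for (const chunk of chunkArray(items, size)) {
    chunk.forEach(handle);
    // Yield back to the event loop before the next chunk.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}
```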

Optimizing Cumulative Layout Shift (CLS)

CLS measures visual stability — unexpected layout shifts that frustrate users.
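The recurring fix is reserving space for anything that loads late, so content below it never jumps. A sketch (dimensions and class names are illustrative):

```html
<!-- Explicit dimensions let the browser reserve the slot before load -->
<img src="/banner.webp" width="800" height="400" alt="Promotional banner">

<!-- Same idea for late-loading embeds and ads: hold the slot open with CSS -->
<div class="ad-slot" style="min-height: 250px;"></div>
```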

Challenges with Monoliths and SPAs

Monolithic applications (88% of our audited pages) posed the biggest risk. They serve millions of concurrent users, so any change had to be incremental and carefully validated.

Single-page applications required a different focus — optimizing the first load by reorganizing code to prevent unnecessary downloads, parallelizing JavaScript chunks, and prioritizing critical content for LCP.

Monitoring and Alerting

Optimization without monitoring is incomplete. We set up two layers of observability:

Elastic Real User Monitoring (RUM)

We deployed a JavaScript-based RUM agent across all applications to capture real user data.
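Initialization with Elastic's `@elastic/apm-rum` browser agent looks roughly like this (the service name, server URL, and environment values are placeholders, not our actual endpoints):

```javascript
import { init as initApm } from '@elastic/apm-rum';

// Captures page-load and route-change transactions for real users.
// serviceName/serverUrl below are illustrative placeholders.
const apm = initApm({
  serviceName: 'landing-app',
  serverUrl: 'https://apm.example.com',
  environment: 'production',
});
```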

This gave us dashboards showing real FCP, LCP, TBT, and CLS distributions — not just synthetic Lighthouse runs, but actual user experience data.

Grafana Alerting

To catch regressions post-deployment, we configured Grafana with latency-based thresholds across all applications. Alerts were piped to Slack via webhooks, ensuring the team was notified immediately when performance degraded.
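The payload side of that pipeline is simple: Slack incoming webhooks accept a JSON body with a `text` field. A sketch of the message builder (the metric, values, and format are illustrative):

```javascript
// Build the JSON body for a Slack incoming webhook when a latency
// threshold is crossed (app name, metric, and numbers are illustrative).
function buildSlackAlert(app, metric, valueMs, thresholdMs) {
  return JSON.stringify({
    text: `:rotating_light: ${app}: ${metric} is ${valueMs}ms ` +
          `(threshold ${thresholdMs}ms)`,
  });
}

// POSTing this body to the webhook URL is all Slack needs, e.g.:
//   fetch(SLACK_WEBHOOK_URL, {
//     method: 'POST',
//     headers: { 'Content-Type': 'application/json' },
//     body: buildSlackAlert('landing-app', 'LCP p75', 3400, 2500),
//   });
```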

Results

Over six months, we applied these optimizations systematically across ~40 pages.

Key Takeaways

  1. Prioritize by impact — start with high-traffic, low-scoring routes
  2. Focus on the right metrics — FCP, LCP, TBT, and CLS cover 80% of the score
  3. Monitor with real user data — synthetic audits alone don’t tell the full story
  4. Alert on regressions — performance gains are meaningless if they silently erode after deployment
  5. Move incrementally — especially on monolithic apps serving production traffic
