Last updated: January 28, 2026
Google’s Core Web Vitals (CWV) are a set of user‑experience metrics designed to quantify how real users experience the web. They are not abstract SEO scores; they are signals derived from how pages load, respond, and remain visually stable for users.
This guide is written as a practical, engineering‑led reference. Rather than focusing only on definitions or scores, it explains:
- what each Core Web Vitals metric actually measures
- why the metrics matter for users, search visibility, and the business
- how to diagnose poor scores using page speed monitoring and request waterfalls
- where to focus optimisation effort for each metric
Throughout, the emphasis is on diagnosis and action, not just measurement.
Core Web Vitals are a subset of Google’s Page Experience signals. They focus on three aspects of user experience:
- Loading: Largest Contentful Paint (LCP)
- Responsiveness: Interaction to Next Paint (INP)
- Visual stability: Cumulative Layout Shift (CLS)
Google evaluates these using aggregated real‑user data over time. Individual page loads may vary, but Core Web Vitals reflect the overall experience users have in the real world.
Largest Contentful Paint (LCP)
What it measures: Loading experience
LCP measures how long it takes for the largest visible element (such as a hero image or main heading) to render within the viewport.
A slow LCP usually indicates server response delays, large media assets, or render‑blocking resources.
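Google’s published thresholds for LCP are 2.5 seconds for “good” and 4 seconds for “poor”. As a minimal sketch, a measured LCP value can be classified against those thresholds like this (the `rateLcp` helper is illustrative, not an official API):

```javascript
// Classify an LCP value (in milliseconds) against Google's published
// thresholds: good <= 2.5s, needs improvement <= 4s, poor above that.
// Illustrative helper only, not part of any browser or Google API.
function rateLcp(lcpMs) {
  if (lcpMs <= 2500) return "good";
  if (lcpMs <= 4000) return "needs-improvement";
  return "poor";
}

console.log(rateLcp(1900)); // "good"
console.log(rateLcp(5200)); // "poor"
```

The same banded classification (good / needs improvement / poor) is how Google reports field data for all three metrics.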
Interaction to Next Paint (INP)
What it measures: Responsiveness
INP measures how quickly a page responds to user interactions, such as clicks or taps. It replaced First Input Delay (FID) as Google’s primary responsiveness metric.
Poor INP scores are commonly caused by heavy JavaScript execution or long tasks blocking the main thread.
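Per web.dev’s description of the metric, INP is roughly the worst interaction latency observed on the page, with one highest outlier ignored per 50 interactions. A minimal sketch of that selection, assuming a list of per‑interaction durations in milliseconds (browsers compute this internally; the helper here is illustrative):

```javascript
// Approximate INP from a list of interaction latencies (ms).
// INP is roughly the worst interaction on the page; for pages with
// many interactions, one highest outlier is ignored per 50.
// Illustrative sketch only; browsers compute the real value.
function approximateInp(durations) {
  if (durations.length === 0) return null;
  const sorted = [...durations].sort((a, b) => b - a); // worst first
  const outliersIgnored = Math.floor(durations.length / 50);
  return sorted[Math.min(outliersIgnored, sorted.length - 1)];
}

console.log(approximateInp([40, 120, 900, 80])); // 900
```

Because a single slow interaction dominates the score, one long main‑thread task can be enough to push a page out of the “good” band (200 ms).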
Cumulative Layout Shift (CLS)
What it measures: Visual stability
CLS measures how much the layout shifts unexpectedly during page load or interaction.
High CLS often results from images, fonts, or ads loading without reserved space.
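CLS is computed over “session windows”: bursts of layout shifts less than one second apart, capped at five seconds, with the page’s CLS being the largest window sum. A minimal sketch of that grouping, using hypothetical `{ value, startTime }` shift entries:

```javascript
// CLS sums layout-shift scores within a "session window": shifts
// less than 1s apart, capped at 5s total; the page's CLS is the
// largest such sum. Entries here are hypothetical, sorted by time.
function clsFromShifts(shifts) {
  let worst = 0;
  let windowSum = 0;
  let windowStart = 0;
  let prevTime = -Infinity;
  for (const s of shifts) {
    const newWindow =
      s.startTime - prevTime > 1000 || s.startTime - windowStart > 5000;
    if (newWindow) {
      windowSum = 0;
      windowStart = s.startTime;
    }
    windowSum += s.value;
    prevTime = s.startTime;
    worst = Math.max(worst, windowSum);
  }
  return worst;
}

// Two shifts close together form one window; a later isolated shift
// starts a new window, and the larger of the two sums wins:
console.log(
  clsFromShifts([
    { value: 0.05, startTime: 100 },
    { value: 0.1, startTime: 300 },
    { value: 0.2, startTime: 9000 },
  ])
); // 0.2
```

This windowing is why a late‑loading ad that shifts content once can outweigh several tiny shifts during the initial load.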
Pages that load quickly, respond immediately, and remain visually stable are easier and more pleasant to use. Poor Core Web Vitals often correlate with frustration, misclicks, and abandonment.
Core Web Vitals are part of Google’s Page Experience signals. While they are not the sole ranking factor, consistently poor CWV performance can limit a page’s ability to compete in search results.
Performance issues frequently impact conversion rates, engagement, and retention. Improving Core Web Vitals often delivers benefits beyond SEO alone.
Core Web Vitals scores indicate how a page performed for users. They do not explain why it performed that way.
CWV scores are outcome metrics. They are useful for benchmarking and prioritisation, but meaningful improvements require diagnostic insight into what happens during page load and execution. This is where page speed monitoring and request waterfalls become essential.
Page speed monitoring provides repeatable tests that show how a page loads under controlled conditions. While page speed metrics are not identical to Core Web Vitals, they strongly correlate with them and are one of the most practical ways to identify performance bottlenecks.
For most teams, page speed monitoring is the fastest way to:
- identify performance bottlenecks before users report them
- benchmark key pages over time under consistent conditions
- catch regressions introduced by deployments or infrastructure changes
A page speed waterfall visualises every request made during page load and how long each takes. This turns performance issues into concrete, actionable problems.
A waterfall helps identify:
- slow server responses (long time to first byte)
- render‑blocking CSS and JavaScript
- large or late‑loading media assets
- third‑party scripts competing for bandwidth and the main thread
If the largest visual element appears late in the waterfall, it often explains a poor LCP score.
While INP is influenced by real user interactions, waterfalls often reveal contributing factors such as:
- large JavaScript bundles downloaded early in the load
- long script execution gaps blocking the main thread
- third‑party scripts running during or after page load
These patterns frequently correlate with responsiveness problems.
Waterfalls highlight resources that load late and cause layout shifts, including:
- web fonts that arrive after text has rendered
- ads and embeds injected after the initial render
- images loading without reserved dimensions
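The same check can be sketched programmatically: given simplified resource entries (hypothetical data with `{ name, startTime }` fields, loosely modelled on resource timing), flag resources that begin loading well after the initial render:

```javascript
// Flag resources that start loading late in the waterfall. Entries
// are simplified { name, startTime } objects (hypothetical data);
// real monitoring tools expose similar fields per request.
function lateResources(entries, thresholdMs = 2000) {
  return entries.filter((e) => e.startTime > thresholdMs).map((e) => e.name);
}

const entries = [
  { name: "/styles.css", startTime: 120 },
  { name: "/fonts/brand.woff2", startTime: 2600 },
  { name: "/ads/slot.js", startTime: 3900 },
];
console.log(lateResources(entries)); // ["/fonts/brand.woff2", "/ads/slot.js"]
```

Late fonts and ad scripts like these are exactly the resources most likely to shift content after the user has started reading.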
Once you understand where time is being spent during page load, the next step is deciding what to change. The table below maps each Core Web Vitals metric to common symptoms, likely causes, and the areas teams typically optimise.
| CWV metric | What you’ll see | Common causes | Where to look in the waterfall | Typical fixes |
|---|---|---|---|---|
| LCP | Main content appears late | Slow TTFB, large images, blocking CSS | Long initial request, late-loading hero asset | Image optimisation, caching, CSS prioritisation |
| INP | Page feels sluggish to interact | Heavy JS, long tasks, third-party scripts | Large JS bundles, long execution gaps | Code splitting, deferring scripts, reducing JS |
| CLS | Page jumps during load | Late fonts, ads, injected content | Resources loading after render | Reserve space, fix font loading, stabilise embeds |
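One way to turn this mapping into a prioritisation pass is to check each page’s field data against Google’s “good” thresholds (LCP ≤ 2.5 s, INP ≤ 200 ms, CLS ≤ 0.1) and list which metrics each page fails. The page data below is hypothetical:

```javascript
// Google's published "good" thresholds per metric (LCP and INP in
// milliseconds, CLS unitless). A page exceeding a threshold fails
// that metric. The page data below is invented for illustration.
const GOOD = { lcp: 2500, inp: 200, cls: 0.1 };

function failingMetrics(page) {
  return Object.keys(GOOD).filter((metric) => page[metric] > GOOD[metric]);
}

const pages = [
  { url: "/", lcp: 3100, inp: 150, cls: 0.02 },
  { url: "/pricing", lcp: 2100, inp: 420, cls: 0.18 },
];
for (const page of pages) {
  console.log(page.url, failingMetrics(page));
}
// "/" fails LCP; "/pricing" fails INP and CLS
```

A pass like this makes it obvious which row of the table applies to each page, so fixes can be scoped per metric rather than applied blindly site‑wide.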
This approach focuses on continuous improvement rather than chasing isolated metrics.
Many of these issues are visible immediately in a request waterfall.
Are Core Web Vitals the same for every page on a site?
No. Core Web Vitals are measured per URL. Different pages can have very different scores depending on content and complexity.
Does Google evaluate Core Web Vitals on mobile?
Yes. Google primarily evaluates Core Web Vitals using mobile user data, reflecting real‑world usage patterns.
Can Core Web Vitals scores change over time?
Yes. Scores can change due to deployments, traffic patterns, infrastructure changes, or user behaviour.
Is improving Core Web Vitals a one‑off project?
No. Performance is an ongoing concern. Regular monitoring helps prevent regressions and maintain a good user experience.
Page speed monitoring tools, such as StatusCake, provide visibility into load behaviour and performance trends. Waterfall views make bottlenecks explicit, helping teams understand where changes will have the greatest impact on user experience.
The value lies not in the score itself, but in the ability to diagnose and improve performance over time.