
Last updated: January 28, 2026
Google’s Core Web Vitals (CWV) are a set of user‑experience metrics designed to quantify how real users experience the web. They are not abstract SEO scores; they are signals derived from how pages load, respond, and remain visually stable for users.
This guide is written as a practical, engineering‑led reference. Rather than focusing only on definitions or scores, it explains what each metric measures, why it matters, and how to diagnose and fix the underlying causes.
Throughout, the emphasis is on diagnosis and action, not just measurement.
Core Web Vitals are a subset of Google’s Page Experience signals. They focus on three aspects of user experience: loading, responsiveness, and visual stability.
Google evaluates these using aggregated real‑user data over time. Individual page loads may vary, but Core Web Vitals reflect the overall experience users have in the real world.
Largest Contentful Paint (LCP)
What it measures: Loading experience
LCP measures how long it takes for the largest visible element (such as a hero image or main heading) to render within the viewport.
A slow LCP usually indicates server response delays, large media assets, or render‑blocking resources.
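Google assesses LCP at the 75th percentile of page loads, with 2.5 seconds as the "good" threshold and 4 seconds as the upper bound for "needs improvement". As a rough sketch, field samples could be bucketed like this (the function name and shape are illustrative, not part of any official API):

```javascript
// Bucket an LCP sample (milliseconds) against Google's published thresholds:
// good <= 2500 ms, needs improvement <= 4000 ms, poor above that.
function rateLcp(ms) {
  if (ms <= 2500) return "good";
  if (ms <= 4000) return "needs-improvement";
  return "poor";
}

console.log(rateLcp(1800)); // a 1.8 s hero render lands in the "good" bucket
```

Applied to real-user data, the bucket at the 75th percentile is what counts, not the average.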
Interaction to Next Paint (INP)
What it measures: Responsiveness
INP measures how quickly a page responds to user interactions, such as clicks or taps. It replaced First Input Delay (FID) as Google’s primary responsiveness metric.
Poor INP scores are commonly caused by heavy JavaScript execution or long tasks blocking the main thread.
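INP uses 200 ms as the "good" threshold and 500 ms as the upper bound for "needs improvement", again judged at the 75th percentile. A minimal sketch of the same bucketing for responsiveness samples (illustrative names):

```javascript
// Bucket an INP sample (milliseconds): good <= 200 ms,
// needs improvement <= 500 ms, poor above that. Interactions usually
// degrade when the main thread is blocked by long tasks (over 50 ms).
function rateInp(ms) {
  if (ms <= 200) return "good";
  if (ms <= 500) return "needs-improvement";
  return "poor";
}
```

A page can have a fast LCP and still rate "poor" here if a single heavy script blocks the main thread during interaction.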
Cumulative Layout Shift (CLS)
What it measures: Visual stability
CLS measures how much the layout shifts unexpectedly during page load or interaction.
High CLS often results from images, fonts, or ads loading without reserved space.
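CLS is defined as the largest burst of layout-shift scores, where a burst (session window) closes after a 1 second gap between shifts or once the window spans 5 seconds; 0.1 is the "good" threshold. A sketch of that aggregation, assuming shift entries shaped as { t, score } (timestamp in ms, score = impact fraction × distance fraction):

```javascript
// Compute CLS from individual layout-shift entries using the session-window
// rule: shifts group into a window while gaps stay under 1 s and the window
// spans under 5 s; CLS is the largest window sum.
function computeCls(shifts) {
  let cls = 0, sum = 0, windowStart = 0, prev = -Infinity;
  for (const { t, score } of shifts) {
    if (t - prev > 1000 || t - windowStart > 5000) {
      sum = 0;            // gap too long or window too wide: start a new window
      windowStart = t;
    }
    sum += score;
    prev = t;
    if (sum > cls) cls = sum;
  }
  return cls;
}
```

Two small shifts separated by more than a second land in different windows, so only the larger window counts towards the score.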
Pages that load quickly, respond immediately, and remain visually stable are easier and more pleasant to use. Poor Core Web Vitals often correlate with frustration, misclicks, and abandonment.
Core Web Vitals are part of Google’s Page Experience signals. While they are not the sole ranking factor, consistently poor CWV performance can limit a page’s ability to compete in search results.
Performance issues frequently impact conversion rates, engagement, and retention. Improving Core Web Vitals often delivers benefits beyond SEO alone.
Core Web Vitals scores indicate how a page performed for users. They do not explain why it performed that way.
CWV scores are outcome metrics. They are useful for benchmarking and prioritisation, but meaningful improvements require diagnostic insight into what happens during page load and execution. This is where page speed monitoring and request waterfalls become essential.
Page speed monitoring provides repeatable tests that show how a page loads under controlled conditions. While page speed metrics are not identical to Core Web Vitals, they strongly correlate with them and are one of the most practical ways to identify performance bottlenecks.
For most teams, page speed monitoring is the fastest way to identify performance bottlenecks, benchmark changes, and track trends over time.
A page speed waterfall visualises every request made during page load and how long each takes. This turns performance issues into concrete, actionable problems.
A waterfall helps pinpoint the issues behind each Core Web Vitals metric.
If the largest visual element appears late in the waterfall, it often explains a poor LCP score.
While INP is influenced by real user interactions, waterfalls often reveal contributing factors such as large JavaScript bundles, heavy third‑party scripts, and long gaps in main‑thread execution.
These patterns frequently correlate with responsiveness problems.
Waterfalls highlight resources that load late and cause layout shifts, including web fonts, ads, and dynamically injected content.
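One way to make this concrete: given waterfall data, list every request that finishes after your render budget. The entry shape { url, startMs, durationMs } here is an illustrative simplification, not a real HAR schema:

```javascript
// Sketch: flag requests in a waterfall that finish after a render budget,
// slowest-finishing first. budgetMs might be your LCP target (e.g. 2500).
function lateResources(entries, budgetMs) {
  return entries
    .filter((e) => e.startMs + e.durationMs > budgetMs)
    .sort((a, b) => (b.startMs + b.durationMs) - (a.startMs + a.durationMs))
    .map((e) => e.url);
}
```

Running this with an LCP budget of 2500 ms quickly surfaces hero images, fonts, or ad scripts that arrive too late to keep the page fast and stable.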
Once you understand where time is being spent during page load, the next step is deciding what to change. The table below maps each Core Web Vitals metric to common symptoms, likely causes, and the areas teams typically optimise.
| CWV metric | What you’ll see | Common causes | Where to look in the waterfall | Typical fixes |
|---|---|---|---|---|
| LCP | Main content appears late | Slow TTFB, large images, blocking CSS | Long initial request, late-loading hero asset | Image optimisation, caching, CSS prioritisation |
| INP | Page feels sluggish to interact | Heavy JS, long tasks, third-party scripts | Large JS bundles, long execution gaps | Code splitting, deferring scripts, reducing JS |
| CLS | Page jumps during load | Late fonts, ads, injected content | Resources loading after render | Reserve space, fix font loading, stabilise embeds |
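The mapping above can be sketched as a simple triage helper. The thresholds are Google’s published "good" limits; the fix lists mirror the table and are starting points rather than guaranteed remedies:

```javascript
// "Good" thresholds: LCP and INP in ms, CLS as a unitless score.
const GOOD = { lcp: 2500, inp: 200, cls: 0.1 };
const FIXES = {
  lcp: ["image optimisation", "caching", "CSS prioritisation"],
  inp: ["code splitting", "deferring scripts", "reducing JS"],
  cls: ["reserve space", "fix font loading", "stabilise embeds"],
};

// metrics: { lcp: ms, inp: ms, cls: score }; returns fixes for failing metrics.
function triage(metrics) {
  const out = {};
  for (const key of Object.keys(GOOD)) {
    if (metrics[key] > GOOD[key]) out[key] = FIXES[key];
  }
  return out;
}
```

For a page with field data of LCP 3.2 s, INP 150 ms, and CLS 0.3, this would flag LCP and CLS for investigation and leave INP alone.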
This approach focuses on continuous improvement rather than chasing isolated metrics.
Many of these issues are visible immediately in a request waterfall.
Are Core Web Vitals the same for every page on a site?
No. Core Web Vitals are measured per URL. Different pages can have very different scores depending on content and complexity.
Does Google assess Core Web Vitals on mobile?
Yes. Google primarily evaluates Core Web Vitals using mobile user data, reflecting real‑world usage patterns.
Can Core Web Vitals scores fluctuate over time?
Yes. Scores can change due to deployments, traffic patterns, infrastructure changes, or user behaviour.
Is improving Core Web Vitals a one‑off project?
No. Performance is an ongoing concern. Regular monitoring helps prevent regressions and maintain a good user experience.
Page speed monitoring tools, such as StatusCake, provide visibility into load behaviour and performance trends. Waterfall views make bottlenecks explicit, helping teams understand where changes will have the greatest impact on user experience.
The value lies not in the score itself, but in the ability to diagnose and improve performance over time.