StatusCake

Core Web Vitals Explained: What They Are, Why They Matter, and How to Improve Them


Last updated: January 28, 2026

Google’s Core Web Vitals (CWV) are a set of user‑experience metrics designed to quantify how real users experience the web. They are not abstract SEO scores; they are signals derived from how pages load, respond, and remain visually stable for users.

This guide is written as a practical, engineering‑led reference. Rather than focusing only on definitions or scores, it explains:

  • what Core Web Vitals measure;
  • why they matter for users and search visibility;
  • how to interpret CWV results; and
  • where performance improvements actually come from.

Throughout, the emphasis is on diagnosis and action, not just measurement.

What are Core Web Vitals?

Core Web Vitals are a subset of Google’s Page Experience signals. They focus on three aspects of user experience:

  • Loading performance. How quickly meaningful content appears
  • Responsiveness. How quickly the page responds to user interactions
  • Visual stability. Whether the layout shifts unexpectedly

Google evaluates these using aggregated real‑user data over time. Individual page loads may vary, but Core Web Vitals reflect the overall experience users have in the real world.

The three Core Web Vitals metrics

Largest Contentful Paint (LCP)

What it measures: Loading experience

LCP measures how long it takes for the largest visible element (such as a hero image or main heading) to render within the viewport.

  • Good: ≤ 2.5 seconds
  • Needs improvement: 2.5–4.0 seconds
  • Poor: > 4.0 seconds

A slow LCP usually indicates server response delays, large media assets, or render‑blocking resources.
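As a sketch of one common markup‑level LCP fix, the hero image can be preloaded and given a high fetch priority so the browser requests it early rather than discovering it late in the parse (the file path and dimensions below are illustrative):

```html
<!-- Hint the browser to fetch the LCP hero image early and at high
     priority. Path and dimensions are illustrative. -->
<link rel="preload" as="image" href="/images/hero.webp" fetchpriority="high">

<!-- Explicit width/height also help CLS by reserving layout space. -->
<img src="/images/hero.webp" fetchpriority="high"
     width="1200" height="600" alt="Product hero">
```

Combined with server‑side caching and a CDN to reduce time to first byte, this addresses the most frequent LCP bottlenecks.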

Interaction to Next Paint (INP)

What it measures: Responsiveness

INP measures how quickly a page responds to user interactions, such as clicks or taps. It replaced First Input Delay (FID) as Google’s primary responsiveness metric.

  • Good: ≤ 200 ms
  • Needs improvement: 200–500 ms
  • Poor: > 500 ms

Poor INP scores are commonly caused by heavy JavaScript execution or long tasks blocking the main thread.

Cumulative Layout Shift (CLS)

What it measures: Visual stability

CLS measures how much the layout shifts unexpectedly during page load or interaction.

  • Good: ≤ 0.1
  • Needs improvement: 0.1–0.25
  • Poor: > 0.25

High CLS often results from images, fonts, or ads loading without reserved space.
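A minimal sketch of reserving space so late‑loading resources do not shift surrounding content (the dimensions, ad‑slot height, and font name are illustrative):

```html
<!-- Explicit dimensions let the browser reserve layout space before
     the image downloads. -->
<img src="/images/banner.jpg" width="800" height="200" alt="Banner">

<!-- Reserve a fixed slot for ads or embeds injected after render. -->
<div style="min-height: 250px"><!-- ad renders here --></div>

<style>
  @font-face {
    font-family: "BodyFont";
    src: url("/fonts/body.woff2") format("woff2");
    /* Show fallback text immediately; trades a brief font swap for
       avoiding invisible text while the font loads. */
    font-display: swap;
  }
</style>
```

Choosing a fallback font with similar metrics to the web font further reduces the shift when the swap occurs.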

Why Core Web Vitals matter

User experience

Pages that load quickly, respond immediately, and remain visually stable are easier and more pleasant to use. Poor Core Web Vitals often correlate with frustration, misclicks, and abandonment.

Search visibility

Core Web Vitals are part of Google’s Page Experience signals. While they are not the sole ranking factor, consistently poor CWV performance can limit a page’s ability to compete in search results.

Business outcomes

Performance issues frequently impact conversion rates, engagement, and retention. Improving Core Web Vitals often delivers benefits beyond SEO alone.

Core Web Vitals scores vs performance diagnostics

Core Web Vitals scores indicate how a page performed for users. They do not explain why it performed that way.

CWV scores are outcome metrics. They are useful for benchmarking and prioritisation, but meaningful improvements require diagnostic insight into what happens during page load and execution. This is where page speed monitoring and request waterfalls become essential.

Using page speed monitoring to improve Core Web Vitals

Page speed monitoring provides repeatable tests that show how a page loads under controlled conditions. While page speed metrics are not identical to Core Web Vitals, they strongly correlate with them and are one of the most practical ways to identify performance bottlenecks.

For most teams, page speed monitoring is the fastest way to:

  • detect regressions;
  • compare performance before and after changes;
  • identify slow or unstable pages; and
  • prioritise optimisation work.

How waterfall analysis reveals Core Web Vitals bottlenecks

A page speed waterfall visualises every request made during page load and how long each takes. This turns performance issues into concrete, actionable problems.

Diagnosing LCP issues

A waterfall helps identify:

  • slow server response times;
  • large images or media files; or
  • render‑blocking CSS or fonts.

If the largest visual element appears late in the waterfall, it often explains a poor LCP score.

Diagnosing INP issues

While INP is influenced by real user interactions, waterfalls often reveal contributing factors such as:

  • large JavaScript bundles;
  • long‑running scripts; and
  • third‑party resources delaying interactivity.

These patterns frequently correlate with responsiveness problems.

Diagnosing CLS issues

Waterfalls highlight resources that load late and cause layout shifts, including:

  • fonts without proper loading strategies;
  • dynamically injected content; and
  • ads or embeds without reserved space.

Mapping Core Web Vitals to fixes

Once you understand where time is being spent during page load, the next step is deciding what to change. The table below maps each Core Web Vitals metric to common symptoms, likely causes, and the areas teams typically optimise.

LCP
  • What you’ll see: Main content appears late
  • Common causes: Slow TTFB, large images, blocking CSS
  • Where to look in the waterfall: Long initial request, late-loading hero asset
  • Typical fixes: Image optimisation, caching, CSS prioritisation

INP
  • What you’ll see: Page feels sluggish to interact with
  • Common causes: Heavy JS, long tasks, third-party scripts
  • Where to look in the waterfall: Large JS bundles, long execution gaps
  • Typical fixes: Code splitting, deferring scripts, reducing JS

CLS
  • What you’ll see: Page jumps during load
  • Common causes: Late fonts, ads, injected content
  • Where to look in the waterfall: Resources loading after render
  • Typical fixes: Reserve space, fix font loading, stabilise embeds

A practical workflow for improving Core Web Vitals

  1. Establish a baseline using page speed monitoring
  2. Identify slow or unstable pages
  3. Use waterfall analysis to locate bottlenecks
  4. Make targeted changes (images, scripts, fonts, caching)
  5. Re‑test to validate improvements
  6. Monitor trends over time rather than individual scores

This approach focuses on continuous improvement rather than chasing isolated metrics.

Common causes of poor Core Web Vitals

  • Slow server response times
  • Large, unoptimised images
  • Excessive or blocking JavaScript
  • Third‑party scripts
  • Layout‑affecting content loaded late

Many of these issues are visible immediately in a request waterfall.

Frequently asked questions about Core Web Vitals

Are Core Web Vitals the same for every page?

No. Core Web Vitals are measured per URL. Different pages can have very different scores depending on content and complexity.

Do Core Web Vitals matter more on mobile than desktop?

Yes. Google primarily evaluates Core Web Vitals using mobile user data, reflecting real‑world usage patterns.

Can Core Web Vitals fluctuate over time?

Yes. Scores can change due to deployments, traffic patterns, infrastructure changes, or user behaviour.

Is improving Core Web Vitals a one‑time task?

No. Performance is an ongoing concern. Regular monitoring helps prevent regressions and maintain a good user experience.

How monitoring tools support Core Web Vitals improvements

Page speed monitoring tools, such as StatusCake, provide visibility into load behaviour and performance trends. Waterfall views make bottlenecks explicit, helping teams understand where changes will have the greatest impact on user experience.

The value lies not in the score itself, but in the ability to diagnose and improve performance over time.
