Core Web Vitals Explained: What They Are, Why They Matter, and How to Improve Them

Last updated: January 28, 2026

Google’s Core Web Vitals (CWV) are a set of user‑experience metrics designed to quantify how real users experience the web. They are not abstract SEO scores; they are signals derived from how pages load, respond, and remain visually stable for users.

This guide is written as a practical, engineering‑led reference. Rather than focusing only on definitions or scores, it explains:

  • what Core Web Vitals measure;
  • why they matter for users and search visibility;
  • how to interpret CWV results; and
  • where performance improvements actually come from.

Throughout, the emphasis is on diagnosis and action, not just measurement.

What are Core Web Vitals?

Core Web Vitals are a subset of Google’s Page Experience signals. They focus on three aspects of user experience:

  • Loading performance. How quickly meaningful content appears
  • Responsiveness. How quickly the page responds to user interactions
  • Visual stability. Whether the layout shifts unexpectedly

Google evaluates these using aggregated real‑user data over time. Individual page loads may vary, but Core Web Vitals reflect the overall experience users have in the real world.

The three Core Web Vitals metrics

Largest Contentful Paint (LCP)

What it measures: Loading experience

LCP measures how long it takes for the largest visible element (such as a hero image or main heading) to render within the viewport.

  • Good: ≤ 2.5 seconds
  • Needs improvement: 2.5–4.0 seconds
  • Poor: > 4.0 seconds

A slow LCP usually indicates server response delays, large media assets, or render‑blocking resources.
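Under the hood, browsers report LCP as a series of candidate entries, and the largest candidate is the one that counts. The selection logic can be sketched over mocked entries (the `LcpEntry` shape and function name here are our own, not a browser API):

```typescript
// Simplified shape of a largest-contentful-paint performance entry.
interface LcpEntry {
  startTime: number; // render time of the candidate, in ms
  size: number;      // rendered area of the element, in px²
}

// The browser emits a new candidate whenever a larger element renders;
// the reported LCP is the render time of the largest candidate.
function finalLcp(candidates: LcpEntry[]): number | undefined {
  if (candidates.length === 0) return undefined;
  const largest = candidates.reduce((a, b) => (b.size >= a.size ? b : a));
  return largest.startTime;
}

// Example: a heading renders early, then a hero image becomes the LCP element.
const lcp = finalLcp([
  { startTime: 400, size: 12_000 },   // main heading
  { startTime: 2900, size: 310_000 }, // hero image
]);
console.log(lcp); // 2900 — in the "needs improvement" band above
```

This is why a late-loading hero image so often dominates the LCP score: the earlier, smaller candidates stop mattering once it renders.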

Interaction to Next Paint (INP)

What it measures: Responsiveness

INP measures how quickly a page responds to user interactions, such as clicks or taps. It replaced First Input Delay (FID) as Google’s primary responsiveness metric.

  • Good: ≤ 200 ms
  • Needs improvement: 200–500 ms
  • Poor: > 500 ms

Poor INP scores are commonly caused by heavy JavaScript execution or long tasks blocking the main thread.
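INP is derived from the latency of individual interactions over a page's lifetime; to a first approximation it is the worst interaction observed (Google's definition also discards roughly one high outlier per 50 interactions on busy pages, which this sketch ignores). The helper name and input shape below are our own:

```typescript
// Durations of each discrete interaction (click, tap, key press), in ms,
// measured from input to the next frame being painted.
function approximateInp(interactionDurations: number[]): number | undefined {
  if (interactionDurations.length === 0) return undefined;
  // Simplification: take the single worst interaction.
  return Math.max(...interactionDurations);
}

console.log(approximateInp([80, 120, 340, 95])); // 340 — "needs improvement"
```

The practical consequence: one long main-thread task at the wrong moment can define a page's INP, even if most interactions feel fast.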

Cumulative Layout Shift (CLS)

What it measures: Visual stability

CLS measures how much the layout shifts unexpectedly during page load or interaction.

  • Good: ≤ 0.1
  • Needs improvement: 0.1–0.25
  • Poor: > 0.25

High CLS often results from images, fonts, or ads loading without reserved space.
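All three metrics follow the same good / needs-improvement / poor pattern, so the thresholds listed above can be encoded in one small rating helper (a sketch; the names are our own, the thresholds are the ones stated in this article):

```typescript
type Rating = "good" | "needs improvement" | "poor";

// Upper bounds for "good" and "needs improvement", per metric.
const thresholds = {
  lcp: { good: 2500, poor: 4000 }, // ms
  inp: { good: 200, poor: 500 },   // ms
  cls: { good: 0.1, poor: 0.25 },  // unitless score
} as const;

function rate(metric: keyof typeof thresholds, value: number): Rating {
  const t = thresholds[metric];
  if (value <= t.good) return "good";
  if (value <= t.poor) return "needs improvement";
  return "poor";
}

console.log(rate("lcp", 1800)); // "good"
console.log(rate("inp", 350));  // "needs improvement"
console.log(rate("cls", 0.4));  // "poor"
```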

Why Core Web Vitals matter

User experience

Pages that load quickly, respond immediately, and remain visually stable are easier and more pleasant to use. Poor Core Web Vitals often correlate with frustration, misclicks, and abandonment.

Search visibility

Core Web Vitals are part of Google’s Page Experience signals. While they are not the sole ranking factor, consistently poor CWV performance can limit a page’s ability to compete in search results.

Business outcomes

Performance issues frequently impact conversion rates, engagement, and retention. Improving Core Web Vitals often delivers benefits beyond SEO alone.

Core Web Vitals scores vs performance diagnostics

Core Web Vitals scores indicate how a page performed for users. They do not explain why it performed that way.

CWV scores are outcome metrics. They are useful for benchmarking and prioritisation, but meaningful improvements require diagnostic insight into what happens during page load and execution. This is where page speed monitoring and request waterfalls become essential.

Using page speed monitoring to improve Core Web Vitals

Page speed monitoring provides repeatable tests that show how a page loads under controlled conditions. While page speed metrics are not identical to Core Web Vitals, they strongly correlate with them and are one of the most practical ways to identify performance bottlenecks.

For most teams, page speed monitoring is the fastest way to:

  • detect regressions;
  • compare performance before and after changes;
  • identify slow or unstable pages; and
  • prioritise optimisation work.
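When aggregating repeated test runs (or field samples), it helps to match how Google assesses Core Web Vitals: at the 75th percentile of page loads. A minimal p75 calculation using the nearest-rank method (the function name is our own):

```typescript
// 75th percentile via nearest-rank: sort ascending and take the value
// at position ceil(0.75 * n), 1-indexed.
function p75(samples: number[]): number | undefined {
  if (samples.length === 0) return undefined;
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil(0.75 * sorted.length);
  return sorted[rank - 1];
}

// Eight LCP samples (ms) from repeated page speed tests:
console.log(p75([2100, 1900, 2400, 3100, 2200, 2600, 2000, 2300])); // 2400
```

Using a percentile rather than an average stops one unusually fast (or slow) run from masking the experience most users actually get.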

How waterfall analysis reveals Core Web Vitals bottlenecks

A page speed waterfall visualises every request made during page load and how long each takes. This turns performance issues into concrete, actionable problems.

Diagnosing LCP issues

A waterfall helps identify:

  • slow server response times;
  • large images or media files; or
  • render‑blocking CSS or fonts.

If the largest visual element appears late in the waterfall, it often explains a poor LCP score.
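This triage can be partly automated: given resource-timing-style entries from a waterfall, flag any request that starts late, runs long, or is heavy enough to plausibly delay the largest element. A sketch over mocked entries (the `Resource` shape, the budget values, and the function name are all our own assumptions):

```typescript
interface Resource {
  name: string;         // URL of the request
  startTime: number;    // ms since navigation start
  duration: number;     // ms
  transferSize: number; // bytes over the wire
}

// Flag requests that start late, run long, or are large — the usual
// suspects behind a delayed LCP element.
function flagSuspects(
  resources: Resource[],
  budgets = { start: 1000, duration: 500, bytes: 200_000 }
): Resource[] {
  return resources.filter(
    (r) =>
      r.startTime > budgets.start ||
      r.duration > budgets.duration ||
      r.transferSize > budgets.bytes
  );
}

const suspects = flagSuspects([
  { name: "/styles.css", startTime: 120, duration: 90, transferSize: 18_000 },
  { name: "/hero.jpg", startTime: 1400, duration: 900, transferSize: 850_000 },
]);
console.log(suspects.map((r) => r.name)); // ["/hero.jpg"]
```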

Diagnosing INP issues

While INP is influenced by real user interactions, waterfalls often reveal contributing factors such as:

  • large JavaScript bundles;
  • long‑running scripts; and
  • third‑party resources delaying interactivity.

These patterns frequently correlate with responsiveness problems.

Diagnosing CLS issues

Waterfalls highlight resources that load late and cause layout shifts, including:

  • fonts without proper loading strategies;
  • dynamically injected content; and
  • ads or embeds without reserved space.

Mapping Core Web Vitals to fixes

Once you understand where time is being spent during page load, the next step is deciding what to change. The table below maps each Core Web Vitals metric to common symptoms, likely causes, and the areas teams typically optimise.

CWV metric | What you’ll see | Common causes | Where to look in the waterfall | Typical fixes
LCP | Main content appears late | Slow TTFB, large images, blocking CSS | Long initial request, late-loading hero asset | Image optimisation, caching, CSS prioritisation
INP | Page feels sluggish to interact | Heavy JS, long tasks, third-party scripts | Large JS bundles, long execution gaps | Code splitting, deferring scripts, reducing JS
CLS | Page jumps during load | Late fonts, ads, injected content | Resources loading after render | Reserve space, fix font loading, stabilise embeds

A practical workflow for improving Core Web Vitals

  1. Establish a baseline using page speed monitoring
  2. Identify slow or unstable pages
  3. Use waterfall analysis to locate bottlenecks
  4. Make targeted changes (images, scripts, fonts, caching)
  5. Re‑test to validate improvements
  6. Monitor trends over time rather than individual scores
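Steps 1, 5, and 6 above boil down to comparing current measurements against a baseline and only reacting to meaningful change. A minimal regression check (the tolerance and function name are our own):

```typescript
// Report a regression only when the current value exceeds the baseline by
// more than a relative tolerance (default 10%), ignoring run-to-run noise.
function hasRegressed(
  baseline: number,
  current: number,
  tolerance = 0.1
): boolean {
  return current > baseline * (1 + tolerance);
}

console.log(hasRegressed(2400, 2450)); // false — within normal variation
console.log(hasRegressed(2400, 2900)); // true — roughly 21% slower
```

A tolerance band like this is what lets a team alert on genuine regressions after a deployment without paging anyone for ordinary run-to-run variation.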

This approach focuses on continuous improvement rather than chasing isolated metrics.

Common causes of poor Core Web Vitals

  • Slow server response times
  • Large, unoptimised images
  • Excessive or blocking JavaScript
  • Third‑party scripts
  • Layout‑affecting content loaded late

Many of these issues are visible immediately in a request waterfall.

Frequently asked questions about Core Web Vitals

Are Core Web Vitals the same for every page?

No. Core Web Vitals are measured per URL. Different pages can have very different scores depending on content and complexity.

Do Core Web Vitals matter more on mobile than desktop?

Mostly, yes. Google measures Core Web Vitals separately for mobile and desktop, but with mobile‑first indexing and predominantly mobile traffic, mobile scores usually carry more weight in practice.

Can Core Web Vitals fluctuate over time?

Yes. Scores can change due to deployments, traffic patterns, infrastructure changes, or user behaviour.

Is improving Core Web Vitals a one‑time task?

No. Performance is an ongoing concern. Regular monitoring helps prevent regressions and maintain a good user experience.

How monitoring tools support Core Web Vitals improvements

Page speed monitoring tools, such as StatusCake, provide visibility into load behaviour and performance trends. Waterfall views make bottlenecks explicit, helping teams understand where changes will have the greatest impact on user experience.

The value lies not in the score itself, but in the ability to diagnose and improve performance over time.
