The do’s and don’ts of front-end development that adhere to Core Web Vitals

Core Web Vitals are a set of metrics that allow you to determine how fast, visually stable and responsive your site is. The front end bears the bulk of the responsibility for making sure your website scores highly in each of these metrics, so in this blog post I will outline what we should be doing (as well as what we should be avoiding) to achieve good scores.

What are Core Web Vitals?

In order to meet a good standard for each of these metrics, we first need to understand what they are. There are three core metrics, each of which I will outline below.

LCP

The first metric is LCP, which stands for Largest Contentful Paint. This measures the time it takes for the largest piece of content on your page (e.g. a banner or an image) to fully load and become visible.

FID

The second metric tracks the responsiveness of your site and is known as FID (First Input Delay). It measures the delay between a user's first interaction with your site (e.g. clicking a button or typing into an input field) and the browser being able to start responding to it.

CLS

The third and final metric measured under the Core Web Vitals is CLS (Cumulative Layout Shift). This tracks how much the elements on the page move around unexpectedly as content loads in, most noticeably during the page's initial render.

How do we test for them?

So how do we track these metrics on our site? Google's PageSpeed Insights (pagespeed.web.dev) is a brilliant tool for this. Just type in the URL you want to measure and it will give you an in-depth report (for both the mobile and desktop versions of the page) on how that page scores in each of the metrics.

You will receive a label of either good (green), needs improvement (amber) or poor (red) for each metric. These labels are important: not only does a 'good' label across the board mean the users of your site will have a much slicker experience, but you may also receive a potential SEO ranking boost on Google.
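Alongside lab tools like PageSpeed Insights, you can also measure these metrics from real user sessions in the browser. Below is a minimal sketch using Google's open-source web-vitals npm package (assuming its v3 API, where onFID is still available); the report function is a placeholder for whatever analytics endpoint you actually use.

```ts
// Minimal field measurement sketch using the web-vitals package (v3 API assumed).
import { onCLS, onFID, onLCP } from 'web-vitals';

function report(metric: { name: string; value: number }) {
  // In a real app you would send this to your analytics endpoint instead.
  console.log(`${metric.name}: ${metric.value}`);
}

// Each callback fires once the metric has been measured for the current page.
onLCP(report);
onFID(report);
onCLS(report);
```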

What to Avoid 👎

So now let's get into the details of what to avoid in order to ensure we are writing optimised front-end code that maximises our chances of scoring highly in these metrics.

Slow, Unoptimised Images

The first thing we should try to avoid is loading weighty, unoptimised images (or videos, for that matter) onto the page. On slower connections they can take a while to load, and if they take up a big chunk of the page this pushes up the LCP time and hurts that score. They can also cause a layout shift if other elements on the page load in faster than them. We can get around this by making sure that, if we absolutely need to use high-resolution assets, they are as compressed and optimised as they possibly can be.
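As a simple illustration (the file names and dimensions here are hypothetical), giving images explicit dimensions and a compressed modern format already goes a long way; below-the-fold images can additionally be lazy-loaded, though the main hero image should not be, since it is likely your LCP element.

```tsx
import React from 'react';

export function Landing() {
  return (
    <main>
      {/* Hero image: compressed format, explicit width/height so the browser
          reserves its space up front (prevents CLS). Not lazy-loaded, since
          it is likely the LCP element. */}
      <img src="/hero.webp" width={1200} height={600} alt="Product hero" />

      {/* Below-the-fold image: safe to lazy-load so it doesn't compete with
          the hero for bandwidth during the initial render. */}
      <img src="/team.webp" width={800} height={450} loading="lazy" alt="Our team" />
    </main>
  );
}
```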

NextJS introduced an image component in version 10 that uses a built-in image optimisation API. This optimises your images and then serves them directly from the NextJS web server. You can also provide it with width and height props to prevent CLS while the image is loading in.
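A minimal sketch of that component in use (the image path and dimensions are just placeholders):

```tsx
import Image from 'next/image';

export function Banner() {
  return (
    // next/image optimises the asset and serves it via NextJS; the width and
    // height props let it reserve the correct space while loading, avoiding a
    // layout shift. `priority` disables lazy loading for above-the-fold images.
    <Image src="/banner.jpg" alt="Homepage banner" width={1200} height={400} priority />
  );
}
```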

Dynamically Injecting Content

On the topic of layout shift, we should also try to avoid dynamically injecting content into a webpage without reserving room for it. Injected content loads in at a later point and can reposition elements that have already rendered, shifting the layout. We can avoid this by reserving space on the page (e.g. with a container element) that has the exact height and width of the injected content, ensuring no repositioning of the surrounding elements when the content finally loads in.
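As a rough sketch (the slot dimensions and the /api/ad-slot endpoint are hypothetical), a container fixed to the injected content's exact size keeps the surrounding layout stable whether or not the content has arrived yet:

```tsx
import React, { useEffect, useState } from 'react';

// The injected content (e.g. an ad or third-party widget) is assumed to always
// render at 300x250, so we reserve exactly that much space.
const SLOT_WIDTH = 300;
const SLOT_HEIGHT = 250;

export function AdSlot() {
  const [html, setHtml] = useState<string | null>(null);

  useEffect(() => {
    // Hypothetical endpoint returning the widget markup.
    fetch('/api/ad-slot')
      .then((res) => res.text())
      .then(setHtml);
  }, []);

  return (
    // The wrapper keeps its size before and after the content arrives, so
    // nothing around it shifts when the widget loads in.
    <div style={{ width: SLOT_WIDTH, height: SLOT_HEIGHT }}>
      {html && <div dangerouslySetInnerHTML={{ __html: html }} />}
    </div>
  );
}
```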

What we should be doing 👍

Having covered what to avoid, let's now discuss what we can actively do to score highly in these Core Web Vitals metrics.

Server Side Rendering 

Our best bet is to embrace Server Side Rendering. If you are using a tool like React to create great client-side content, NextJS can be integrated into your project so that your content and components are rendered on the server as well as on the client. This helps our vitals: there will be minimal CLS between the server render and the client-side hydration, and components will appear sooner for a quicker LCP.
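Here's a minimal sketch of a server-rendered NextJS page (Pages Router, with a hypothetical API as the data source), where the HTML arrives already populated rather than being filled in by a client-side fetch after the fact:

```tsx
import type { GetServerSideProps } from 'next';

type Props = { headline: string };

// Runs on the server for every request, so the page is delivered with its
// content already in the HTML rather than loading in afterwards.
export const getServerSideProps: GetServerSideProps<Props> = async () => {
  const res = await fetch('https://api.example.com/headline'); // hypothetical API
  const { headline } = await res.json();
  return { props: { headline } };
};

export default function Home({ headline }: Props) {
  return <h1>{headline}</h1>;
}
```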

Caching & Bundling

Second of all, ensuring your assets (e.g. fonts or images) are cached correctly between visits massively reduces the time it takes for them to load onto the page. Similarly, we should ensure all of our code is compressed and bundled as much as it possibly can be. Luckily, most modern frameworks come with this bundling built in, so when the project is built for production the hard work is already done for us.
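How you set cache headers depends on where your assets are served from. As one hedged example, if you are serving static files yourself through an Express server (the public directory is just an assumption about your project layout), a long-lived, immutable Cache-Control header looks like this:

```ts
import express from 'express';

const app = express();

// Serve static assets (fonts, images, bundled JS/CSS) with a long-lived,
// immutable cache header so returning visitors don't re-download them.
app.use(
  express.static('public', {
    maxAge: '1y',
    immutable: true,
  })
);

app.listen(3000);
```

This pattern works best when asset file names are fingerprinted by your bundler, so a new build produces new URLs and never serves stale content.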

DNS Prefetching

One last thing we can do to ensure our code is as optimised as possible (and therefore scores higher in the metrics) is to make the most of DNS prefetching. This is a form of resource hinting that resolves the domain names of third-party origins before the browser actually needs them, for example before a link is clicked, which shaves the DNS lookup time off those requests and helps the page respond more quickly.
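In a NextJS/React app these hints can live in the document head; the third-party origins below are hypothetical examples of the kind you might prefetch.

```tsx
import Head from 'next/head';

export function ResourceHints() {
  return (
    <Head>
      {/* Resolve DNS for third-party origins early, so the lookup is already
          done by the time a resource from them is requested. */}
      <link rel="dns-prefetch" href="https://fonts.gstatic.com" />
      <link rel="dns-prefetch" href="https://analytics.example.com" />
      {/* preconnect goes further and also opens the TCP/TLS connection. */}
      <link rel="preconnect" href="https://fonts.gstatic.com" crossOrigin="anonymous" />
    </Head>
  );
}
```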

Conclusion 

To conclude, there are many great reasons to aim for the highest possible score in each of the Core Web Vitals. As long as we make sure our code is as optimised as it can be, and we are aware of the caveats around injecting content and heavy assets into the page, we should be on to a winner!
