StatusCake

The do’s and don’ts of front-end development that adhere to Core Web Vitals


Core Web Vitals are a set of metrics that allow you to determine how fast, visually stable and responsive your site is. Front-end code bears the bulk of the responsibility for making sure your website scores highly in each of these metrics, so in this blog post I’ll outline what we should be doing (as well as what we should be avoiding) to achieve good scores.

What are Core Web Vitals?

To meet a good standard for each of these metrics, we first need to understand what they are. There are three core metrics, each of which I’ll outline below.

LCP

The first metric is LCP, which stands for Largest Contentful Paint. This measures the time it takes for the largest piece of content on your page (e.g. a banner or an image) to fully load and become visible. Google considers anything under 2.5 seconds to be good.

FID

The second metric tracks the responsiveness of your site and is known as FID (First Input Delay). It measures the delay between a user’s first interaction with your site (e.g. clicking a button or typing into an input field) and the browser being able to respond to it. A good FID is under 100 milliseconds.

CLS

The third and final metric measured under the Core Web Vitals is CLS (Cumulative Layout Shift). This tracks how much the elements on the page unexpectedly move around as content loads in. A good CLS score is 0.1 or less.

How do we test for them?

So how do we track these metrics on our site? PageSpeed Insights (pagespeed.web.dev) is a brilliant tool for this. Just type in the URL you want to measure and it will give you an in-depth report (for both the mobile and desktop versions of the page) on how the webpage scores in each of the metrics.

You will receive a label of either good (green), needs improvement (amber) or poor (red) for each metric. These labels are important: not only does a ‘good’ label mean that the users of your site will have a much slicker experience, but you may also receive an SEO ranking boost on Google.

What to Avoid 👎

So now let’s get into the details of what to avoid in order to ensure we are writing optimised front-end code that maximises our chance of scoring highly in these metrics.

Slow, Unoptimised Images

The first thing we should try to avoid is loading weighty, unoptimised images (or videos, for that matter) onto the page. On slower connections they can take a while to load in, which (if they take up a big chunk of the page) will lead to a worse LCP time. They can also cause a layout shift if other elements on the page load in faster than they do. If we absolutely need to use high-resolution assets, we should make sure they are as compressed and optimised as they possibly can be, and that they are served with explicit dimensions.
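As a minimal sketch (the file names and dimensions here are placeholders), a plain `img` tag can be made both lighter and layout-stable by combining a modern format, a `srcset`, and explicit dimensions:

```html
<!-- width/height let the browser reserve the image's space before it loads
     (preventing CLS), while srcset/sizes let it download a smaller file on
     small viewports. Avoid loading="lazy" on the LCP element itself. -->
<img
  src="/images/banner-800.webp"
  srcset="/images/banner-400.webp 400w,
          /images/banner-800.webp 800w"
  sizes="(max-width: 600px) 400px, 800px"
  width="800"
  height="450"
  alt="Homepage banner"
/>
```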

Next.js released an Image component in version 10 that uses a built-in image optimisation API. This optimises your images and then serves them directly from the Next.js web server. You can also provide it with width and height props to prevent CLS while the image is loading in.

Dynamically Injecting Content

On the topic of layout shift, we should also try to avoid dynamically injecting content onto a webpage. Doing this may reposition some of the already rendered elements on the page, as the injected element would load in at a later point, leading to a shift in the layout. We can avoid this shift when injecting content by reserving space on the page (e.g. by adding a div container) with the exact height and width of the injected elements. This will ensure no repositioning of the surrounding elements on the page when the external code is loaded in.
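As an illustration, a fixed-size placeholder for a third-party slot might look like this (the 300×250 dimensions are just a common ad size, chosen for the example):

```html
<!-- The container claims its final dimensions up front, so the surrounding
     elements don't move when the third-party content is injected later -->
<div class="ad-slot" style="width: 300px; height: 250px;">
  <!-- injected content ends up here -->
</div>
```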

What we should be doing 👍

Having covered what to avoid, let’s now discuss what we can actively do to score highly in these Core Web Vitals metrics.

Server Side Rendering 

Our best bet is to embrace Server Side Rendering. If you’re using tools like React to create great client-side content, Next.js can be integrated into your project so that your content and components are rendered on the server as well as on the client. This helps with our vitals: there will be minimal CLS between the server and client-side renders, and components will render sooner for a quicker LCP.

Caching & Bundling

Secondly, ensuring your assets (e.g. fonts or images) are cached correctly between page loads massively reduces the time it takes for them to load onto the page. Similarly, we should ensure all of our code is minified and bundled as much as it possibly can be. Luckily, most modern frameworks come with this bundling built in, so when the project is built for production the hard work is already done for us.
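Caching itself is configured on the server or CDN rather than in front-end code. As a sketch (nginx is chosen purely for illustration; any server or CDN can set the same header), fingerprinted static assets can be given a long-lived `Cache-Control` header:

```nginx
# Safe only for fingerprinted/versioned assets, where a content change
# also changes the file name
location ~* \.(woff2?|png|jpe?g|webp|svg|css|js)$ {
    add_header Cache-Control "public, max-age=31536000, immutable";
}
```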

DNS Prefetching

One last thing we can do to ensure our code is as optimised as possible (and therefore scores higher in the metrics) is to make the most of DNS prefetching. This is a form of resource hinting: the browser resolves the domain names of third-party origins before they are actually needed, so that when a link is clicked or an external resource is requested, part of the connection work has already been done and the content arrives sooner.
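Resource hints live in the page’s `head`. A minimal example (the Google Fonts origins are just common illustrations):

```html
<!-- dns-prefetch resolves the hostname early; preconnect goes further
     and sets up the TCP/TLS connection as well -->
<link rel="dns-prefetch" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
```

Use `preconnect` sparingly, as each hint does real connection work up front.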

Conclusion 

To conclude, there are many great reasons to try to score as highly as possible in each of the Core Web Vitals. As long as we make sure our code is as optimised as it can be, and we stay aware of the caveats around injecting content and heavy assets into the page, we should be on to a winner!
