
Want to know how much website downtime costs, and the impact it can have on your business?
Find out everything you need to know in our new uptime monitoring whitepaper 2021



Core Web Vitals are a set of metrics that allow you to determine how fast, visually stable and responsive your site is. The front-end carries the bulk of the responsibility for making sure your website scores highly in each of these metrics, so in this blog post I will outline what we should be doing (as well as what we should be avoiding) to score highly in them.
To meet a good standard for each of these metrics, we first need to understand what they are. There are three primary metrics, each of which I will outline below.
The first metric is LCP, which stands for largest contentful paint. This measures the time it takes for the largest piece of content visible in the viewport (e.g. a banner or a hero image) to fully render and become visible.
The second metric tracks the responsiveness of your site and is known as FID (first input delay). It measures the delay between a user's first interaction with the page (e.g. clicking a button or typing into an input field) and the moment the browser is actually able to begin processing that interaction.
The third and final metric measured under the Core Web Vitals is CLS (cumulative layout shift). This tracks how much the visible elements on the page unexpectedly move around as content loads in.
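To make CLS concrete: each unexpected layout shift is scored as the impact fraction (how much of the viewport the moving elements occupy) multiplied by the distance fraction (how far they moved, relative to the viewport's largest dimension), and those scores are added up. A minimal sketch of that calculation (the helper names are mine, not part of any browser API):

```javascript
// Score a single layout shift, per the Core Web Vitals definition:
// impactFraction: fraction of the viewport affected by moving elements (0..1)
// distanceFraction: greatest move distance divided by the viewport's
//                   largest dimension (0..1)
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// A simple CLS approximation: sum the individual shift scores.
// (Browsers expose the real values through the Layout Instability API;
// this helper just adds a list of them up.)
function cumulativeLayoutShift(shiftScores) {
  return shiftScores.reduce((total, score) => total + score, 0);
}

// Example: a banner pushes 50% of the viewport down by 25% of its height,
// then an ad moves 10% of the viewport by 10%.
const shifts = [layoutShiftScore(0.5, 0.25), layoutShiftScore(0.1, 0.1)];
console.log(cumulativeLayoutShift(shifts)); // ≈ 0.135
```

Even two modest shifts like these push the page out of the "good" range, which is why reserving space for late-arriving content matters so much.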
So how do we track these metrics on our site? pagespeed.web.dev is a brilliant tool for this. Just type in the URL you want to measure and it will give you an in-depth report (for both the mobile and desktop versions of the page) on how that webpage scores in each of the metrics.
You will receive a label of either good (green), needs improvement (amber) or poor (red) for each metric. These labels are important: a 'good' label across the board not only means the users of your site get a much slicker experience, it can also earn you a potential SEO ranking boost, since Google uses Core Web Vitals as a ranking signal.
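Google publishes fixed thresholds for these labels: LCP is good at 2.5s or under and poor above 4s; FID is good at 100ms or under and poor above 300ms; CLS is good at 0.1 or under and poor above 0.25. A small helper to map a raw measurement to its label (the function is illustrative, but the thresholds are Google's published values):

```javascript
// Core Web Vitals thresholds as published by Google.
// LCP and FID are in milliseconds; CLS is unitless.
const THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 },
  FID: { good: 100, poor: 300 },
  CLS: { good: 0.1, poor: 0.25 },
};

// Map a measured value to the label PageSpeed-style tools report.
function rateVital(metric, value) {
  const t = THRESHOLDS[metric];
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs improvement';
  return 'poor';
}

console.log(rateVital('LCP', 1800)); // good
console.log(rateVital('FID', 150)); // needs improvement
console.log(rateVital('CLS', 0.3)); // poor
```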
So now let’s get into the details on what to avoid, in order to ensure we are writing optimised front-end code that maximises our chance of scoring highly in these metrics.
The first thing we should try to avoid is loading weighty, unoptimised images (or videos, for that matter) onto the page. On slower connections they can take a while to load in, and if they take up a big chunk of the page that will drag down your LCP. They can also cause a layout shift if other elements on the page load in faster than they do. We can get around this by making sure that, when we absolutely need to use high-resolution assets, they are as compressed and optimised as they possibly can be.
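One practical way to keep image payloads down is to serve responsive variants via a `srcset` attribute, so smaller screens fetch smaller files. Here is a hedged sketch of a helper that builds that attribute string (the `name-width.ext` file naming convention is my assumption, not a standard — use whatever your build pipeline produces):

```javascript
// Build a `srcset` attribute value from a base filename and a list of
// widths, assuming resized copies exist on disk with a -<width> suffix,
// e.g. banner-640.jpg and banner-1280.jpg.
function buildSrcset(baseName, ext, widths) {
  return widths
    .map((w) => `${baseName}-${w}.${ext} ${w}w`)
    .join(', ');
}

console.log(buildSrcset('banner', 'jpg', [640, 1280]));
// banner-640.jpg 640w, banner-1280.jpg 1280w
```

The resulting string goes into `<img srcset="..." sizes="...">`, letting the browser pick the smallest variant that still looks sharp at the rendered size.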
NextJS released an image component in version 10 that uses a built-in image optimisation API. This optimises your images and then serves them directly from the NextJS web server. You can also provide it with width and height props to prevent CLS while the image is loading in.
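As an illustration, the breakpoints that optimisation API generates variants for can be tuned in next.config.js. The values below are example numbers, not recommendations:

```javascript
// next.config.js — a sketch of tuning NextJS's built-in image optimisation.
module.exports = {
  images: {
    // Widths generated for images that span the full viewport.
    deviceSizes: [640, 750, 1080, 1920],
    // Widths generated for smaller, fixed-size images.
    imageSizes: [16, 32, 64, 128],
  },
};
```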
On the topic of layout shift, we should also try to avoid dynamically injecting content into a webpage without reserving room for it. Doing so may reposition elements that have already rendered, since the injected element loads in at a later point, causing a shift in the layout. We can avoid this by reserving space on the page (e.g. with a div container) set to the exact height and width of the injected content. This ensures no repositioning of the surrounding elements when the external code loads in.
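For example, if you know an embed's intrinsic dimensions, you can compute how much height to reserve for any rendered width before the content arrives (the helper name is mine, purely for illustration):

```javascript
// Given an asset's intrinsic dimensions, compute the height to reserve
// for a container rendered at `renderedWidth`, preserving aspect ratio.
function reservedHeight(intrinsicWidth, intrinsicHeight, renderedWidth) {
  return Math.round(renderedWidth * (intrinsicHeight / intrinsicWidth));
}

// A 1280x720 video embed rendered at 640px wide needs 360px reserved,
// e.g. via an inline style or the CSS aspect-ratio property.
console.log(reservedHeight(1280, 720, 640)); // 360
```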
Having covered the things to avoid, let's now discuss what we can actively do to score highly in these Core Web Vitals metrics.
Our best bet is to embrace Server Side Rendering. If you are using a tool like React to create great client-side content, NextJS can be integrated into your project so that your content and components render on the server as well as on the client. This helps with our vitals: there is minimal CLS between the server response and the client-side render, and components appear sooner for a quicker LCP.
Secondly, ensuring your assets (e.g. fonts or images) are cached correctly between page loads massively speeds up the time it takes for them to load onto the page. Similarly, we should ensure all of our code is minified and bundled as much as it possibly can be. Luckily, most modern frameworks come with this bundling built in, so when the project is built for production the hard work is already done for us.
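As a sketch of what "cached correctly" can mean in practice: fingerprinted build assets (bundles with a content hash in the filename) can be cached aggressively, while HTML should always be revalidated so users pick up new deployments. The mapping below is a common convention, not a universal rule:

```javascript
// Map request paths to Cache-Control headers: hashed static assets never
// change (a new deploy produces a new filename), so browsers can keep
// them for a year; everything else is revalidated on each request.
function cacheControlFor(path) {
  if (/\.(js|css|woff2?|png|jpe?g|webp)$/.test(path)) {
    return 'public, max-age=31536000, immutable';
  }
  return 'no-cache';
}

console.log(cacheControlFor('/static/main.abc123.js'));
// public, max-age=31536000, immutable
console.log(cacheControlFor('/index.html')); // no-cache
```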
One last thing we can do to ensure our code is as optimised as possible (and therefore score higher in the metrics) is to make the most of resource hints such as DNS prefetching. A dns-prefetch hint asks the browser to resolve a third-party domain name before any resource is actually requested from it, shaving that lookup time off requests for things like fonts, analytics and embeds, so that content arrives sooner.
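In plain HTML these hints are just link tags in the document head. A small helper that emits them (illustrative only — in practice you would usually write the tags directly into your template):

```javascript
// Emit <link rel="dns-prefetch"> tags for third-party origins so the
// browser can resolve their DNS before any request is made to them.
function dnsPrefetchTags(origins) {
  return origins
    .map((origin) => `<link rel="dns-prefetch" href="${origin}">`)
    .join('\n');
}

console.log(dnsPrefetchTags(['https://fonts.gstatic.com']));
// <link rel="dns-prefetch" href="https://fonts.gstatic.com">
```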
To conclude, there are many great reasons to aim for the highest possible score in each of the Core Web Vitals. As long as we make sure our code is as optimised as it can be, and we stay aware of the caveats around injecting code and heavy assets into the site, we should be on to a winner!