StatusCake

7 stats about website downtime that will blow your mind


We believe that data speaks volumes, and that’s why we’ve put together the 7 most shocking stats from our uptime monitoring report. You have to see them to believe them!

How many people experience website downtime?

If we had £1 for every time someone said that website downtime wasn’t going to affect them, we’d each be millionaires hundreds of times over (and what a life that would be!). But unfortunately for these people, downtime doesn’t work like that: it doesn’t discriminate, and it will catch you out when you least expect it.

Our latest statistic found that:

71% of companies experienced unplanned partial website downtime as well as full unplanned website downtime, and 100% of companies experienced planned downtime. Partial downtime is where individual pages or areas of a website are unavailable; full downtime is where all pages are completely unavailable (StatusCake survey, 2021).

What happens if I don’t have an uptime or website monitoring solution?

It’s easy to think that a short amount of website downtime will have little or no effect on your revenue or your customers. Wrong. We’ve found time and time again that when your website goes down, there is a significant impact on a multitude of different areas of your business. But that’s not all: domain issues, server outages, slow page speeds and outdated SSL certificates can all cause significant damage too.

Take this statistic for example:

79% of websites we surveyed experienced one or more of the following:

  • Lost domain, either through domain hijackers or expiry
  • Loss of customers due to an insecure website (lack of SSL certificate)
  • Loss of revenue due to unforeseen downtime
  • Loss of trust from customers due to server downtime
  • Loss of customers to competitors due to low page speed
  • Loss of SEO rankings due to all of the above
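One of the risks listed above, an outdated SSL certificate, is straightforward to keep an eye on yourself. Below is a minimal, hypothetical sketch using only the Python standard library; the function names and the way of reading the certificate’s `notAfter` field are our own illustration, not StatusCake’s product or API.

```python
import ssl
import socket
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> int:
    """Days remaining given a certificate's notAfter string in the
    format returned by ssl.getpeercert(), e.g. 'Jun  1 12:00:00 2031 GMT'."""
    expiry = datetime.strptime(
        not_after, "%b %d %H:%M:%S %Y %Z"
    ).replace(tzinfo=timezone.utc)
    return (expiry - datetime.now(timezone.utc)).days

def cert_days_left(host: str, port: int = 443) -> int:
    """Fetch the live certificate for `host` and report days until it expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return days_until_expiry(cert["notAfter"])
```

Run on a schedule (e.g. a daily cron job), a check like this can warn you well before expiry — whereas your visitors would only find out via a browser security warning.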

How do people find out about website downtime?

It can be easy to think that you’ll “just know” when your website goes down, like a sixth sense. Unfortunately, this is rarely the case, and the likelihood of your team picking up on it before anyone else is also very slim.

See why we know this here:

8 out of 10 people who weren’t using an uptime monitoring tool said they found out about website downtime through customers emailing them or @’ing them on social media.

But what does website downtime actually mean?

If you’ve read all of the stats and are sat there wondering what on earth we actually mean by website downtime, you’re not the only one:

According to our recent survey of 1500 people, 32% of people didn’t know what website downtime meant.

Website downtime covers both partial and full outages: any period during which your website, or part of it, is offline or inaccessible.
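The partial-versus-full distinction can be sketched as a simple availability check across a site’s key pages. This is a minimal illustration, not StatusCake’s actual monitoring logic; the `classify_downtime` function, the injectable `fetch` parameter, and the URL list are all hypothetical, using only the Python standard library.

```python
from urllib.request import urlopen
from urllib.error import URLError

def classify_downtime(urls, fetch=None):
    """Probe a list of key pages and classify the site as
    'up', 'partial' (some pages down), or 'full' (all pages down).
    `fetch` is injectable so the logic can be tested without a network."""
    if fetch is None:
        def fetch(url):
            try:
                # Treat any error response or connection failure as "down".
                with urlopen(url, timeout=5) as resp:
                    return resp.status < 400
            except (URLError, OSError):
                return False
    results = [fetch(u) for u in urls]
    if all(results):
        return "up"
    if any(results):
        return "partial"   # individual pages or areas unavailable
    return "full"          # all pages completely unavailable
```

A monitoring tool is essentially this check run every minute from many locations, with alerting bolted on — which is why it beats waiting for a customer email.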

What do customers think of this?

We are all customers online, whether it’s for online banking, grocery shopping, or endless Just Eat deliveries (not me, of course). So what do we as customers think of website downtime? Ultimately, do we care?

When we surveyed 1500 people on how likely they were to revisit a website after failing to access it just once, only 11% said they would return, with many stating they would find it difficult to trust the website afterwards.

How often does website downtime really happen?

Out of our 140,000 customers that use our uptime monitoring tool, 32% consider themselves a small company, 41% a medium company, and 27% a large company. Those small companies experience downtime, on average, at least 18 times a month.

What’s one example of an outage that affected global companies?

There are plenty of examples we could use, especially with the likes of Facebook experiencing regular downtime and causing companies that use its paid ads feature to lose millions in revenue. But let’s take Fastly’s outage as an example: StatusCake saw 58% more websites go down during this period than in the previous week, proving how heavily reliant websites are on CDNs.


Want to know how much website downtime costs, and the impact it can have on your business?

Find out everything you need to know in our new 2021 uptime monitoring whitepaper.
