Want to know how much website downtime costs, and the impact it can have on your business?
Find out everything you need to know in our new uptime monitoring whitepaper for 2021.

Downtime: how much does it really ‘cost’ you? Benjamin Franklin coined the phrase “time is money”. So when your website goes down, even if only for an hour, do you really know how much it has ‘cost’ you?
In 2008 online retailer Amazon.com suffered a two-hour outage. In that time it not only lost around $29,000 – $31,000 per minute (a total loss estimated at $3.6 million), but its share price fell by 4.1% on the day, wiping around $3.12 billion off its stock market value.
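The arithmetic behind that headline figure is simple: direct loss is roughly revenue per minute multiplied by outage length. Here's a back-of-the-envelope sketch in Python, using the estimates quoted above (illustrative figures only; substitute your own revenue-per-minute to estimate your exposure):

```python
# Back-of-the-envelope downtime cost, using the figures quoted above.
# These numbers are illustrative estimates, not exact Amazon figures.
revenue_per_minute = 30_000   # midpoint of the $29,000 - $31,000/min estimate
outage_minutes = 120          # the two-hour outage

direct_loss = revenue_per_minute * outage_minutes
print(f"Estimated direct revenue loss: ${direct_loss:,}")  # $3,600,000
```

Note that this captures only the direct revenue loss; as the share-price fall shows, the indirect damage can be far larger.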
Amazon isn’t alone in falling prey to website downtime. eBay reportedly lost around $4 million in revenue from just one of its many infrastructure failures; perhaps its worst episode came in January 2001, when the auction site was down for over ten hours after both its main servers and its back-up systems failed.
But as we saw in an earlier blog post, “The Importance of Website Monitoring”, it’s not just lost revenue you should be concerned about when your website goes down.
During the downtime at Amazon and eBay, where did their customers go? Some may argue that Amazon customers simply came back later, knowing it to be a temporary fault, but is that really the case? Isn’t it far easier for customers in that “ready-to-purchase” frame of mind to simply click away to another website and buy the book, CD or DVD they’ve been looking for?
When UK food retailer Sainsbury’s experienced its own downtime in 2008, the data company Experian noted that 8.36% of Sainsbury’s traffic went to its main competitor Tesco.com and a further 1.38% went to ASDA. And of that 10% or so of traffic that went to competitors, how many ever came back to shop at Sainsbury’s online again, or at least to the same extent?
There seems little doubt, then, that unless you’re in an online business where you are truly the only player in the market, or your brand really is that strong and dominant, you cannot afford to let your website go down.
So is the answer simply to invest in bigger and better infrastructure? In part, yes, but however big your infrastructure, nothing is fool-proof and 100% uptime can never be guaranteed. Perhaps the bigger lesson is this: eBay’s 2001 downtime was reportedly due to the company failing to upgrade its hardware as had been recommended. By cutting corners you’re only damaging the long-term success and stability of your business.
But most of all, make sure you use website monitoring. That way, the moment there is a problem with your website you’re on to it, getting the problem solved before you’ve lost your customers and your reputation!
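To make the idea concrete, here is a minimal sketch of what an uptime check does under the hood, assuming Python and the widely used requests library. The URL and alert step are placeholders; a real monitoring service adds checks from multiple locations, retries to rule out flukes, and alerting via email, SMS or phone.

```python
# Minimal uptime check sketch (illustration only, not a full monitoring service).
import requests

SITE = "https://www.example.com"  # placeholder; point this at your own site

def site_is_up(url: str, timeout: float = 10.0) -> bool:
    """Return True if the site responds without a server error."""
    try:
        response = requests.get(url, timeout=timeout)
        return response.status_code < 500  # treat 5xx responses as "down"
    except requests.RequestException:
        return False  # DNS failure, timeout, connection refused, etc.

if not site_is_up(SITE):
    # In practice this is where you'd page someone, not just print.
    print(f"ALERT: {SITE} appears to be down")
```

Run something like this on a schedule (say, every minute) and the gap between a failure happening and you finding out shrinks from “whenever a customer complains” to sixty seconds.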