Want to know how much website downtime costs, and the impact it can have on your business?
Find out everything you need to know in our new 2021 uptime monitoring whitepaper.



How much does downtime really ‘cost’ you? Benjamin Franklin is credited with coining the phrase “time is money”. So when your website goes down, even if only for an hour, do you really know how much it has cost you?
In 2008, online retailer Amazon.com suffered a two-hour outage. In that time it not only lost around $29,000 – $31,000 per minute (an estimated $3.6 million in total), but its share price also fell by 4.1% on the day – wiping around $3.12 billion off its stock market value.
Amazon isn’t alone in falling prey to website downtime. eBay reportedly lost around $4 million in revenue from just one of its many infrastructure failures; perhaps its worst episode came in January 2001, when the auction site was down for over ten hours after both its main servers and its back-up systems failed.
But as we saw in an earlier blog post, “The Importance of Website Monitoring”, it’s not just lost revenue that you should be concerned about when your website goes down.
During the downtime at Amazon and eBay, where did their customers go? Some may argue that Amazon customers simply came back later, knowing it to be a temporary fault, but is that really the case? Isn’t it far easier for customers in that “ready-to-purchase” frame of mind to simply click away to another website and buy the book, CD or DVD they’ve been looking for?
When UK food retailer Sainsbury’s experienced its own downtime in 2008, the data company Experian noted that 8.36% of Sainsbury’s traffic went to its main competitor Tesco.com and a further 1.38% went to ASDA. And of that 10% or so of traffic that went to competitors, how much of it ever went back to shop at Sainsbury’s online again – or at least to the same extent?
There seems little doubt, then, that unless you’re in an online business where you are truly the only player in the market, or your brand is exceptionally strong and dominant, you cannot afford to let your website go down.
Could these outages have been prevented? Yes, in part – however big your infrastructure, nothing is foolproof and 100% uptime can never be guaranteed. But perhaps the bigger lesson is this: eBay’s 2001 downtime was reportedly due to the company failing to upgrade its hardware as had been recommended. By cutting corners you’re only damaging the long-term success and stability of your business.
But most of all, make sure you use website monitoring. That way, the moment there is a problem with your website you’re on to it, and you can get it solved before you’ve lost your customers and your reputation!
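To make that concrete, here is a minimal sketch of what an uptime check does under the hood: a loop that requests your site at a fixed interval and raises an alert when it stops responding. The URL, check interval and alert mechanism below are illustrative placeholders, not a real service’s configuration – a dedicated monitoring service layers multi-location checks, escalation and reporting on top of this basic idea.

```python
import time
from urllib.request import urlopen

SITE_URL = "https://www.example.com/"  # placeholder: the site you want to watch
CHECK_INTERVAL = 60                    # seconds between checks (illustrative)

def site_is_up(url, timeout=10):
    """Return True if the URL answers with a non-error HTTP status."""
    try:
        with urlopen(url, timeout=timeout) as response:
            # urlopen raises HTTPError (an OSError subclass) for 4xx/5xx
            # statuses, so reaching this point means the site responded.
            return response.status < 400
    except OSError:
        # Covers connection failures, DNS errors, timeouts and HTTP errors.
        return False

def alert(message):
    # Placeholder: a real monitor would send an email, SMS or chat message.
    print(message)

if __name__ == "__main__":
    while True:
        if not site_is_up(SITE_URL):
            alert(f"DOWN: {SITE_URL} failed its uptime check")
        time.sleep(CHECK_INTERVAL)
```

Even this toy version shows why an external monitor matters: the check has to run from somewhere other than the server it is watching, otherwise an outage takes the watchdog down with it.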