Want to know how much website downtime costs, and the impact it can have on your business?
Find out everything you need to know in our new 2021 uptime monitoring whitepaper



It’s that time of year again – no, not Christmas, but the hugely anticipated Black Friday, when discounts hit bigger numbers than the lottery and customers get into a bargain-hunting frenzy. But it’s not all fun and games for a company owner during the biggest sales season of the year; unfortunately, you’re more likely to suffer website issues than on an average day.
So why is that? Websites suffer during the week of 22nd November, and especially on 26th November itself, because of the huge volumes of traffic that hit them. Contrary to popular belief, a website can “break” surprisingly easily, and even more so during a sales period like this.
Surges in website traffic can cause trouble for your server, your page speed, and a whole host of other fundamental elements of a high-performing website.
It’s easy for us, as an uptime monitoring provider, to say “hey, you need our uptime monitoring solution”, but it’s true, and never more so than at this time of year. For example, if you get 20% more traffic to your website on 26th November but your checkout page goes down and you don’t find out for 20 minutes, how many sales have you lost? How much revenue does that equate to? How does that affect your Q4 goals?
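To put rough numbers on that scenario, here is a back-of-the-envelope sketch in Python. The traffic, conversion rate, and average order value below are hypothetical assumptions for illustration, not benchmarks; swap in your own figures.

```python
# Back-of-the-envelope estimate of revenue lost while a checkout page is down.
# All inputs are hypothetical assumptions for illustration only.

baseline_visitors_per_hour = 3_000      # your normal hourly traffic
black_friday_uplift = 0.20              # 20% more traffic than an average day
conversion_rate = 0.03                  # 3% of visitors normally complete checkout
average_order_value = 65.00             # average basket value in your currency
outage_minutes = 20                     # time before anyone notices the outage

visitors_per_hour = baseline_visitors_per_hour * (1 + black_friday_uplift)
visitors_during_outage = visitors_per_hour * (outage_minutes / 60)
lost_orders = visitors_during_outage * conversion_rate
lost_revenue = lost_orders * average_order_value

print(f"Estimated lost orders:  {lost_orders:.0f}")
print(f"Estimated lost revenue: {lost_revenue:,.2f}")
```

Even with conservative inputs, a 20-minute blind spot on the busiest shopping day of the year adds up quickly.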


Although Black Friday puts an intense amount of pressure on your website, it isn’t the only time you need a website monitoring tool working in the background. Your website can go down at any time, anywhere; it can take hours for you to realise, and even longer for your team to get it back up and running. If you’re not convinced, you may be interested to know that even the biggest websites in the world have experienced downtime, including Google, Facebook, and Slack, to name just a few.
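For a sense of what monitoring does in the background, here is a deliberately minimal sketch in Python: it polls a single URL and reports when it stops responding. This is an illustration only, not how our service works; a real monitoring tool checks from multiple locations, tracks response times, and alerts the right people immediately. The URL and check interval are placeholder assumptions.

```python
import time
import urllib.request

# Minimal illustration of an uptime check: poll one URL and report its status.
# A real monitoring service checks from many locations, records response times,
# and alerts your team the moment something goes down.
URL = "https://example.com/checkout"   # placeholder page to watch
INTERVAL_SECONDS = 60                  # how often to check

def is_up(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL responds with a non-error HTTP status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return 200 <= response.status < 400
    except OSError:  # covers URLError, HTTPError, and timeouts
        return False

while True:
    status = "UP" if is_up(URL) else "DOWN"
    print(f"{time.strftime('%H:%M:%S')} {URL} is {status}")
    time.sleep(INTERVAL_SECONDS)
```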
You can prepare for this Black Friday by taking advantage of our 40% discount on any paid plan. Stay online, drive revenue. Simple!