
Want to know how much website downtime costs, and the impact it can have on your business?
Find out everything you need to know in our new uptime monitoring whitepaper 2021



We recently sent out a customer survey in which we asked users what they thought of every aspect of StatusCake, and the aspect that came out on top was the reliability and trustworthiness of alerts. Alerting is the core of our product, so it makes sense that we want to get it right, and the survey showed we were hitting the nail on the head in almost all cases. But a niggling 2% of users rated it under 5/5, and it's only right that we don't ignore that.
Over the past few days we've been making micro improvements to the speed of delivery for downtime alerts, and together they add up to a powerful set of changes.
Firstly, we've made changes to the Alert Trigger Rate. The vast majority of our users have set their trigger rate to around 5 minutes, ensuring they don't get bothered by short periods of downtime. But what exactly happens at 5 minutes?
Previously, your checks would continue at their normal check rate, and on each check the system would look at whether the span between the first detected point of downtime and the current check was greater than your trigger rate, and if so send an alert. Sorry if that sounds confusing – it is! We've simplified things to improve how quickly you get alerted: with a 5 minute trigger rate you will now get an alert on that 5th minute, no matter your check rate. As soon as your site is detected as down we now check every 30 seconds until the trigger rate is hit, and we also run an extra check 20 seconds before your set trigger rate.
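To make the new timing concrete, here's a small sketch of the re-check schedule described above. The function name, constants, and structure are illustrative assumptions, not our actual implementation:

```python
# Illustrative sketch only -- names and values beyond the 30s/20s figures
# described above are assumptions, not StatusCake's production code.
CHECK_INTERVAL_DOWN = 30  # seconds between re-checks once downtime is detected
PRE_TRIGGER_CHECK = 20    # extra confirmation check this long before the trigger

def schedule_down_checks(trigger_rate_minutes: int) -> list[int]:
    """Return the seconds (after the first failure) at which re-checks run."""
    trigger_rate = trigger_rate_minutes * 60
    # Re-check every 30 seconds up to (but not including) the trigger point.
    checks = list(range(CHECK_INTERVAL_DOWN, trigger_rate, CHECK_INTERVAL_DOWN))
    # One extra confirmation check 20 seconds before the trigger.
    pre_check = trigger_rate - PRE_TRIGGER_CHECK
    if pre_check not in checks:
        checks.append(pre_check)
    # The alert fires at the trigger point itself if the site is still down.
    checks.append(trigger_rate)
    return sorted(checks)
```

With a 5 minute trigger rate this gives checks at 30s, 60s, ..., 270s, an extra one at 280s, and the alert at exactly 300s, regardless of your configured check rate.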
We've also introduced a better system for detecting the type of downtime and adjusting the confirmation servers accordingly. If one system detects downtime as a content match failure, then rather than making a single attempt to see whether the content match has failed, each confirmation server will now make 3 attempts at loading the test. This makes it much more likely to catch micro issues that only appear every so often.
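The confirmation logic above can be sketched roughly as follows. The `fetch` callable, function name, and default are hypothetical placeholders for illustration only:

```python
# Hedged sketch of the 3-attempt content-match confirmation described above.
# `fetch` is a hypothetical callable that returns the page body as a string.
def confirm_content_match(fetch, expected: str, attempts: int = 3) -> bool:
    """Return True if the expected content is found on any of `attempts` loads."""
    for _ in range(attempts):
        try:
            body = fetch()
        except Exception:
            continue  # a failed load counts as a failed attempt
        if expected in body:
            return True  # content found: not a genuine content-match failure
    return False  # all attempts missed the content: confirm the downtime
```

Retrying three times before confirming means a single transient blip on one load is much less likely to be reported as real downtime.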
I hope this helps explain the improvements we've rolled out today, but in case I've rambled on and made no sense, I'll summarise: we've got even better at sending alerts!