How do you keep your tests organised as your account grows? And, just as importantly, how do you update a setting across your entire test collection? With that in mind, today we're happy to announce two brand new features that will make your life with StatusCake easier.
You can now tag your tests. Tags can be anything you choose: for example, you might mark which client a site belongs to, or which server it sits on. You can then filter by these tags on the main status page, with the option of viewing either tests that match all of the selected tags or tests that contain any of them. This gives you an easy and fast way to filter straight to the results you're looking for. We've also added two new boxes on the All Status page that show you the combined uptime and the average performance for the filtered results. But we've not stopped there…
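To make the two matching modes concrete, here's a minimal sketch in Python of how "match all" versus "contain any" tag filtering behaves. The Test class and filter_tests helper are purely illustrative, not StatusCake's API.

```python
from dataclasses import dataclass, field

@dataclass
class Test:
    name: str
    tags: set[str] = field(default_factory=set)

def filter_tests(tests, selected, mode="any"):
    """Return tests whose tags match the selected tags.

    mode="all": a test must carry every selected tag.
    mode="any": a test must carry at least one selected tag.
    """
    selected = set(selected)
    if mode == "all":
        return [t for t in tests if selected <= t.tags]
    return [t for t in tests if selected & t.tags]

tests = [
    Test("shop homepage", {"client-a", "server-1"}),
    Test("shop checkout", {"client-a", "server-2"}),
    Test("blog", {"client-b", "server-1"}),
]

# Tests for client-a that are also on server-1 ("match all"):
print([t.name for t in filter_tests(tests, {"client-a", "server-1"}, mode="all")])
# Tests carrying either tag ("contain any"):
print([t.name for t in filter_tests(tests, {"client-a", "server-1"}, mode="any")])
```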
As of today, all our paid users can make use of bulk editing. With bulk editing you can select tests from your main status listing (or a set filtered down using tags!) and then update common settings such as test locations, check rate, trigger rate, and contact group. With this latest feature you can change a setting across all of your tests within seconds.
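Conceptually, a bulk edit is one change applied across a whole selection at once rather than test by test. Here's a hypothetical sketch of that idea; the field names mirror the settings mentioned above but are illustrative only, not StatusCake's actual data model.

```python
# Take a (possibly tag-filtered) selection of tests and apply the same
# change to every one of them in a single call.
tests = [
    {"name": "shop homepage", "check_rate": 300, "contact_group": "ops"},
    {"name": "shop checkout", "check_rate": 300, "contact_group": "ops"},
]

def bulk_update(selection, **settings):
    """Apply the same settings to every test in the selection."""
    for test in selection:
        test.update(settings)

# One call instead of editing each test individually:
bulk_update(tests, check_rate=60, contact_group="on-call")
print(tests)
```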
We’ve also continued to make tweaks and improvements throughout the system. A huge thanks to everyone who has provided feedback through our ideas system!