For many of our users, reacting to downtime data sent from a monitoring system such as StatusCake is a 24-hour job, and the process involves many staff members who have varying responsibilities and sometimes work quite different hours. This is particularly true of companies that run “follow-the-sun” operations, with each global DevOps team picking up the baton from the last as its time zone starts the working day.
For this reason, it can be very useful to have a way of routing alerts to different teams at different times of day. Today we’ll take you through a method for doing exactly that using our Maintenance Windows feature.
For example, let’s start with a scenario where you have two separate teams:
- Team A, on call during the day (say 08:00 to 20:00 UTC)
- Team B, on call overnight (20:00 to 08:00 UTC)
To achieve this you would set up two tests in StatusCake. Each test would be identical to the other, for instance in check rate, confirmation servers and so on; however, there would be two differences:
1. Each test would be assigned a different Contact Group, one per team.
2. Each test would be given a Maintenance Window covering the hours when its team is off call, so that alerts are muted outside that team’s shift.
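As a concrete illustration of the first difference, here is how the two tests might be created through the StatusCake v1 REST API. This is a minimal sketch, not a drop-in script: the endpoint path, field names, and response shape are assumptions to verify against the public API reference, and the token, URL, and contact group IDs are placeholders you would replace with your own.

```python
# A minimal sketch using the StatusCake v1 REST API via the requests library.
# Endpoint path, field names, and response shape are assumptions based on the
# public API reference -- verify them before relying on this.
import requests

API = "https://api.statuscake.com/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # placeholder token

# Shared settings: both tests are identical apart from the contact group.
common = {
    "website_url": "https://www.example.com",  # placeholder site
    "test_type": "HTTP",
    "check_rate": 60,   # seconds between checks
    "confirmation": 2,  # number of confirmation servers
}

# One contact group per team (the IDs here are placeholders).
teams = {"Team A": "12345", "Team B": "67890"}

test_ids = {}
for team, contact_group in teams.items():
    resp = requests.post(
        f"{API}/uptime",
        headers=HEADERS,
        data={
            **common,
            "name": f"example.com ({team})",
            "contact_groups[]": contact_group,
        },
    )
    resp.raise_for_status()
    # Assumed response shape: {"data": {"new_id": "..."}}
    test_ids[team] = resp.json()["data"]["new_id"]
```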
This ensures that when the site goes down it will only alert the team on call. You can, of course, add as many teams as you like following the same set-up process: for three teams, add a third test and set its Contact Group and Maintenance Window accordingly.
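Continuing the sketch, the matching Maintenance Windows could be created like this, using the example shift times from the scenario above. Again, the field names (“start”, “end”, “repeat_interval”, “tests[]”) are assumptions to check against the API reference.

```python
# Continuing the sketch: create a daily-repeating Maintenance Window per test,
# covering the hours that team is *off* call, so alerts are muted outside its
# shift. Reuses API, HEADERS, and test_ids from the previous snippet.
windows = {
    "Team A": ("2021-01-01T20:00:00Z", "2021-01-02T08:00:00Z"),  # mute overnight
    "Team B": ("2021-01-01T08:00:00Z", "2021-01-01T20:00:00Z"),  # mute daytime
}

for team, (start, end) in windows.items():
    resp = requests.post(
        f"{API}/maintenance-windows",
        headers=HEADERS,
        data={
            "name": f"{team} off-shift",
            "start": start,
            "end": end,
            "repeat_interval": "1d",    # repeat every day
            "tests[]": test_ids[team],  # mute only this team's test
        },
    )
    resp.raise_for_status()
```

With these two windows repeating daily, an outage between 08:00 and 20:00 UTC alerts only Team A, and one outside those hours alerts only Team B.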
Once you’ve set everything up you will have an on-call schedule as shown in the diagram below:
We already have this use case working for quite a few of our customers who don’t want to use additional third-party integrations to handle alert scheduling. If you have any questions about this use case, or indeed have any great use cases of your own that you’d like to share with us, then please let us know.