
For many of our users, reacting to downtime data sent from a monitoring system such as StatusCake is a 24-hour job, and the process involves many staff with varying responsibilities who sometimes work quite different hours. This is particularly true of companies that run "follow-the-sun" operations, with each global DevOps team picking up the baton from the last as its time zone starts the working day.
For this reason, it can be very useful to have a way of controlling which team alerts go to at different times of day. Today we'll take you through a method for using our Maintenance Windows feature to do exactly that.
For example, let's start with a scenario where you have two separate teams, each covering a different part of the day: one on call during daytime hours and one on call overnight.
To achieve this you would set up two tests in StatusCake. Each test would be identical to the other, for instance in check rate, confirmation servers and so on; however, there would be two differences: the Contact Group attached to each test, and the Maintenance Window applied to it. Each test alerts only one team's Contact Group, and its Maintenance Window mutes alerts for the hours that team is off call.
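To make the set-up concrete, here's a minimal sketch of the two-test arrangement. The field names, team names, and the 08:00–20:00 UTC hand-off are all illustrative assumptions of ours, not StatusCake API fields:

```python
from dataclasses import dataclass

# Illustrative only: these field and team names are our own,
# not StatusCake API fields.
@dataclass
class UptimeTest:
    name: str
    url: str
    check_rate_seconds: int    # identical across both tests
    confirmation_servers: int  # identical across both tests
    contact_group: str         # difference 1: who gets alerted
    muted_hours_utc: tuple     # difference 2: (start, end) hours when alerts are muted

# Assumption: Team A is on call 08:00-20:00 UTC, so its test is muted 20:00-08:00.
test_team_a = UptimeTest(
    name="example.com (Team A)",
    url="https://example.com",
    check_rate_seconds=60,
    confirmation_servers=2,
    contact_group="team-a-oncall",
    muted_hours_utc=(20, 8),
)

# Team B covers the remaining hours, so its test is muted 08:00-20:00.
test_team_b = UptimeTest(
    name="example.com (Team B)",
    url="https://example.com",
    check_rate_seconds=60,
    confirmation_servers=2,
    contact_group="team-b-oncall",
    muted_hours_utc=(8, 20),
)
```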
This ensures that when the site goes down it will only alert the team on call. You can, of course, add as many teams as you like by following the same set-up process: for three teams, add a third test and set its Contact Group and Maintenance Window accordingly.
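To illustrate the resulting routing, here's a sketch (continuing the example above) of how muting one test at a time leaves exactly one team to be paged. The wrap-past-midnight handling is our own illustration of the idea, not StatusCake's internal logic:

```python
def window_is_active(window: tuple, hour: int) -> bool:
    """True if the UTC hour falls inside a (start, end) window,
    including windows that wrap past midnight."""
    start, end = window
    if start <= end:
        return start <= hour < end
    return hour >= start or hour < end

def teams_alerted(tests: list, hour: int) -> list:
    """Tests whose Maintenance Window is inactive alert their Contact Group."""
    return [t.contact_group for t in tests
            if not window_is_active(t.muted_hours_utc, hour)]

# Continuing the example above: at 03:00 UTC only Team B is paged,
# and at 14:00 UTC only Team A.
assert teams_alerted([test_team_a, test_team_b], 3) == ["team-b-oncall"]
assert teams_alerted([test_team_a, test_team_b], 14) == ["team-a-oncall"]
```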
Once you've set everything up, you will have an on-call schedule in which each team receives alerts only during its own on-call hours.
We already have this use case working for quite a few of our customers who don't want to use additional third-party integrations to handle alert scheduling. If you have any questions about this use case, or have a great use case of your own that you'd like to share with us, please let us know.