Want to know how much website downtime costs, and the impact it can have on your business?
Find out everything you need to know in our 2021 uptime monitoring whitepaper.



Downtime happens; it’s a fact of running a website. But understanding and logging what caused each outage helps you react better in the future. With this in mind, we’re happy to announce annotations on downtime. You can now go to any one of your tests, click a period (whether downtime or uptime), and annotate it. This means you or a member of your team can quickly see what caused each historical outage on your site.
To annotate a span, simply go to your control panel, click a test, and then click a span within the “Status Periods” section. You can then enter any information you want about that span.
Beyond internal use, you can also optionally share your annotations on your public reporting page. This means you no longer need to leave spans of downtime unexplained; instead, you can tell users whether an outage was scheduled maintenance or what actually caused it. Sharing annotations publicly is optional, but doing so helps build trust with your users.
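For teams that keep notes on outages in their own tooling, the same idea can be sketched in code. This is a minimal illustration only: the function, field names, and the public-sharing flag below are hypothetical, not a documented API, so check your provider's API reference for the real endpoint and schema.

```python
import json


def build_annotation(test_id, period_start, note, public=False):
    """Assemble a hypothetical downtime-annotation payload.

    All field names here are illustrative placeholders.
    """
    return {
        "test_id": test_id,
        "period_start": period_start,  # ISO 8601 start of the status period
        "annotation": note,            # free-text explanation of the outage
        "share_publicly": public,      # mirrors the public-reporting toggle
    }


# Example: annotate a downtime span as planned maintenance.
payload = build_annotation(
    test_id=123456,
    period_start="2021-03-04T02:15:00Z",
    note="Scheduled database maintenance",
    public=True,
)
print(json.dumps(payload, indent=2))
```

Keeping the annotation as structured data like this makes it easy to attach the same note to both internal logs and a public status page.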
It’s that simple!