In the previous posts, we’ve looked at how alert noise emerges from design decisions, why notification lists fail to create accountability, and why alerts only work when they’re designed around a clear outcome.
Taken together, these ideas point to a broader conclusion: alerting is not just a technical system, it’s a socio-technical one.
Alerting systems encode assumptions about how people behave, how responsibility is distributed, and how decisions are made under pressure. When those assumptions don’t match reality, the system fails, even if the underlying technology works exactly as designed.
Every alerting setup reflects a set of beliefs, whether they’re explicit or not.
Who gets alerted implies who is trusted to act.
How many people are notified implies how confident the organisation is in its ownership model.
What information is included implies what the system believes matters in the moment.
When alerts are broadcast widely, the system is effectively saying: we’re not sure who should handle this. When alerts contain excessive context, it’s often because the system doesn’t trust the recipient to know what to do without it.
These are not technical shortcomings. They are organisational signals.
Many alerting systems evolve as a response to uncertainty.
A missed incident leads to wider notifications.
A slow response leads to more context.
An unclear handoff leads to additional escalation paths.
Each change is rational in isolation. Over time, however, the system begins compensating for gaps elsewhere, whether that’s unclear ownership, fragile trust, or ambiguous expectations, rather than addressing them directly.
The result is a system that interrupts many people, moves slowly, and produces stress without improving outcomes.
It’s tempting to treat alerting problems as tooling problems. Adjust thresholds. Add integrations. Create more rules.
Those changes can help at the margins, but they don’t resolve the underlying issue if the design assumptions remain unexamined.
No amount of configuration can make up for uncertainty about who owns a problem.
No escalation policy can substitute for clarity about what action is expected. And no alert volume reduction will help if alerts are being used to achieve outcomes they weren’t designed for.
The hard work here isn’t technical. It’s conceptual.
Good alerting systems are designed with a clear understanding of how people actually behave during incidents.
They assume limited attention, incomplete information, and hesitation under ambiguity. They reduce the need for interpretation and make the next step obvious.
When an alert fires, the recipient shouldn’t need to ask: Is this mine to handle? What am I expected to do? How urgent is it?
If those questions arise, the system has already failed them.
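To make that concrete, here’s a minimal sketch of what it looks like when ownership, expected action, and urgency are encoded in the alert itself rather than left to interpretation. The names and fields are hypothetical, not any particular tool’s API; it’s an illustration of the principle, not an implementation.

```python
from dataclasses import dataclass

# A hypothetical alert definition that answers the three questions up front,
# so the recipient doesn't have to interpret the alert before acting.
@dataclass
class AlertDefinition:
    name: str     # what condition fired
    owner: str    # who is expected to act (one explicit owner, not a list)
    action: str   # the obvious next step, e.g. a runbook link
    urgency: str  # how quickly a response is expected

# Example: one owner, one obvious next step, explicit urgency.
checkout_latency = AlertDefinition(
    name="checkout_latency_p95_above_2s",
    owner="payments-oncall",
    action="https://runbooks.example.com/checkout-latency",  # hypothetical URL
    urgency="respond within 15 minutes",
)
```

Compare this with the broadcast pattern discussed earlier: a long recipient list and a wall of context signal that nobody in particular is expected to act, and that the system doesn’t trust anyone to know what to do.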
Over time, mature organisations tend to converge on similar alerting characteristics.
Alerts are fewer, but more trusted.
Ownership is explicit, not implied.
Visibility is provided through the right channels, not forced through alerts.
And responsibility is designed into the system, rather than inferred socially.
This isn’t about perfection. It’s about alignment between technology, process, and human behaviour.
Alerting systems work best when they reinforce clarity, not compensate for its absence.
Alert noise isn’t an accident. Notification lists aren’t teams. And alerts without outcomes are just interruptions.
Seen together, these aren’t isolated problems. They’re symptoms of systems that haven’t been designed with humans in mind.
Treating alerting as a socio-technical system doesn’t make the problem simpler, but it does make it solvable. Because once design assumptions are visible, they can be challenged, refined, and improved.
And when that happens, alerting stops being a source of stress, and starts being what it was meant to be all along: a system that helps people act, clearly and confidently, when it matters most.