In the previous post, we looked at how alert noise is rarely accidental. It’s usually the result of sensible decisions layered over time, until responsibility becomes diffuse and response slows.
One of the most persistent assumptions behind this pattern is simple. If enough people are notified, someone will take responsibility.
After more than fourteen years of working with engineering teams of every size and shape, we’ve seen this assumption fail repeatedly. Not because people don’t care, but because being notified is not the same thing as being responsible.
In many organisations, notification lists grow as a substitute for clarity. A name gets added because someone “might need to know”. A team is included because they were involved last time. A senior engineer is copied “just in case”.
Each decision makes sense in isolation. Over time, however, the list starts to resemble a team. But it isn’t one.
A team has shared expectations, defined roles, and a clear understanding of who does what when something goes wrong. A notification list is just a delivery mechanism. It can distribute information, but it cannot assign responsibility.
Treating the two as equivalent is where many alerting systems quietly break down.
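To make that distinction concrete, here is a minimal sketch in Python (3.10+, with hypothetical names, not any real alerting API) of what a notification list actually does. Delivery is the only state that changes; nothing ever records who is responsible.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    message: str
    delivered_to: list[str] = field(default_factory=list)
    owner: str | None = None  # a notification list never sets this

def notify_list(alert: Alert, recipients: list[str]) -> None:
    """Fan the alert out to everyone on the list (a stand-in for email/chat delivery)."""
    for recipient in recipients:
        alert.delivered_to.append(recipient)

alert = Alert("checkout latency above threshold")
notify_list(alert, ["alice", "bob", "platform-team", "eng-leads"])

print(alert.delivered_to)  # four parties informed
print(alert.owner)         # None: awareness was distributed, responsibility was not
```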
Consider a familiar pattern.
An alert fires and is sent to multiple individuals, mailing lists, and chat channels. Everyone who might be relevant can see it. The signal is real. The information is there.
And yet, nothing happens immediately.
Some engineers begin investigating quietly. Others wait, assuming someone closer to the system or more senior will act. A few hesitate, unsure whether this is theirs to own or merely something they should be aware of.
Minutes pass.
This is often framed as a cultural problem: people aren’t proactive enough, or ownership isn’t taken seriously enough. In practice, it’s far more often a systems problem. The alert has made people aware, but it hasn’t made anyone responsible.
Notification lists create a sense of coverage.
If an incident escalates, it’s easy to say: “Everyone was notified.” From a risk perspective, that feels safer than relying on a single role or individual. But coverage and accountability are not the same thing.
When responsibility is implied rather than explicit, people fall back on social cues. They wait for signals that someone else is acting. They look for confirmation before stepping in. They avoid duplicating effort or overstepping perceived boundaries.
This isn’t apathy. It’s rational human behaviour under uncertainty (psychologists call it diffusion of responsibility). And the more people are included, the stronger the effect becomes.
A subtle but important point is this. Teams don’t form at the moment an alert fires.
They exist beforehand through clear ownership, shared understanding, and agreed escalation paths. When those structures are missing, a notification list doesn’t create them. It simply exposes the gap.
Across thousands of conversations with engineering teams, we hear the same post-incident reflection again and again: “Everyone saw it, but no one was sure who should act.”
That isn’t a failure of effort or care. It’s a predictable outcome of a system that distributes information without assigning responsibility.
Effective alerting systems do one thing above all else: they make responsibility explicit.
They answer, immediately and unambiguously: who owns this alert right now, what that person is expected to do, and when and to whom it escalates if nothing happens.
Notification lists can support that design, but they cannot replace it. Without clear ownership, adding more recipients only increases hesitation, cognitive load, and delay.
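As a rough illustration of what making responsibility explicit can look like, here is a hedged sketch of a timed escalation policy. The step names and timings are assumptions for illustration, not any particular product’s behaviour; the point is that the system resolves exactly one owner at any moment, and responsibility moves on a timer instead of fanning out.

```python
from dataclasses import dataclass

@dataclass
class EscalationStep:
    owner: str            # the single person or rota responsible at this step
    timeout_minutes: int  # how long before responsibility moves on

# An explicit policy answers: who owns this now, and what happens if they don't act.
# Hypothetical values for illustration only.
POLICY = [
    EscalationStep("on-call-primary", 5),
    EscalationStep("on-call-secondary", 10),
    EscalationStep("engineering-manager", 15),
]

def owner_at(minutes_unacknowledged: int, policy: list[EscalationStep]) -> str:
    """Resolve the single responsible party for an alert that is still unacknowledged."""
    elapsed = 0
    for step in policy:
        elapsed += step.timeout_minutes
        if minutes_unacknowledged < elapsed:
            return step.owner
    return policy[-1].owner  # responsibility never evaporates

print(owner_at(2, POLICY))   # on-call-primary
print(owner_at(12, POLICY))  # on-call-secondary
print(owner_at(40, POLICY))  # engineering-manager
```

However the policy is implemented, the design property is the same: at every point in time, exactly one party is accountable, and everyone else is merely informed.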
In the next post, we’ll look at what it actually means to design alerts for ownership — and why being “kept in the loop” is often a signal that the wrong tool is being used.