In the first two posts of this series, we explored how alert noise emerges from design decisions, and why notification lists fail to create accountability when responsibility is unclear.
There’s a deeper issue underneath both of those problems. Many alerting systems are designed without being clear about the outcome they’re meant to produce.
When teams don’t explicitly decide what they want to happen as a result of a signal, they default to the loudest option available. Over time, that choice creates noise, confusion, and disengagement, even when the underlying intent is reasonable.
A useful question to ask of any alert is simple:
What do we expect someone to do when they receive this?
If the answer is unclear, or if the answer is “nothing, really”, then what you’re designing is not an alert. It’s something else: visibility, reassurance, reporting, or record-keeping. And treating it as an alert will eventually undermine all of those goals.
After many years of working with engineering teams, I’ve seen one pattern show up again and again. Alerts are often used to solve problems they were never designed to address.
It’s common to see alerts sent to managers, directors, or wider stakeholder groups “to keep them in the loop”.
The motivation is understandable. People want to know when there’s been downtime, when customers might be affected, or when something went wrong overnight. But if the recipient is not expected to act, then an alert is the wrong tool.
Repeated exposure to signals that don’t require action teaches people a very specific lesson: that they can be safely ignored.
At first, they skim. Then they mute. Eventually, they unsubscribe or mentally filter the message entirely. At that point, even genuinely important signals struggle to cut through.
The failure here isn’t one of discipline. It’s a mismatch between the signal and the outcome it’s trying to achieve.
High-performing teams tend to make a clear distinction between different kinds of information, based on what they want to happen next.
Some signals exist to prompt immediate action.
Others exist to provide situational awareness.
Others exist to support reflection and learning over time.
Alerts are appropriate for the first category only.
If no action is expected, then interruption is a cost with no corresponding benefit. In those cases, mechanisms like status pages, dashboards, or periodic reports are usually far more effective. They provide visibility without demanding attention, and they build trust rather than fatigue.
Clarity here doesn’t reduce transparency. It improves it.
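As a rough illustration of that distinction, here is a minimal sketch in Python of routing signals by the outcome they’re meant to produce rather than by severity alone. The names and channels are hypothetical, and no particular alerting product is assumed; the point is only that the intended outcome, not the audience, drives where a signal goes.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Outcome(Enum):
    """What we expect to happen when this signal fires."""
    IMMEDIATE_ACTION = auto()  # someone must respond now
    AWARENESS = auto()         # people should see it when they look
    LEARNING = auto()          # it should feed reviews and trend analysis


@dataclass
class Signal:
    name: str
    outcome: Outcome


def route(signal: Signal) -> str:
    """Send each signal to a channel that matches its intended outcome."""
    if signal.outcome is Outcome.IMMEDIATE_ACTION:
        return "page the on-call owner"            # the only case that interrupts anyone
    if signal.outcome is Outcome.AWARENESS:
        return "update the status page or dashboard"
    return "include in the periodic report"


print(route(Signal("checkout-errors-above-threshold", Outcome.IMMEDIATE_ACTION)))
print(route(Signal("deploy-completed", Outcome.AWARENESS)))
```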
When alerts are explicitly designed for action, several things become easier.
Ownership becomes clearer, because someone is expected to respond. Content becomes sharper, because only information relevant to that response is included. And escalation paths become simpler, because the system knows what should happen next if nothing happens.
Crucially, alerts designed this way don’t need to go to many people. They need to go to the right one. That can feel risky at first, especially for teams used to broadcasting widely. Over time, it creates calmer responses, faster decisions, and far less noise.
The system stops asking people to interpret intent, and starts supporting them in acting.
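To make that concrete, the sketch below (again in Python, with hypothetical names and values) shows what an action-oriented alert definition could capture: the expected response, a single accountable owner, and what should happen if no one acknowledges it. The specific fields matter less than the fact that defining the alert forces those questions to be answered up front.

```python
from dataclasses import dataclass


@dataclass
class ActionAlert:
    """An alert defined around the response it should produce, not the audience it should reach."""
    name: str
    expected_action: str           # what the recipient is meant to do
    owner: str                     # one accountable responder, not a distribution list
    escalate_to: str               # who gets engaged if the owner doesn't acknowledge
    escalate_after_minutes: int = 15


checkout_down = ActionAlert(
    name="checkout-unavailable",
    expected_action="Roll back the latest deploy or fail over to the standby",
    owner="payments-on-call",
    escalate_to="payments-team-lead",
)
```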
None of this is about using the “right” alerting product or platform. It’s about being disciplined in how signals are designed.
Before adding a new alert, it’s worth pausing to ask a few questions.
What do we expect someone to do when they receive it?
Who is that someone?
What should happen if they don’t respond?
If those questions can’t be answered clearly, then the signal probably shouldn’t be an alert.
Designing for outcomes doesn’t just reduce noise. It restores trust in the signals that remain.
In the final post of this series, we’ll bring these ideas together and look at alerting as a socio-technical system: one that encodes assumptions about responsibility, confidence, and how people behave under pressure.