In the previous post, we looked at what happens after detection: when incidents stop being purely technical problems and become human ones, with cognitive load as the real constraint.
This post assumes that context.
The question here is simpler and more practical. What actually helps teams think clearly and act well once things are already going wrong?
One answer, used quietly but consistently by high-performing teams, is the checklist.
Given that engineers operate under pressure, uncertainty, and incomplete information during incidents, checklists serve a very specific role.
They are decision support, not documentation.
A good checklist:
- keeps prompts short and organised around the moments of an incident
- uses plain language anyone on the call can follow
- encourages thinking rather than rote execution
- evolves after real incidents and near-misses
A bad checklist:
- tries to be exhaustive
- increases scanning time and encourages skipping under pressure
- reads like documentation rather than decision support
The difference isn’t intent. It’s design.
In reliability-focused teams, operational tools like checklists are treated as aids to decision-making under uncertainty, not as exhaustive instructions.
Not all checklists reduce cognitive load. Some quietly increase it. In practice, the usefulness of a checklist comes down to a few constraints.
If a checklist has more than roughly 15–20 items for a single phase, it’s probably doing too much. Under pressure, long lists increase scanning time and encourage skipping. Breaking prompts into short, situational sections keeps them usable.
Organising prompts by moment (the first few minutes, active mitigation, standing down) mirrors how incidents actually unfold. Engineers shouldn’t have to translate process into reality while things are breaking.
Effective checklists use plain language and avoid internal jargon or shorthand. Prompts should be understandable even by someone who didn’t build the system. Questions tend to work better than commands because they encourage thinking rather than rote execution.
A checklist that never changes is a warning sign. The most useful ones evolve in response to real incidents, near-misses, and moments of hesitation.
The goal isn’t perfect coverage. It’s to provide clarity when clarity is hardest to come by.
What follows is one example of how teams reduce cognitive load during incidents. It’s not a universal template, but a starting point to adapt to your own systems and failure modes.
The structure reflects how incidents actually feel in practice.
Before jumping to fixes, teams need to orient themselves.
Helpful prompts include:
- What do we actually know right now, and how do we know it?
- Who is affected, and how badly?
- What changed most recently?
- Is this one incident or several?
These prompts exist to prevent teams from solving the wrong problem first.
Once oriented, the focus shifts to limiting damage. Useful prompts here include:
- What is the smallest action that reduces user impact?
- What are we deliberately choosing not to do yet?
- Do we need more people on this, or fewer?
This phase is about resisting the urge to “do more” when clarity is lacking.
Many incidents last longer than they need to because teams aren’t sure when it’s safe to stop.
Before standing down, prompts like these help restore confidence:
- Has the original signal cleared, and can we see that from the user’s perspective?
- Do independent, outside-in checks agree that service is restored?
- What are we watching, and for how long, before we call it done?
This phase exists to avoid both premature confidence and unnecessary caution.
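To make the structure above concrete, here is a minimal sketch (in Python, purely as an illustration) of how a team might keep phase-based prompts as data in version control, so they stay short, organised by moment, and easy to evolve after each incident. The phase names and prompt wording are assumptions for illustration, not a prescribed format.

```python
# A minimal sketch of a phase-based incident checklist kept as data,
# so it lives in version control and can be revised after each incident.
# Phase names and prompt wording are illustrative assumptions.
CHECKLIST = {
    "first_minutes": [
        "What do we actually know right now, and how do we know it?",
        "Who is affected, and how badly?",
        "What changed most recently?",
    ],
    "active_mitigation": [
        "What is the smallest action that reduces user impact?",
        "What are we deliberately choosing not to do yet?",
    ],
    "standing_down": [
        "Has the original signal cleared from the user's perspective?",
        "Do independent, outside-in checks agree that service is restored?",
    ],
}

def prompts_for(phase: str) -> list[str]:
    """Return the prompts for one phase, e.g. to read out on an incident call."""
    return CHECKLIST.get(phase, [])

if __name__ == "__main__":
    for question in prompts_for("first_minutes"):
        print(f"- {question}")
```

Keeping the prompts this close to the codebase is one way to make the later point about iteration practical: when a retrospective surfaces a missing question, changing the checklist is a one-line review, not a documentation project.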
When internal dashboards are noisy, delayed, or contradictory, independent signals become especially valuable.
External monitoring helps answer a few simple but critical questions:
- Is the service reachable from outside our own network?
- Are users still affected even though internal graphs look healthy?
- Did the recovery actually reach the people using the service?
An outside-in signal provides a shared point of reference when internal views are inconclusive. That common ground helps teams align decisions and regain confidence.
This is where tools like StatusCake are most useful: not as a driver of incident response, but as reliable confirmation when it matters most.
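As a rough sketch of what an outside-in signal means in practice, the short script below (standard-library Python; the URL and timeout are placeholders) checks a public endpoint from a network outside your own infrastructure. A monitoring service does this continuously from multiple locations; the point here is only to illustrate the “confirm from the user’s side” idea.

```python
# Minimal outside-in probe: run it from a network *outside* your own
# infrastructure so it sees what users see, not what dashboards see.
# The URL and timeout are placeholders to adapt to your own service.
import time
import urllib.error
import urllib.request

URL = "https://example.com/health"  # public, user-facing endpoint (placeholder)
TIMEOUT_SECONDS = 10

def probe(url: str) -> None:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=TIMEOUT_SECONDS) as response:
            elapsed = time.monotonic() - start
            print(f"{url} -> HTTP {response.status} in {elapsed:.2f}s")
    except urllib.error.URLError as exc:
        elapsed = time.monotonic() - start
        print(f"{url} -> unreachable after {elapsed:.2f}s ({exc.reason})")

if __name__ == "__main__":
    probe(URL)
```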
A good checklist is never finished. It evolves because:
- real incidents surface prompts that were missing
- near-misses reveal steps nobody was sure about
- moments of hesitation show where the wording isn’t clear enough
In reliability engineering, this kind of iteration is expected. Operational practices improve by learning from real incidents, not by trying to predict every failure in advance. And every “we weren’t sure what to do next” is feedback about the system, not the individual.
That’s how checklists become leverage; they’re small refinements that compound across future incidents.
Checklists won’t prevent every incident. That’s not their job. What they do is:
- reduce cognitive load when pressure is highest
- keep attention on the right question at the right moment
- give the team a shared reference when signals are unclear or contradictory
They help teams respond with confidence rather than panic.
Checklists are not the goal. They’re a tool. They exist because modern systems are complex, change is fast, and humans operate under pressure.
If teams need checklists to work safely during incidents, the next question becomes:
What kind of systems, signals, and incentives reduce the need for those checklists in the first place?
That’s what we’ll explore next. We’ll zoom out from incident response to designing systems that create confidence by default.
If you’d like to explore the ideas behind checklists, human factors, and reliability engineering in more depth, the following books are excellent starting points:
Across very different domains, these works reinforce the same idea: systems should be designed to support humans, especially when conditions are difficult.