Why strong reviews, accountability, and monitoring matter more in an AI-assisted world
Artificial intelligence has become the latest fault line in software development. For some teams, it’s an obvious productivity multiplier. For others, it’s a source of suspicion: low-quality code, unreviewable pull requests, and latent production risk.
One concern we hear frequently goes something like this:
“AI will flood our codebase with so much low-quality output that meaningful review becomes impossible.”
It’s an understandable fear, and also the wrong conclusion. The reality is simpler, and a little more uncomfortable: if AI makes your engineering process unmanageable, the process was already fragile.
Yes, AI can generate a lot of code quickly. But volume has never been a valid excuse for bypassing review.
A pull request containing dozens or hundreds of files is unreviewable whether it was written by a human or an AI.
Healthy teams already know this. That’s why effective PR processes emphasise small, focused changes, clear statements of intent, and diffs a reviewer can actually reason about.
AI doesn’t invalidate these principles. If anything, it reinforces them.
Used well, AI doesn’t just change how code is written. It changes how change itself is proposed.
The teams seeing the most benefit aren’t asking AI for complete solutions. They’re using it to explore options, tighten intent, and reduce noise before a change ever reaches review. The result isn’t that they’re generating more code. Instead it’s clearer diffs, better explanations, and fewer surprises downstream.
The point isn’t how you prompt a model. It’s that AI doesn’t remove the need for discipline; it makes the absence of it painfully obvious.
Another common objection is that AI produces poor-quality code.
Sometimes it does. Sometimes humans do too.
But if low-quality code is making its way into production, the real issue isn’t how the code was generated — it’s that the review process isn’t doing its job.
Pull requests should already be reviewed for correctness, security, maintainability, and alignment with the stated intent of the change.
Those expectations don’t change because AI was involved. The bar stays exactly where it was.
If reviews are rushed, superficial, or treated as a checkbox exercise, AI simply accelerates the consequences of that behaviour. It doesn’t create them.
A more subtle concern is that engineers may submit AI-generated code they don’t fully understand.
But this isn’t a new problem.
For years, developers have copied code from internal wikis, vendor documentation, blog posts, or sites like Stack Overflow without fully internalising it. The risk was never the source. It was shipping changes without understanding their implications.
The solution then wasn’t to ban documentation, search engines, or shared knowledge. It was to enforce accountability.
If you submit a change and can’t explain it, it shouldn’t ship.
AI doesn’t remove responsibility. It makes it impossible to hide from it.
AI can suggest outdated or vulnerable dependencies. So can humans.
That’s why dependency changes should always be treated as higher-risk events: reviewed deliberately, scanned for known vulnerabilities, and checked for maintenance and licence health before they’re merged.
Good teams already pair human review with automated scanning to catch issues early. This isn’t an AI problem; it’s a governance problem that predates modern tooling.
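As an illustrative sketch of what “pairing human review with automated scanning” can look like, the CI workflow below fails a pull request when a dependency change introduces a known-vulnerable package. It assumes a Node.js project using GitHub Actions; the workflow name and severity threshold are example choices, and the scanner would be whatever fits your stack.

```yaml
# Example CI workflow (hypothetical): run on every pull request so
# dependency changes are scanned before a human approves the merge.
name: dependency-audit
on: pull_request

jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      # Install strictly from the lockfile, then fail the build on any
      # known vulnerability of "high" severity or above.
      - run: npm ci
      - run: npm audit --audit-level=high
```

The point isn’t the specific tool; it’s that the scan runs automatically on the change itself, so a reviewer never has to remember to check.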
At StatusCake, we monitor systems at scale, which gives us a clear view into what actually causes downtime when changes hit production, regardless of who or what wrote the code.
And what we see, consistently, is that outages are rarely about tools. They’re about process gaps. Changes that bypass review, dependencies that aren’t scrutinised, or assumptions that aren’t validated until users feel the impact.
Downtime doesn’t care whether code was written by a human or an AI. Production systems only respond to what actually changed, not how confident someone felt when merging it.
Monitoring exists for a simple reason. Every system eventually behaves in ways its creators didn’t anticipate.
AI doesn’t introduce a new category of failure. It increases speed. And speed without discipline has always been risky.
AI acts as an amplifier. It magnifies whatever already exists in your engineering organisation.
If teams are misaligned, AI helps them build the wrong thing faster.
If engineers lack context, AI produces solutions that are technically elegant but operationally irrelevant.
If review standards are weak, AI accelerates the accumulation of hidden risk.
Teams with strong engineering fundamentals (clear ownership, disciplined reviews, and meaningful observability) tend to benefit most from AI. Teams without those foundations don’t fail because of AI. They fail because AI exposes what was already missing.
The question isn’t whether AI should be embraced.
The real question is:
Are your processes strong enough to handle faster change?
At StatusCake, this isn’t just a theoretical position. We apply the same expectations internally that we’re arguing for here.
When AI is used in our own work, it doesn’t lower the bar. Its use is expected to be declared, reviewed, and held to the same standards of understanding, quality, and accountability as any other change.
The bar for production systems shouldn’t move just because the tools do, and we hold ourselves to that standard as well.
For teams interested in how we approach this in practice, we’ve published our AI usage principles here:
→ Read StatusCake’s AI & LLM Usage Policy
And if you’re interested in a deeper discussion of AI as an amplifier in engineering organisations, this is a theme we’ve explored in more depth in a recent LeadDev Berlin talk:
→ Watch the LeadDev Berlin talk