Why strong reviews, accountability, and monitoring matter more in an AI-assisted world
Artificial intelligence has become the latest fault line in software development. For some teams, it’s an obvious productivity multiplier. For others, it’s viewed with suspicion: a source of low-quality code, unreviewable pull requests, and latent production risk.
One concern we hear frequently goes something like this:
“AI will flood our codebase with so much low-quality output that meaningful review becomes impossible.”
It’s an understandable fear, and also the wrong conclusion. The reality is far simpler, and a little more uncomfortable: if AI makes your engineering process unmanageable, the process was already fragile.
Yes, AI can generate a lot of code quickly. But volume has never been a valid excuse for bypassing review.
A pull request containing dozens or hundreds of files is unreviewable whether it was written by:
- a single engineer
- an entire team
- or an AI
Healthy teams already know this. That’s why effective PR processes emphasise:
- small, focused changes
- clear descriptions of intent
- diffs a reviewer can actually reason about
AI doesn’t invalidate these principles. If anything, it reinforces them.
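To make this concrete, here is a minimal sketch of the kind of pre-merge gate a team could add so that volume alone can never slip past review. It is illustrative rather than StatusCake tooling: the thresholds, the origin/main base branch, and the reliance on git’s --numstat output are all assumptions to adapt to your own setup.

```
#!/usr/bin/env python3
"""Hypothetical pre-merge gate: fail the build when a PR grows too large to review.

A sketch, not StatusCake's actual tooling. Assumes it runs in CI on a checked-out
branch and that the base branch is available as 'origin/main'.
"""
import subprocess
import sys

MAX_CHANGED_FILES = 30   # arbitrary thresholds; tune for your team
MAX_CHANGED_LINES = 800

def diff_stats(base: str = "origin/main") -> tuple[int, int]:
    """Return (files_changed, lines_changed) between the base branch and HEAD."""
    out = subprocess.check_output(
        ["git", "diff", "--numstat", f"{base}...HEAD"], text=True
    )
    files, lines = 0, 0
    for row in out.splitlines():
        added, removed, _path = row.split("\t", 2)
        files += 1
        # binary files report '-' for added/removed; count them as a file only
        if added != "-":
            lines += int(added) + int(removed)
    return files, lines

if __name__ == "__main__":
    files, lines = diff_stats()
    if files > MAX_CHANGED_FILES or lines > MAX_CHANGED_LINES:
        print(f"PR too large to review meaningfully: {files} files, {lines} lines changed.")
        print("Split it into smaller, focused changes before requesting review.")
        sys.exit(1)
    print(f"Diff size OK: {files} files, {lines} lines changed.")
```

Run in CI, a check like this turns “keep changes reviewable” from a cultural norm into a hard constraint, whoever or whatever wrote the code.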
Used well, AI doesn’t just change how code is written. It changes how change itself is proposed.
The teams seeing the most benefit aren’t asking AI for complete solutions. They’re using it to explore options, tighten intent, and reduce noise before a change ever reaches review. The result isn’t that they’re generating more code. Instead it’s clearer diffs, better explanations, and fewer surprises downstream.
The point isn’t how you prompt a model. It’s that AI doesn’t remove the need for discipline; it makes the absence of it painfully obvious.
Another common objection is that AI produces poor-quality code.
Sometimes it does. Sometimes humans do too.
But if low-quality code is making its way into production, the real issue isn’t how the code was generated — it’s that the review process isn’t doing its job.
Pull requests should already be reviewed for:
- correctness and test coverage
- security implications
- readability and maintainability
- operational impact
Those expectations don’t change because AI was involved. The bar stays exactly where it was.
If reviews are rushed, superficial, or treated as a checkbox exercise, AI simply accelerates the consequences of that behaviour. It doesn’t create them.
A more subtle concern is that engineers may submit AI-generated code they don’t fully understand.
But this isn’t a new problem.
For years, developers have copied code from internal wikis, vendor documentation, blog posts, or sites like Stack Overflow without fully internalising it. The risk was never the source. It was shipping changes without understanding their implications.
The solution then wasn’t to ban documentation, search engines, or shared knowledge. It was to enforce accountability.
If you submit a change and can’t explain it, it shouldn’t ship.
AI doesn’t remove responsibility. It makes it impossible to hide from it.
AI can suggest outdated or vulnerable dependencies. So can humans.
That’s why dependency changes should always be treated as higher-risk events:
- called out explicitly in the change, not buried in a larger diff
- scanned for known vulnerabilities
- and reviewed by someone who understands why the dependency is needed
Good teams already pair human review with automated scanning to catch issues early. This isn’t an AI problem; it’s a governance problem that predates modern tooling.
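As one possible shape for that automation, the sketch below checks pinned Python dependencies against the public OSV.dev vulnerability database. The requirements.txt filename, the strict name==version pinning, and the PyPI-only scope are assumptions; established tools such as pip-audit or Dependabot cover the same ground more thoroughly.

```
#!/usr/bin/env python3
"""Sketch of an automated dependency check to pair with human review.

Queries the public OSV.dev vulnerability database for each pinned package in a
requirements.txt. Illustrative only: it assumes strictly pinned 'name==version'
lines and ignores hashes, extras, and other ecosystems.
"""
import json
import sys
import urllib.request

OSV_URL = "https://api.osv.dev/v1/query"

def known_vulns(name: str, version: str) -> list[str]:
    """Return the OSV IDs of known vulnerabilities for a PyPI package version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": "PyPI"},
    }).encode()
    req = urllib.request.Request(
        OSV_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    return [v["id"] for v in result.get("vulns", [])]

if __name__ == "__main__":
    failed = False
    for line in open("requirements.txt"):
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        ids = known_vulns(name, version)
        if ids:
            failed = True
            print(f"{name}=={version}: known vulnerabilities {', '.join(ids)}")
    sys.exit(1 if failed else 0)
```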
At StatusCake, we monitor systems at scale, which gives us a clear view into what actually causes downtime when changes hit production, regardless of who or what wrote the code.
And what we see, consistently, is that outages are rarely about tools. They’re about process gaps. Changes that bypass review, dependencies that aren’t scrutinised, or assumptions that aren’t validated until users feel the impact.
Downtime doesn’t care whether code was written by a human or an AI. Production systems only respond to what actually changed, not how confident someone felt when merging it.
Monitoring exists for a simple reason. Every system eventually behaves in ways its creators didn’t anticipate.
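For illustration, the essence of an uptime check is small. The sketch below performs a single HTTP health check with a placeholder URL, timeout, and latency budget; a real monitoring setup (StatusCake or otherwise) adds scheduling, multiple locations, retries, and alerting on top.

```
#!/usr/bin/env python3
"""Minimal sketch of the kind of HTTP check an uptime monitor runs on a schedule.

The URL, timeout, and latency budget are placeholders, not real configuration.
"""
import time
import urllib.error
import urllib.request

URL = "https://example.com/health"   # hypothetical health endpoint
TIMEOUT_SECONDS = 10
MAX_LATENCY_SECONDS = 2.0

def check(url: str) -> tuple[bool, str]:
    """Return (healthy, detail) for a single HTTP check."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=TIMEOUT_SECONDS) as resp:
            latency = time.monotonic() - start
            if resp.status != 200:
                return False, f"unexpected status {resp.status}"
            if latency > MAX_LATENCY_SECONDS:
                return False, f"slow response: {latency:.2f}s"
            return True, f"OK in {latency:.2f}s"
    except (urllib.error.URLError, TimeoutError) as exc:
        return False, f"request failed: {exc}"

if __name__ == "__main__":
    healthy, detail = check(URL)
    print(("UP " if healthy else "DOWN ") + detail)
```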
AI doesn’t introduce a new category of failure. It increases speed. And speed without discipline has always been risky.
AI acts as an amplifier. It magnifies whatever already exists in your engineering organisation.
If teams are misaligned, AI helps them build the wrong thing faster.
If engineers lack context, AI produces solutions that are technically elegant but operationally irrelevant.
If review standards are weak, AI accelerates the accumulation of hidden risk.
Teams with strong engineering fundamentals (clear ownership, disciplined reviews, and meaningful observability) tend to benefit most from AI. Teams without those foundations don’t fail because of AI. They fail because AI exposes what was already missing.
The question isn’t whether AI should be embraced.
The real question is:
Are your processes strong enough to handle faster change?
At StatusCake, this isn’t just a theoretical position. We apply the same expectations internally that we’re arguing for here.
When AI is used in our own work, it doesn’t lower the bar. Its use is expected to be declared, reviewed, and held to the same standards of understanding, quality, and accountability as any other change.
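Purely as an illustration of what “declared” can mean in practice, the sketch below fails a build when a pull request description lacks an AI-assistance declaration. The “AI-Assisted:” convention and the PR_BODY environment variable are hypothetical, not a StatusCake or platform standard.

```
#!/usr/bin/env python3
"""Illustrative check that a pull request declares whether AI assistance was used.

Assumes a team convention of an 'AI-Assisted: yes/no' line in the PR description
and that CI exposes the description via a PR_BODY environment variable. Both are
hypothetical conventions.
"""
import os
import re
import sys

body = os.environ.get("PR_BODY", "")
declaration = re.search(r"^AI-Assisted:\s*(yes|no)\b", body,
                        re.IGNORECASE | re.MULTILINE)

if not declaration:
    print("Missing 'AI-Assisted: yes/no' declaration in the PR description.")
    sys.exit(1)

print(f"AI assistance declared: {declaration.group(1).lower()}")
```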
The bar for production systems shouldn’t move just because the tools do, and we hold ourselves to that standard as well.
For teams interested in how we approach this in practice, we’ve published our AI usage principles here:
→ Read StatusCake’s AI & LLM Usage Policy
And if you’re interested in a deeper discussion of AI as an amplifier in engineering organisations, this is a theme we’ve explored in more depth in a recent LeadDev Berlin talk:
→ Watch the LeadDev Berlin talk