Whilst AI has compressed the visible stages of software delivery, the disciplines of requirements, validation, review and release have not disappeared. They have been pushed into automation, runtime and governance. The real risk is not that the lifecycle is dead, but that organisations start acting as if accountability died with it.
There is a now-familiar story about AI and software delivery. It claims that the old lifecycle is collapsing: that requirements blur into implementation, that testing arrives alongside code, and that review becomes lighter, rarer, or quietly machine-assisted. Deployment becomes continuous by default. The neat sequence that once framed software development starts to look more like a single loop of intent, generation, execution, and observation.
There is some truth to this. Software is easier to produce than it was even a few years ago. Internal tools that would once have sat in a backlog can now be prototyped in days. Workflows that used to require a careful sequence of engineering effort, procurement, and compromise can be stitched together by a small team with a model, some APIs and a decent prompt. The first version appears sooner, the demo happens earlier, and the distance between “we should build this” and “here it is” has collapsed.
From the keyboard, it can look as though the old software development lifecycle is fading from view. But there is a difference between a lifecycle becoming less visible and a lifecycle disappearing.
That difference matters because the software development lifecycle was never just a set of developer rituals. It was also one of the ways organisations made software governable. It spread understanding, enforced accountability, and created evidence. Although it sometimes slowed things down in all the wrong ways, it also carried more than just “delay.” It carried challenge, traceability, and trust.
That is the mistake in much of the current discourse. It assumes that because some visible stages of the software lifecycle are compressing, the responsibilities those stages carry are disappearing with them. They are not. They are being hidden, redistributed, automated, deferred, or pushed downstream into production. This is where the more serious conversation needs to begin.
Inside an editor, a lot of the old boundaries do look softer than they used to. An engineer no longer has to move through a clean sequence of design, implementation, testing and review in the same way. A model can generate code, propose tests, explain a failure, revise logic and prepare changes faster than many teams once moved through a single review cycle.
That changes how software feels to build; but building software was never the whole problem. The more consequential question has always been whether the organisation can understand, validate, operate, and recover that software once it matters.
So whilst a generated test may reduce the visible work of testing, it does not remove the need for validation. Similarly, a disappearing pull request may reduce the visible work of review, but it does not remove the need for challenge. And though continuous deployment may reduce the visible work of release coordination, it does not remove the need for change control, rollback confidence, or auditability.
The visible work changes first – the underlying obligations remain. This is why the rhetoric of collapse feels both exciting and incomplete. It captures the compression of activity without asking where the burden of assurance has gone.
And that burden always goes somewhere: into automation, policy, runtime checks, observability, incident response, or into the minds of the few engineers still able to explain what the system is doing.
That is not the death of the software lifecycle, but rather the relocation of it.
This becomes more obvious the moment you stop looking at software solely from the perspective of delivery speed.
Many organisations are now telling two stories at once. One is a story of acceleration. Engineering teams are using AI to deliver faster with less friction, freeing up internal capability and giving fewer reasons to wait, to buy, or to accept old engineering constraints.
The other is a story of control. Better governance. Stronger information security. Greater resilience. More evidence. More confidence in how systems behave, how changes are made, and how risk is managed.
Both stories make sense independently, but together, they create tension.
For years, boards and senior leadership teams have pushed organisations to build stronger control environments around software and data. Not because they love process, but because weak controls turned out to be expensive. Security failures, poor change governance, weak ownership, patchy audit trails, and fragile incident response are not abstract concerns. They damage trust, raise costs, and very quickly become strategic problems.
That is why standards such as ISO/IEC 27001 exist. ISO describes it as an information security management standard built around risk management and a holistic approach spanning people, policy and technology. SOC 2 is likewise about the design and operating effectiveness of controls relevant to security, availability, processing integrity, confidentiality and privacy.
So there is a real contradiction sitting underneath some of the louder AI claims. If organisations still believe that strong controls, clear accountability and governed change matter, at what point does the “go faster, remove friction, let agents handle more of the flow” posture begin to cut across the very policies they have spent years putting in place?
That is not an anti-AI question. It is a governance question, and it does not disappear because the code was generated by AI rather than being handwritten. If anything, the rise of AI should make boards ask harder questions, not softer ones. If code, tests, deployment flows and release decisions are increasingly assisted by models, then leaders need clearer answers on how those systems are being controlled, evidenced and monitored.
What now counts as review?
What counts as approval?
What is the escalation path when an AI-assisted workflow touches something sensitive?
Where is the audit trail?
Who can explain the system to a regulator, customer, or incident team when something goes wrong?
These are not the questions of a bygone lifecycle. They are the same questions the old lifecycle was trying, imperfectly, to answer.
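None of those questions requires exotic tooling to become answerable. As a deliberately simplified sketch, and with the caveat that the approvals store, the `Change` fields and the `release_gate` function below are hypothetical rather than any particular platform's API, a release gate that refuses to ship a change without a recorded human approval, and that writes an audit record either way, shows what it looks like when “what counts as approval” and “where is the audit trail” are encoded somewhere inspectable:

```python
"""Illustrative release gate: require a recorded human approval and keep an audit trail.

A minimal sketch only. The file-based approval store, the Change fields and the
gate logic are assumptions for illustration, not a reference to any real CI/CD API.
"""
import json
import time
from dataclasses import dataclass, asdict
from pathlib import Path

APPROVALS = Path("approvals.json")   # hypothetical store of human sign-offs
AUDIT_LOG = Path("audit_log.jsonl")  # append-only evidence of every decision


@dataclass
class Change:
    change_id: str           # e.g. commit SHA or ticket reference
    description: str         # what the change is supposed to do
    generated_by_ai: bool    # provenance matters for later review
    touches_sensitive: bool  # e.g. auth, billing, personal data


def release_gate(change: Change) -> bool:
    """Allow the release only if a named human has approved the change.

    Sensitive or AI-generated changes always require approval; the decision
    and its reason are appended to an audit log either way.
    """
    approvals = json.loads(APPROVALS.read_text()) if APPROVALS.exists() else {}
    approver = approvals.get(change.change_id)  # e.g. {"by": "jane", "at": "..."}

    needs_human = change.touches_sensitive or change.generated_by_ai
    allowed = (approver is not None) or not needs_human

    record = {
        "ts": time.time(),
        "change": asdict(change),
        "approved_by": approver,
        "allowed": allowed,
        "reason": "human approval on file" if approver else
                  ("low-risk change" if allowed else "missing human approval"),
    }
    with AUDIT_LOG.open("a") as fh:
        fh.write(json.dumps(record) + "\n")

    return allowed
```

The mechanism matters far less than the property it illustrates: the decision, the person standing behind it, and the evidence all exist somewhere outside any one engineer's memory.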
A lot of AI delivery rhetoric quietly assumes that software can be validated the same way modern consumer products often are: ship quickly, observe carefully, and recover fast.
Sometimes that is true. Other times it is already too late. There are domains where “we’ll catch it in production” is not a serious operating model. Autonomous driving, medical devices, and other safety-critical systems do not merely ask whether software works most of the time. They ask whether organisations can demonstrate safety, traceability, controlled change and clear accountability before that software is allowed to affect the real world.
That is why automotive functional safety has its own standards architecture in ISO 26262, covering management of functional safety, hazard analysis, software-level development and production and operation considerations. The same logic appears in medical technology. The FDA’s Total Product Life Cycle approach explicitly frames safety and effectiveness as concerns that run from design and development through real-world use of a device.
That is the part of the current debate that often feels underplayed. It is one thing to argue that some internal tools, low-risk workflows, or routine product changes can be observed and corrected quickly in production. It is another to imply that this logic scales cleanly into software whose failure can affect safety, health, or critical infrastructure.
In those environments, runtime monitoring remains essential, but it is a backstop, not a substitute for prior assurance. Observability can help teams detect drift, surface anomalies and shorten recovery, but it cannot retroactively make an unsafe release a responsible one.
This is where the accountability question becomes much harder to avoid. If an organisation knows a system is safety-critical, knows that AI-assisted delivery can make provenance, review and intent less visible, and still removes meaningful human oversight without building an equivalent control regime, then the problem is no longer just technical. It becomes a question of governance judgment.
That is why the more useful question is not “how much human review can we eliminate?” but “where does human accountability still have to remain explicit?” Not because humans are infallible, but because responsibility does not disappear when a pull request is automated. If anything, it becomes more important to know who is prepared to stand behind the system once the visible checkpoints are gone.
The standards world is already moving in that direction. ISO/IEC 42001 is now positioned by ISO as the first AI management system standard, intended to help organisations providing or using AI products and services manage AI responsibly. NIST’s AI Risk Management Framework is similarly aimed at helping organisations manage AI risks and support trustworthy development and use. And the EU AI Act takes an explicitly risk-based approach to AI systems rather than assuming all AI-enabled development can be governed in the same way.
That is a better guide than most of the louder rhetoric. In this environment, not every system is allowed to fail safely, not every organisation is free to treat production as the primary place where learning occurs, and not every collapse of visible stages is evidence of maturity. Sometimes it is just the moment accountability becomes harder to see.
Part of the reason this conversation is so muddled is that people often remember the old lifecycle only in terms of delay. Requirements, reviews, testing, the release process, and coordination were all perceived to slow things down. That memory is not wrong. Plenty of software delivery did become bloated with needless ceremony, but some of that apparent inefficiency was also where understanding came from.
The slower movement from idea to production used to do more than produce code. It forced repeated contact with the system. Whilst people reviewed the change, they also absorbed what it was supposed to do. They challenged assumptions, surfaced dependencies, and turned local knowledge into team knowledge. This sometimes happened imperfectly or accidentally, but often effectively enough that when a service later misbehaved, more than one person had a usable mental model of it.
This matters because one of the less discussed effects of AI is not just that it accelerates generation. It can also compress the period in which shared understanding would once have formed. Software can now arrive in the organisation before the organisation has really learned it. That is a subtle shift, but an important one.
A system can be functional before it is socially known. It can exist in production before it exists in the wider understanding of the team responsible for it. It can become operationally important before it becomes collectively legible. When that happens, the organisation has not actually removed complexity; it has deferred it. That deferred complexity has a habit of reappearing during incidents.
Production has always been where software is tested against reality, but in the AI era it also becomes the place where hidden responsibilities reappear.
A system can appear to have skipped design discipline right up until the moment nobody can explain its behaviour. It can appear to have skipped testing right up until production becomes the place where edge cases are discovered. It can appear to have skipped review right up until an incident forces a team to reconstruct what changed, why it changed, and which assumptions no one challenged beforehand. And it can appear to have skipped release governance right up until rollback becomes the only thing standing between a bad change and a widening outage.
In many classes of software, hidden responsibilities return in production. In safety-critical systems, that is exactly what organisations are supposed to prevent. At that point, the lifecycle has not vanished; rather it has returned in less forgiving form.
This is why the current moment matters so much for reliability and operations teams. If more of the old discipline is being hidden in tooling or deferred into runtime, then production is carrying more responsibility than it used to. Runtime feedback is no longer just a way to observe the system after the interesting work is done. It’s increasingly becoming part of how systems are validated, understood and contained.
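To make that concrete with one hedged example (the health endpoint, the retry budget and the `rollback.sh` command below are illustrative assumptions, not a prescription for any specific platform), post-deploy verification that probes a freshly released service and falls back automatically when it never comes good is one of the shapes this relocated responsibility can take:

```python
"""Illustrative post-deploy verification: treat runtime checks as part of validation.

A simplified sketch. The health endpoint, retry budget and rollback command are
assumptions for illustration, not a recommendation for any specific platform.
"""
import subprocess
import time
import urllib.request


def healthy(url: str, timeout: float = 3.0) -> bool:
    """Return True if the service answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False


def verify_or_roll_back(url: str, attempts: int = 5, interval: float = 10.0) -> bool:
    """Probe the freshly deployed service; roll back if it never comes good.

    The rollback command is a placeholder for whatever controlled, rehearsed
    mechanism the team actually trusts (previous image, previous revision, etc.).
    """
    for _ in range(attempts):
        if healthy(url):
            return True
        time.sleep(interval)

    # Containment: a bad change should not be left to widen into an outage.
    subprocess.run(["./rollback.sh", "--to", "previous-release"], check=True)
    return False


if __name__ == "__main__":
    verify_or_roll_back("https://example.internal/healthz")
```

The interesting design choice is not the probe itself but the insistence that a rehearsed, trusted rollback path exists before the change ships, which is exactly the release-governance obligation the visible stages used to carry.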
That raises the stakes for monitoring and observability. Because when the visible path to production gets shorter, runtime truth becomes one of the few remaining places where organisations can still recover clarity quickly enough to act well. Good monitoring does not just tell you that something is wrong; it also helps preserve legibility in an environment where systems are changing faster than people can casually narrate them to one another. It provides a version of reality outside the assumptions, memories and interpretations of individual engineers. And it becomes a source of continuity when local context is incomplete.
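As a final sketch, again under stated assumptions (the probed URL, the `recent_changes.json` export and the printed alert are invented for illustration), the difference between an alert that merely says something is wrong and one that preserves legibility is often just the context attached to it:

```python
"""Illustrative synthetic check that keeps its alert legible.

A sketch under assumptions: the probed URL, the recent-changes file and the
printed alert are placeholders, not any monitoring vendor's actual API.
"""
import json
import urllib.request
from datetime import datetime, timezone
from pathlib import Path

RECENT_CHANGES = Path("recent_changes.json")  # e.g. exported from the deploy pipeline


def probe(url: str, timeout: float = 5.0) -> dict:
    """Run one synthetic check and return an observation, not just a boolean."""
    started = datetime.now(timezone.utc).isoformat()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return {"url": url, "at": started, "ok": resp.status == 200, "status": resp.status}
    except Exception as exc:
        return {"url": url, "at": started, "ok": False, "error": repr(exc)}


def build_alert(observation: dict) -> dict:
    """Attach recent change context so responders start from a shared view of reality."""
    changes = json.loads(RECENT_CHANGES.read_text()) if RECENT_CHANGES.exists() else []
    return {
        "summary": f"Synthetic check failed for {observation['url']}",
        "observation": observation,
        "recent_changes": changes[-5:],  # what changed, when, and who stood behind it
    }


if __name__ == "__main__":
    result = probe("https://example.internal/checkout")
    if not result["ok"]:
        print(json.dumps(build_alert(result), indent=2))  # placeholder for a real notification
```

Whatever the tooling, the principle is the same: the alert arrives carrying the recent changes, so responders begin from a shared version of reality rather than reconstructing it from memory.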
That is why the current conversation should not be framed as “the old stages are dead, and now monitoring is all that matters.”
The better framing is that more of the old lifecycle’s burden is showing up in production, and the systems that make production intelligible therefore become more important than before.
There is another problem with the way this debate is being framed. A lot of the loudest claims are coming from frontier environments: large tech businesses with unusually advanced engineering teams, vendors with something to sell, and a familiar class of very online operator who treats every directional shift as an immediate operating model for everyone else.
That does not make those claims false, but it does make them dangerous when they travel uncritically into ordinary organisations. There is a difference between what a highly mature engineering organisation can compress safely and what a typical business can absorb.
A company with strong observability, disciplined incident practices, robust internal platforms, experienced staff, clear service ownership and well-understood rollback mechanisms can remove visible stages more safely than a business still struggling with basic engineering consistency. What looks like liberation in one environment may be recklessness in another.
This is where AI FOMO starts to matter. The real risk is not that every organisation genuinely collapses its lifecycle, but that many begin to imitate the language and posture of collapse before they have built the systems needed to survive it.
For instance, they may end up weakening human review without replacing it with trustworthy validation; they may increase the rate of change without improving their ability to detect and contain failure; or they may adopt agentic workflows without clear control boundaries.
They speak as if software stages are obsolete even while their security teams, auditors and on-call engineers are still living in the consequences of weak ownership and incomplete visibility. In other words, they perform maturity they have not yet earned. That is a much more believable risk than some grand, universal collapse of software development as we knew it. And most importantly, it is one that senior engineers should be far more willing to name.
The most useful response to this moment is not to defend every old stage of the lifecycle, but nor is it to celebrate their disappearance.
It is to ask harder questions about what now carries the assurance they once provided. If generated tests reduce the visible work of testing, what now validates the validation? If pull requests disappear, where does challenge happen? If deployment is continuous by default, where do change control, rollback confidence and auditability live? And if software arrives before shared understanding forms, how is that understanding rebuilt?
These questions are not obstacles to progress; they are how serious organisations stop progress from becoming unpriced risk.
This is where senior engineers, SRE leaders and strategic executives can add the most value right now. Not by taking sides in a shallow debate about whether the lifecycle is dead, but by insisting that whatever becomes less visible still has to become trustworthy.
AI has changed many things: the cost and speed of software production, how quickly systems can move from idea to implementation, and how much visible effort sits between a problem and a running service.
What it has not changed is the need for software to be understood, controlled and trusted. Nor has it changed the need for systems to be monitored from outside the blast radius, or the need for organisations to know what changed, why it changed, who approved it, and how to recover if it goes wrong. And most importantly, it has not changed the expectation that systems handling important data or customer journeys remain secure, available and explainable.
Given this, the software development lifecycle is clearly not over. If anything, it is just becoming harder to see. This may be the most important insight that teams can take away right now. Organisations need to avoid mistaking invisibility for absence. Just because they stop seeing the responsibilities that used to sit inside obvious stages does not mean that the underlying need for validation, control, understanding, and recovery has somehow been solved. It has not. It has just moved.
The companies that handle this transition well will not be the ones that merely remove the most visible steps. They will be the ones that can explain where those responsibilities went, how they are now being discharged, and why the organisation can still trust the outcome.