Every major wave of information technology follows the same arc. The subject matter changes, the terminology evolves, and the vendors rebrand the promise, but the underlying organisational behaviour remains stubbornly consistent. From mainframes to personal computing, from the internet to cloud, and now with artificial intelligence, enterprises repeat the same mistakes, in the same order, with the same outcomes. The result is predictable disruption, predictable resistance, predictable waste, and eventually predictable value once enough damage has been absorbed.
This is not a failure of intelligence. It is a failure of institutional learning. Organisations tend to treat each technological shift as unprecedented, when in fact it is structurally familiar. The opportunity with AI is not simply to adopt a new toolset, but to recognise the pattern early and deliberately skip stages that previously took decades to resolve.
Stage One: Dismissal as a Fad
Every major computing shift begins with dismissal. In the 1950s and 1960s, early digital computing was viewed by many executives as an academic curiosity. Computers were large, expensive, unreliable, and relevant only to scientific research or government census work. Business leaders questioned whether machines could ever meaningfully support commercial decision making.
The same rhetoric reappeared in the late 1970s with personal computing. Senior IT leaders argued that computers belonged in data centres, not on desks. Executives dismissed early PCs as toys for hobbyists, incapable of handling serious workloads. Similar arguments resurfaced in the early internet era, when email, websites, and online commerce were framed as novelties rather than foundations of future business models.
With AI, the pattern is identical. Early commentary labelled it a gimmick, a chatbot, or an experimental feature with no strategic relevance. Organisations reassured themselves that existing analytics, rules engines, or human expertise were sufficient. This stage is marked by passive confidence and minimal investment.
The key mistake at this stage is underestimating trajectory. The error is not misunderstanding current capability, but ignoring exponential improvement.
Stage Two: Dismissal as Impractical
Once the technology proves it can function, the narrative shifts. The objection is no longer that it does nothing, but that it does nothing useful. Mainframes were criticised for being too slow to adapt to business change. Personal computers were said to be unmanageable and insecure. The early internet was described as unreliable, untrustworthy, and unsuitable for real commerce.
This stage is where pilots appear, but only at the margins. Small teams experiment. Proofs of concept are commissioned with no intent to scale. Success is reframed as irrelevance, because it does not map neatly onto existing operating models.
AI now sits firmly in this phase for many enterprises. Models work, but leaders argue they are not accurate enough, not explainable enough, or not integrated enough to justify structural change. AI is positioned as an assistant rather than a system. The organisation protects itself by constraining scope.
The mistake here is treating structural mismatch as a flaw in the technology rather than a signal that existing processes are misaligned.
Stage Three: Bubble and Hype Anxiety
As adoption accelerates externally, internal scepticism hardens. The technology is reframed as a bubble. In the late 1990s, the internet boom became synonymous with irrational valuations and unsustainable business models. Earlier, minicomputers and departmental systems were accused of fragmenting enterprise control. Even personal computing faced backlash as costs ballooned and standards fragmented.
This stage is emotionally driven. Leaders fear being associated with waste, hype, or embarrassment. Investment freezes are justified under the banner of prudence. Risk committees grow louder. The organisation convinces itself that waiting is the rational choice.
AI is currently experiencing this phase in many regulated sectors. Executives cite overblown claims, vendor noise, and unclear returns. The technology becomes politically dangerous rather than strategically urgent.
The mistake is confusing market exuberance with technological invalidity. Bubbles distort pricing, not inevitability.
Stage Four: Safety, Risk, and Control Panic
Once the technology demonstrates real impact elsewhere, resistance pivots again. The argument becomes one of safety. Mainframes raised concerns about centralised failure. PCs raised fears of data leakage. The internet triggered moral panics around security, fraud, and reputational risk.
These concerns are not wrong. They are simply late. Risk frameworks emerge after the technology is already reshaping behaviour. Control functions react rather than design.
With AI, this stage is especially pronounced. Ethical risk, bias, hallucinations, regulatory exposure, and security vulnerabilities dominate the conversation. Governance teams rush to impose policy retroactively. Controls are bolted on rather than engineered in.
The mistake is treating risk as a reason to delay adoption rather than a design constraint to incorporate early.
Stage Five: Crisis and Competitive Realisation
Eventually, denial collapses. Competitors move faster. New entrants reshape markets. Customers expect capabilities that incumbents cannot deliver. The organisation realises it is behind the curve.
This is the crisis phase. In the 2000s, businesses that delayed internet adoption scrambled to build e-commerce and digital channels. Enterprises that resisted personal computing found themselves uncompetitive in productivity and talent retention. Mainframe-only organisations were overtaken by distributed systems.
AI is approaching this inflection point now. Boards are beginning to ask why costs are rising while peers are automating. Investors are questioning productivity gaps. Employees are already using AI tools outside formal systems.
The mistake here is rushing without strategy. Panic replaces prudence.
Stage Six: Chaotic and Suboptimal Implementation
Under pressure, organisations implement quickly and badly. Shadow systems proliferate. Tools are adopted without integration. Data quality issues are ignored. Governance is bypassed to achieve speed.
This stage is characterised by firefighting. Costs rise. Incidents occur. Confidence drops. The technology gains a reputation for being unreliable, when in reality the implementation is.
History is clear on this point. The early internet era was littered with failed portals and insecure systems. PC sprawl created decades of technical debt. Distributed computing created fragmentation that took years to consolidate.
AI will follow the same path unless intervention occurs earlier.
Stage Seven: Maturity, Value, and Institutional Learning
Eventually, organisations stabilise. Patterns emerge. Standards form. Operating models adapt. The technology becomes boring, which is the highest compliment in enterprise IT.
At this stage, value is well understood. Governance is embedded rather than obstructive. Talent models adjust. The technology becomes infrastructure.
The tragedy is not that this stage takes time. It is that organisations behave as though it must take the same amount of time every cycle.
How AI Allows Us to Skip Stages
AI is not the first transformative technology, but it may be the first where the pattern is widely visible in advance. We have historical evidence. We know the stages. We know the failure modes.
This creates an opportunity. Enterprises can deliberately skip stages by acting differently.
First, treat AI as a structural capability, not a tool. Every previous cycle failed where organisations tried to contain change rather than redesign around it.
Second, integrate governance from inception. Safety, ethics, and compliance are not blockers if they are designed as architecture rather than policy overlays.
Third, focus on value pathways, not experimentation theatre. Pilots should exist only where there is a clear path to scale.
Fourth, invest in operating model change early. Skills, incentives, and accountability matter more than model accuracy.
Finally, accept that uncertainty is permanent. Waiting for perfect clarity is simply a slower way to lose ground.
The Strategic Imperative
The challenge is not AI. The challenge is organisational memory. Enterprises repeatedly relearn the same lessons at enormous cost. AI offers a rare chance to break that cycle.
The organisations that succeed will not be those with the most advanced models, but those that recognise the pattern early and choose not to repeat it.
Same challenge. Different subject. The outcome is still a choice.