The BSI (British Standards Institution) has issued a stark warning: many organisations are embracing AI at pace — yet far too few have implemented adequate governance frameworks.
The State of Play: Rapid AI Investment, Weak Governance
A recent BSI-commissioned global study, combining an AI-assisted analysis of over 100 multinational annual reports with two surveys of more than 850 senior executives, reveals a worrying disconnect. Some 62% of business leaders expect to increase AI investment within the next year, motivated primarily by potential gains in productivity (61%) and cost reduction (49%), and a majority (59%) view AI as critical to their future growth.
Despite this enthusiasm, only 24% of organisations report having a formal AI governance programme in place. Even among large enterprises (250+ employees), this figure rises only modestly to 34%.
Additional signals reinforce the governance shortfall:
- Only 47% say AI usage is controlled by formal processes (up from 15% earlier in 2025).
- Only about one in three (34%) employ voluntary codes of practice.
- Just 22% of businesses restrict employees' use of unauthorised AI tools.
On matters of data governance — a critical component for ethical and reliable AI deployment — only 28% of leaders know the data sources used to train or deploy their AI tools (down from 35% six months earlier). Only 40% report having formal processes for handling confidential data used in AI training.
The consequences extend beyond compliance concerns. According to BSI, nearly a third of executives (32%) report that AI has already introduced new risks or weaknesses to their business.
What This “Governance Gap” Means for Enterprises
Strategic Risk: Overconfidence without Guardrails
High-level enthusiasm for AI's transformative potential is understandable, but without governance, organisations expose themselves to serious vulnerabilities, including:
- Operational failures (e.g., untested AI tools deployed without proper oversight) — only 33% have a standard process for introducing new AI tools.
- Undetected or unmanaged risk exposure — just 30% conduct formal risk assessments before deploying AI solutions.
- Ineffective incident management — only 29–32% have processes for logging issues, managing errors, or ensuring timely incident response if AI tools fail.
Reputational Risk & Regulatory Pressure
As awareness of AI risks grows, stakeholders — investors, customers, regulators — will demand accountability. Diverging governance standards across organisations and markets create a landscape where some firms could suffer reputational damage, even systemic risk, should an AI-related issue arise.
Lost Value & Hidden Costs
Interestingly, BSI’s research notes that 43% of executives feel AI investment has diverted resources from other critical projects. Without proper governance to ensure efficiency, reliability and clarity, organisations may see little of the promised productivity gains — while still bearing the full cost.
Emerging Standards and the Path Toward Responsible AI
The picture isn’t entirely bleak. In 2025, BSI and international standards bodies have taken important steps to build frameworks to support safer, more reliable AI governance:
- The introduction of ISO/IEC 42001 — the first international standard for AI management systems — provides guidance on policies, processes, risk management and ongoing monitoring for organisations deploying AI.
- Complementing it, a new standard, ISO/IEC 42006:2025, sets out requirements for bodies performing audits of AI management systems, addressing previous inconsistencies across the growing AI auditing market.
- By clarifying auditor competence, methodology, independence and impartiality, ISO/IEC 42006 helps avoid the “wild west” scenario of unchecked providers and variable audit rigour.
These developments suggest a maturing market for AI assurance, which in turn implies that organisations adopting governance standards early and proactively may gain competitive advantage and future-proof their compliance readiness.
What Enterprise Leaders (CIO / CTO / CISO) Should Do Immediately
- Conduct an AI-diagnostic audit — Establish a baseline: what AI tools are being used (formal and “shadow AI”), where, by whom, and for what purpose.
- Design and implement a formal AI governance programme — Define policies for AI adoption, risk assessment workflows, approval processes, monitoring, and incident response. Align with ISO/IEC 42001 as a foundation.
- Engage or certify against audit standards — If your organisation audits its AI practices (internally or through third-party vendors), align with ISO/IEC 42006 so that audits are rigorous, impartial and repeatable.
- Incorporate data governance and transparency — Document data sources for model training or deployment, control use of confidential data, and build auditable trails.
- Invest in human capital and training — Balance tool adoption with upskilling staff: human oversight, ethical awareness, and process alignment remain critical — especially as AI systems become more autonomous.
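The first two steps above — building a baseline inventory of AI tools (including shadow AI) and flagging those that lack risk assessment or formal approval — can be sketched as a simple register. This is a minimal illustration, not a prescribed ISO/IEC 42001 artefact; all field and function names here are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical minimal register for the baseline AI audit described above.
# Field names (owner, data_sources, risk_assessed, approved) are illustrative,
# not taken from ISO/IEC 42001 itself.

@dataclass
class AIToolRecord:
    name: str
    owner: str                      # accountable team or individual
    purpose: str
    data_sources: list = field(default_factory=list)
    risk_assessed: bool = False     # formal risk assessment completed?
    approved: bool = False          # passed the approval workflow?
    recorded_on: date = field(default_factory=date.today)

def governance_gaps(register):
    """Return names of tools in use that lack a risk assessment or approval."""
    return [t.name for t in register if not (t.risk_assessed and t.approved)]

register = [
    AIToolRecord("chat-assistant", "Marketing", "copywriting",
                 data_sources=["public web"], risk_assessed=True, approved=True),
    AIToolRecord("spreadsheet-plugin", "Finance", "forecasting"),  # shadow AI: never reviewed
]

print(governance_gaps(register))  # -> ['spreadsheet-plugin']
```

Even a lightweight register like this makes the governance gap measurable: the share of tools returned by a check such as `governance_gaps` is a direct internal analogue of the survey figures cited earlier.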
Why This Matters for Strategic AI Adoption
Unchecked AI investment offers short-term gains, but without governance, it invites long-term risk: operational failures, data breaches, compliance liabilities, and reputational damage. As regulatory scrutiny rises and stakeholders demand transparency and accountability, the cost of “sleepwalking into AI” is likely to grow — potentially far outweighing any near-term gains.
For organisations that treat AI as a strategic capability rather than a tactical tool, governance becomes not just a compliance burden, but a competitive differentiator: lowering risk, building trust, enabling scalability, and unlocking sustainable value.
Partnering with a consultancy such as Strategic AI Guidance Ltd can help organisations design and implement AI governance programmes tailored to their operational context, align with international standards, and ensure the right balance between innovation and control.