How to Consolidate AI Spend, Negotiate Better Commercials, and Prove Value in 90 Days
Executive context
Enterprise AI adoption is now following the same arc as early cloud: rapid experimentation, fragmented buying, overlapping capabilities, and a cost model that shifts from capital to variable consumption. The difference is that AI adds new forms of risk alongside the cost volatility: data leakage pathways, model behavior and drift, IP ambiguity, and emerging regulatory expectations for governance, transparency, and operational controls. The finance and procurement mandate is therefore no longer “buy some tools” but “run an AI portfolio” with measurable outcomes, predictable run rate, and auditable controls.
Portfolio discipline is the practical mechanism that reconciles these pressures. It treats AI as a managed estate: standard architectures, approved patterns, metered consumption, consolidated vendors, and a governance cadence that links spend to outcomes. It also provides a defensible operating baseline for risk frameworks like NIST AI RMF and for AI management system approaches aligned to ISO/IEC 42001, while staying compatible with regulatory direction such as the EU AI Act where relevant.
Why tool sprawl happens in AI, and why it is expensive in unique ways
AI sprawl is usually rational at the point of purchase. Teams buy whatever unblocks delivery: a copilot for productivity, a separate transcription tool, a separate chatbot, a separate API subscription for an innovation squad, plus multiple niche vendors for document search, meeting notes, and content generation. The organisational failure is not the purchases themselves, but the absence of a portfolio layer that connects them.
AI sprawl becomes expensive for three distinct reasons:
1) Dual cost stack: seats plus consumption.
Many vendors blend per-user licensing with variable, usage-based pricing (tokens, minutes, calls, or “credits”), which makes forecasting materially harder than for standard SaaS.
2) Redundant capability, fragmented data, fragmented control.
Four tools can be doing the same job with different safety settings, different retention rules, and different auditability. This magnifies security and compliance exposure even when each individual tool looks “reasonable”.
3) Vendor lock through workflow gravity.
Lock-in rarely happens through contracts. It happens through embedded prompts, proprietary connectors, fine-tuned configurations, and teams building muscle memory around a single interface. Exit costs become operational, not legal.
A CFO and procurement response that focuses only on price per seat misses the real levers: unit economics, demand management, architectural standardisation, and governance. FinOps-style operating models are directly relevant here because they were designed for variable-spend environments and for accountability across distributed teams.
The portfolio discipline model
Portfolio discipline is a set of policies and mechanisms that turn AI from ad hoc purchases into a controlled estate.
1) Define the AI portfolio taxonomy
A workable taxonomy creates comparability across vendors and use cases:
- Productivity assistants (general copilots, writing assistants)
- Knowledge and search (RAG, enterprise search, Q&A over documents)
- Customer and agent support (contact centre, internal service desk)
- Developer acceleration (coding copilots, code review, testing)
- Media generation (speech, video, image)
- Platform layer (LLM APIs, orchestration, evaluation, guardrails)
- Risk and assurance layer (model monitoring, red teaming, policy enforcement)
This forces the “two tools that do the same thing” conversation early.
2) Establish unit economics as the decision language
Replace tool-level discussions with “cost per unit of value”. Examples:
- Cost per knowledge answer that meets accuracy threshold
- Cost per customer case deflected
- Cost per document summarised at agreed quality
- Cost per software ticket resolved
- Cost per 1,000 tokens for production workloads, including guardrails and monitoring
FinOps language is helpful because it frames marginal cost and allocation, enabling showback or chargeback to the teams generating demand.
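To make the decision language concrete, here is a minimal sketch of a fully loaded unit cost calculation for one of the examples above, cost per customer case deflected. The cost categories follow this article's framing; the figures, field names, and deflection volume are illustrative assumptions, not benchmarks.

```python
from dataclasses import dataclass

@dataclass
class UseCaseCosts:
    """Monthly cost inputs for one AI use case (all figures assumed)."""
    seat_cost: float        # per-user licences attributable to the use case
    token_cost: float       # metered API consumption
    guardrail_cost: float   # evaluation, monitoring, policy enforcement tooling
    platform_share: float   # allocated share of shared platform spend

def cost_per_unit(costs: UseCaseCosts, units_delivered: int) -> float:
    """Fully loaded cost per unit of value, e.g. per case deflected."""
    total = (costs.seat_cost + costs.token_cost
             + costs.guardrail_cost + costs.platform_share)
    return total / units_delivered

# Illustrative contact centre deflection figures
deflection = UseCaseCosts(seat_cost=4_000, token_cost=2_500,
                          guardrail_cost=1_200, platform_share=800)
print(f"£{cost_per_unit(deflection, units_delivered=3_400):.2f} per case deflected")
# -> £2.50 per case deflected
```

The point of the structure is that guardrails and monitoring sit inside the unit cost, not on a separate overhead line; a tool that looks cheap per seat can be expensive per unit once assurance costs are loaded.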
3) Create a risk tiering baseline that procurement can enforce
At minimum, tier AI use by data sensitivity, operational impact, and regulatory proximity. If operating in the EU market, ensure procurement can route higher-risk use cases into stronger obligations, documentation, and monitoring paths aligned to the deployer duties described in the EU AI Act’s “high-risk” framing.
Independently of geography, align controls to recognised risk management structures such as NIST AI RMF (govern, map, measure, manage) and build an AI management system approach consistent with ISO/IEC 42001 for repeatability.
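One way to make the tiering enforceable rather than aspirational is to encode it as data that a procurement intake workflow can evaluate automatically. The sketch below assumes a simple 1 to 3 scoring of the three dimensions named above; the tier names, scoring rule, and obligation lists are illustrative, not a compliance determination.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = 1     # public data, low operational impact
    MEDIUM = 2  # internal data, moderate impact
    HIGH = 3    # sensitive data, high impact, or close regulatory proximity

# Illustrative obligations per tier; an organisation would derive its own
# from NIST AI RMF functions and its ISO/IEC 42001 management system.
TIER_OBLIGATIONS = {
    RiskTier.LOW: ["audit logging"],
    RiskTier.MEDIUM: ["audit logging", "retention review"],
    RiskTier.HIGH: ["audit logging", "retention review",
                    "model documentation", "human oversight plan",
                    "pre-deployment evaluation"],
}

def tier_use_case(data_sensitivity: int, operational_impact: int,
                  regulatory_proximity: int) -> RiskTier:
    """Score each dimension 1-3 and take the maximum: one high-risk
    dimension is enough to route the use case into the high tier."""
    return RiskTier(max(data_sensitivity, operational_impact,
                        regulatory_proximity))

print(tier_use_case(data_sensitivity=3, operational_impact=1,
                    regulatory_proximity=2))  # RiskTier.HIGH
```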
The 90 day consolidation and value program
This 90 day plan is designed to deliver two things simultaneously: cost control and demonstrable outcomes. It assumes an enterprise with existing AI tool sprawl.
Days 1 to 15: Build the spend and usage truth
Deliverables
- AI vendor register: every AI related contract, renewal date, commercial model, and owner
- Usage telemetry map: what can be measured today (seats, tokens, calls, minutes), and what cannot
- Data and risk posture snapshot: data types used, retention settings, and key control gaps
- Capability overlap heatmap: which tools compete in each taxonomy category
Practical rule
No consolidation decision is made without pairing spend with usage and with a minimum risk posture view.
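As an illustration of the practical rule, a minimal sketch of what pairing spend with usage looks like once the vendor register and telemetry map exist. Vendor names, seat counts, and active-user figures are illustrative assumptions.

```python
# Pair spend with usage before any consolidation call is made.
vendors = [
    {"vendor": "A", "monthly_spend": 42_000, "seats": 1_500, "active_users": 1_100},
    {"vendor": "B", "monthly_spend": 13_200, "seats": 600,   "active_users": 210},
    {"vendor": "C", "monthly_spend": 12_000, "seats": 400,   "active_users": 95},
]

for v in vendors:
    utilisation = v["active_users"] / v["seats"]
    cost_per_active = v["monthly_spend"] / v["active_users"]
    print(f"Vendor {v['vendor']}: {utilisation:.0%} of seats active, "
          f"£{cost_per_active:,.0f} per active user per month")
```

A low utilisation figure flags a demand-shaping or consolidation candidate before anyone debates list price; similar-looking spend lines can hide a £38 and a £126 cost per active user.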
Days 16 to 35: Define portfolio standards and target architecture
Deliverables
- Approved AI patterns: sanctioned ways to do common tasks (search over documents, summarisation, drafting, agent workflows)
- Vendor rationalisation shortlist: primary vendor per category plus an exceptions process
- Commercial guardrails: approved pricing models, required telemetry, and minimum contract clauses
- “Stop list” criteria: conditions that trigger removal (no audit logs, unclear retention, inability to segregate tenants, weak data protection terms)
This step reduces future sprawl by designing the default path.
Days 36 to 60: Consolidate, renegotiate, and cap the run rate
Deliverables
- Consolidation waves: quick wins first (duplicate copilots, duplicate meeting tools), then platform decisions (API providers, orchestration)
- Renegotiated commercials with three outcomes:
- reduced unit cost
- improved predictability (caps, commits, rollover)
- stronger control terms (audit, retention, change notice)
Worked example
- Current state: 3 overlapping copilots
- Vendor A: 1,500 seats at £28 per user per month = £42,000 per month
- Vendor B: 600 seats at £22 per user per month = £13,200 per month
- Vendor C: 400 seats at £30 per user per month = £12,000 per month
- Total seat run rate = £67,200 per month
- Consolidated state: 1 strategic copilot vendor with 1,700 seats at £24 per user per month = £40,800 per month
- Seat savings = £26,400 per month, £316,800 annualised
Then add consumption control:
- Prior state: unmanaged usage add ons averaging £18,000 per month across teams
- Negotiated state: committed usage block at £12,000 per month with rollover plus a hard cap that forces approval above threshold
- Consumption savings = £6,000 per month, £72,000 annualised
Total annualised savings in this single category = £388,800, plus lower operational risk through standardised controls.
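The arithmetic is simple enough to keep as a reusable sketch that finance can re-run with their own inputs at each consolidation wave; the figures below mirror the worked example.

```python
def seat_run_rate(seats: int, price_per_user: float) -> float:
    """Monthly seat cost for one vendor."""
    return seats * price_per_user

current = sum([
    seat_run_rate(1_500, 28),  # Vendor A: £42,000
    seat_run_rate(600, 22),    # Vendor B: £13,200
    seat_run_rate(400, 30),    # Vendor C: £12,000
])                             # £67,200 per month
consolidated = seat_run_rate(1_700, 24)       # £40,800 per month
seat_savings = (current - consolidated) * 12  # £316,800 annualised

consumption_savings = (18_000 - 12_000) * 12  # £72,000 annualised

print(f"Total annualised savings: £{seat_savings + consumption_savings:,.0f}")
# -> Total annualised savings: £388,800
```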
Days 61 to 90: Prove value with outcome metrics, not anecdotes
Deliverables
- Value scorecard with agreed definitions
- 3 to 5 production use cases instrumented end to end
- Cost allocation model (showback initially, chargeback later if needed)
- Continuous evaluation and assurance loop (quality, safety, drift)
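As a starting point for the cost allocation deliverable, a minimal showback sketch that allocates the committed usage block across teams in proportion to metered consumption. The team names and token volumes are illustrative assumptions; the £12,000 block is the negotiated commit from the previous phase.

```python
# Allocate a committed monthly usage block across teams by metered tokens.
COMMITTED_BLOCK = 12_000  # £ per month

team_tokens = {"sales": 4_000_000, "support": 9_000_000, "engineering": 7_000_000}
total_tokens = sum(team_tokens.values())

for team, tokens in team_tokens.items():
    share = tokens / total_tokens
    print(f"{team}: {share:.0%} of usage -> £{COMMITTED_BLOCK * share:,.0f} showback")
# sales: 20% -> £2,400; support: 45% -> £5,400; engineering: 35% -> £4,200
```

Showback by consumption makes demand visible without the organisational friction of chargeback; moving to chargeback later changes only where the journal entry lands, not the model.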
Outcome metrics that finance can trust
- Productivity: time saved converted to capacity released, with adoption adjusted for real usage (see the sketch after this list)
- Revenue support: faster cycle times, improved win rates, reduced rework, with attribution factors agreed upfront
- Cost avoidance: deflection, automation rates, reduced vendor duplication
- Risk outcomes: reduction in unmanaged tools, improved logging coverage, documented model governance
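For the productivity line, a minimal sketch of the adoption-adjusted conversion. All inputs are illustrative assumptions; the point is that the adoption rate comes from telemetry, not from licence counts or vendor claims.

```python
# Convert measured time saved into capacity released, adoption adjusted.
licensed_users = 1_700
weekly_active_rate = 0.62          # from usage telemetry
hours_saved_per_active_user = 1.5  # per week, from instrumented tasks
productive_hours_per_fte = 37.5    # per week

capacity_released_fte = (licensed_users * weekly_active_rate
                         * hours_saved_per_active_user) / productive_hours_per_fte
print(f"{capacity_released_fte:.1f} FTE of weekly capacity released")
# -> 42.2 FTE of weekly capacity released
```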
Negotiation playbook: better commercials without buying risk
Price negotiations in AI fail when procurement treats them like conventional SaaS deals. The leverage is different. Key clauses and levers:
Commercial levers
- Unit definition: insist on clarity (per seat, per token, per call, per workflow). Avoid blended units that prevent benchmarking.
- Commits and rollover: commit discounts are acceptable when paired with rollover and visibility. Avoid “use it or lose it” unless the business case is locked.
- Hard caps and rate protection: cap overage rates and enforce approval gates when thresholds are crossed (see the sketch after this list).
- Concurrency and tiering: negotiate concurrency pools and tiered access (heavy users vs light users) to prevent over licensing.
- Benchmark rights: allow periodic benchmarking or re-pricing if list rates drop materially.
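A sketch of the threshold logic a hard cap implies on the buyer's side, assuming a monthly commit with a cap at a negotiated multiple; the figures and thresholds are illustrative.

```python
def usage_gate(spend_to_date: float, committed: float, hard_cap: float) -> str:
    """Route usage through the contractual thresholds: within commit,
    overage pending approval, or blocked at the hard cap."""
    if spend_to_date <= committed:
        return "within commit"
    if spend_to_date <= hard_cap:
        return "overage: route to approval gate"
    return "hard cap reached: block further usage pending renegotiation"

# Assumed: £12,000 commit with a hard cap at 125% of commit
print(usage_gate(spend_to_date=13_500, committed=12_000, hard_cap=15_000))
# -> overage: route to approval gate
```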
Control and assurance terms
- Telemetry as a contractual requirement: vendor must provide usage data at the granularity needed for showback and unit economics.
- Change control: model upgrades, policy changes, and pricing metric changes require notice and, for material changes, an opt out path.
- Data retention and training boundaries: explicit statements on whether customer data is used for training, and retention limits.
- Audit support: practical audit evidence and reporting, not marketing statements.
- Exit and portability: configuration export, prompt library export, and reasonable transition assistance.
These terms support governance approaches emphasised by NIST AI RMF and AI management system thinking in ISO/IEC 42001, translating them into procurement enforceability.
Operating model: who owns what after day 90
Portfolio discipline requires a simple operating model:
- CFO: sets financial guardrails, approves unit economics approach, owns reporting cadence
- Procurement: enforces contract standards, manages renewals as a portfolio, runs competitive tension
- CIO or CTO: owns reference architectures, platform standards, integration patterns
- CISO: owns risk tiering, logging, retention, incident pathways, assurance requirements
- Product and business owners: own outcome metrics and adoption accountability
A monthly portfolio review replaces sporadic tool renewals. Inputs: spend, usage, unit cost, outcome delivery, risk posture.
What to avoid
- Consolidating purely on price per seat while ignoring consumption and exit costs
- Allowing exceptions without sunset dates and measurement
- Rolling out “enterprise-wide” licensing without demand shaping and usage evidence
- Treating AI risk as a policy document rather than an enforceable procurement and platform control set