How to Buy GenAI Without Lock-In, Hidden Usage Costs, or Uninsurable Compliance Risk
GenAI procurement fails in predictable ways because most organisations try to buy it like SaaS: fixed licence, generic data clauses, and a security schedule that assumes stable, deterministic software. GenAI is not that. It is stochastic, usage-metered, supply-chain heavy (models, hosting, tools, plug-ins, sub-processors), and increasingly regulated. If you contract it like yesterday’s software, you get three outcomes: lock-in you cannot unwind, unit economics you cannot forecast, and compliance exposure your insurer will not price with confidence.
This playbook is a procurement-first model for buying GenAI as a controlled capability, not a vendor relationship. It focuses on contract architecture, measurable commercial levers, and the governance evidence that reduces downside when regulators, auditors, or a board committee ask the hard questions.
1) Start with a procurement definition of “what are we buying”
Before requirements, write a one-page “GenAI Supply Object Definition” that removes ambiguity.
Minimum fields to lock down:
- Delivery shape: embedded feature in an existing platform, standalone SaaS, API consumption, private deployment, or managed service.
- Model scope: which base model(s) and versions; whether the supplier can swap models; whether you can pin versions.
- Data pathways: what enters prompts, what is retrieved (RAG), what is logged, what is retained, what is used for training.
- Tooling: browsers, connectors, code execution, email access, CRM access, ticketing access.
- Output reliance: decision support vs automated decisioning; human approval gates.
- Commercial unit: tokens, calls, seats, “credits”, compute minutes, or “features” with hidden consumption.
Procurement outcome: a clear object that can be benchmarked, priced, risk-rated, and exited.
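The minimum fields above can be held as a structured record rather than prose, so the object can be benchmarked and diffed between renewals. A minimal sketch in Python; all field names and example values are hypothetical, not any supplier's actual schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class GenAISupplyObject:
    """One-page supply object definition, locked before requirements gathering."""
    delivery_shape: str   # "embedded", "saas", "api", "private", or "managed"
    model_scope: dict     # base models, versions, swap rights, pinning rights
    data_pathways: dict   # prompts, RAG, logging, retention, training use
    tooling: list         # connectors and execution capabilities granted
    output_reliance: str  # "decision_support" or "automated_decisioning"
    commercial_unit: str  # tokens, calls, seats, credits, compute minutes

def unresolved_fields(obj):
    """Any empty field is ambiguity the supplier will price in their favour."""
    return [k for k, v in asdict(obj).items() if not v]

# Hypothetical example: API consumption with a pinned model version.
obj = GenAISupplyObject(
    delivery_shape="api",
    model_scope={"base_models": ["model-x-2024"], "swap_allowed": False},
    data_pathways={"training_on_customer_data": False, "retention_days": 30},
    tooling=["crm_connector"],
    output_reliance="decision_support",
    commercial_unit="tokens",
)
assert unresolved_fields(obj) == []  # fully specified: ready to price and risk-rate
```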
2) Engineer lock-in out of the contract, not out of your hopes
Lock-in in GenAI is rarely just “vendor choice”. It is usually one of these:
- Prompt and workflow lock-in: business logic becomes a pile of prompts, tools, and agent graphs that only run on one platform.
- Data gravity: embeddings, vector stores, conversation logs, evaluation traces, and fine-tuning datasets become proprietary artefacts.
- Model portability constraints: you cannot export fine-tunes, adapters, system prompts, safety layers, or routing logic in a usable form.
Contract controls that work:
- Portability schedule (deliverables, not principles): Require export formats for prompts, policies, tool definitions, routing logic, evaluation harnesses, and logs. Specify formats (JSON/YAML) and a minimum documentation standard.
- Right to pin and right to refuse swaps: If the supplier can change the underlying model, require notice, regression results, and your right to reject changes that degrade quality, cost, latency, or compliance posture.
- Benchmark escrow: Not source code escrow. Benchmark escrow. Mandate a jointly-owned evaluation pack: test prompts, gold answers, risk tests, and cost-per-task baselines. If you replatform, you take the pack with you.
- Exit assistance as a priced option: Include a rate card and service levels for exit: export, deletion certificates, migration support, and knowledge transfer.
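A portability schedule works best when it names a concrete export shape rather than a principle like "reasonable assistance". A minimal sketch of what such a bundle could look like, assuming a hypothetical JSON layout (every name and value here is illustrative):

```python
import json

# Hypothetical portable export bundle the schedule could mandate.
export_bundle = {
    "prompts": [
        {"id": "triage-v3", "role": "system",
         "text": "Classify the ticket by product area.", "owner": "customer"},
    ],
    "tool_definitions": [
        {"name": "crm_lookup",
         "schema": {"type": "object",
                    "properties": {"account_id": {"type": "string"}}}},
    ],
    "routing_logic": {"default_model": "model-x-2024",
                      "fallback": "model-y-small"},
    "evaluation_pack": {"test_cases": 120, "gold_answers": True,
                        "cost_baselines": True},
}

# The format must round-trip losslessly, or it is not a usable export.
serialized = json.dumps(export_bundle, indent=2)
assert json.loads(serialized) == export_bundle
```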
3) Kill hidden usage costs with unit economics and metering rights
GenAI cost overruns are rarely “too many users”. They are opaque metering plus multi-dimensional consumption.
Common cost drivers you must surface:
- Input and output tokens (including hidden system prompts and tool call transcripts)
- Context window expansion (cost rises non-linearly as you push more text)
- Retrieval costs (embedding generation, vector DB reads, reranking)
- Tool execution (browsing, code execution, third-party APIs)
- Logging and retention (observability platforms, audit logs, storage)
- Concurrency and rate limits (forcing you into higher tiers)
- Regional hosting premiums (data residency requirements)
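These drivers can be surfaced per request as line items rather than netted into one opaque figure. A minimal cost sketch, assuming a hypothetical rate card (the real numbers come from the metering transparency clause below):

```python
def request_cost(input_tokens, output_tokens, retrieval_ops, tool_calls, rates):
    """Line-item cost for one request: every driver visible, none hidden
    inside a 'credit'. All rates are illustrative assumptions."""
    lines = {
        "input_tokens": input_tokens / 1000 * rates["per_1k_input"],
        "output_tokens": output_tokens / 1000 * rates["per_1k_output"],
        "retrieval": retrieval_ops * rates["per_retrieval_op"],
        "tool_calls": tool_calls * rates["per_tool_call"],
    }
    return lines, round(sum(lines.values()), 6)

# Hypothetical rate card for illustration only.
rates = {"per_1k_input": 0.003, "per_1k_output": 0.015,
         "per_retrieval_op": 0.0002, "per_tool_call": 0.001}
lines, total = request_cost(input_tokens=2500, output_tokens=800,
                            retrieval_ops=4, tool_calls=2, rates=rates)
# total is 0.0223 under these assumed rates
```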
Commercial controls that work:
- A “cost per business task” schedule: Define 5 to 10 target tasks and require the supplier to provide baseline cost bands for each task under agreed assumptions. This becomes a contractual reference point for run-rate governance.
- Metering transparency clause: You need line-item visibility: tokens, tool calls, retrieval ops, and overage triggers. If the provider uses “credits”, require a published conversion table.
- Spend caps with graceful degradation: Hard caps cause outages. Require “degrade modes”: cheaper model routing, reduced context, tool-call throttling, or human fallback.
- FinOps for AI operating rhythm: Monthly cost review with procurement, engineering, and security, with mandatory variance explanations and remediation actions.
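The graceful-degradation control can be sketched as a simple spend-to-mode mapping. Thresholds and mode names here are illustrative assumptions, not a standard; the contract should fix the actual values:

```python
def degrade_mode(month_to_date_spend, cap):
    """Map spend against the cap to a degrade mode instead of a hard outage."""
    ratio = month_to_date_spend / cap
    if ratio < 0.80:
        return "normal"           # full model, full context, tools enabled
    if ratio < 0.95:
        return "cheap_routing"    # route to a cheaper model tier
    if ratio < 1.00:
        return "reduced_context"  # trim context and throttle tool calls
    return "human_fallback"       # stop automated spend, queue for humans

assert degrade_mode(5_000, 10_000) == "normal"
assert degrade_mode(12_000, 10_000) == "human_fallback"
```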
4) Make compliance insurable by contracting for evidence
The compliance problem is not “we will comply”. It is “we can prove compliance, at scale, across a supply chain”.
EU AI Act: contract for classification and obligations
The EU AI Act entered into force on 1 August 2024.
Its obligations apply on a staged basis: prohibitions on certain practices from 2 February 2025, general-purpose AI model obligations from 2 August 2025, most remaining provisions from 2 August 2026, and some high-risk system obligations as late as 2 August 2027.
Procurement implication: you need the supplier to state, contractually:
- the role they occupy (provider, deployer, distributor, importer) for your use-case
- the risk classification of the system you are buying (and what changes could move it into high-risk territory)
- the controls and artefacts they will provide to support your obligations (technical documentation, logging, instructions for use, incident support, and transparency features)
If your supplier will not sign up to a clear classification statement and an evidence pack, you are purchasing ambiguity, not capability.
GDPR and processor contracts: stop accepting “platform terms” as sufficient
Where personal data is processed, your contract must reflect Article 28 requirements and provide enforceable control over sub-processors and processing instructions. The ICO sets out what must be included in controller-processor contracts under UK GDPR.
The EDPB guidelines reinforce that the Article 28 contract requirements are not optional “paperwork”; they are core accountability controls.
Procurement-ready GenAI contract requirements:
- sub-processor disclosure and approval mechanics
- retention limits for prompts, logs, and outputs
- training restrictions (no training on your data unless explicitly opted in with defined scope)
- support for data subject rights and deletion workflows
- cross-border transfer mechanism clarity (where applicable)
Use recognised governance frameworks as your evidence backbone
Insurers and auditors trust repeatable management systems more than bespoke slideware.
- ISO/IEC 42001:2023 is an AI management system standard built around governance and continuous improvement.
- NIST AI RMF provides a structured approach to governing, mapping, measuring, and managing AI risks.
Contract for alignment: require the supplier to map their controls to your chosen framework and provide the artefacts on request.
5) The clause set that actually matters for GenAI
Most GenAI contracts win or lose on a small set of terms.
- Data use and training: Explicitly define “Customer Data”, “Customer Content”, “Derived Data”, “Service Improvement”, and “Training”. Prohibit training on customer data by default. If training is allowed, define scope, purpose, retention, and opt-out mechanics.
- Security and model-specific threats: Include controls for prompt injection, data exfiltration via tool use, malicious retrieval content, and cross-tenant leakage. Require security testing evidence and incident response procedures that explicitly include model behaviour failures.
- Change control for models and safety layers: Model updates can change output risk. Require notice, testing evidence, and rollback options.
- Audit and assurance: Do not rely solely on generic SOC reports. Require the right to receive model documentation, logging summaries, and incident metrics relevant to your use-case.
- Liability that matches AI reality: If the supplier disclaims everything that matters (IP infringement, data leakage, regulatory breaches caused by their platform defects), you do not have a risk transfer mechanism. Your goal is not unlimited liability; it is aligned liability around the supplier’s controllable risk.
6) Due diligence questions that separate mature suppliers from demos
Use these as a procurement gate before negotiation time is wasted.
- Can you pin model versions, and what is your model swap policy?
- What is your prompt, log, and output retention default and configurable range?
- Do you use customer data for training or “service improvement”, and how is that technically enforced?
- What is your sub-processor list, and how do you notify changes?
- What is your metering model down to line items (tokens, tools, retrieval)?
- What evidence do you provide for risk management (ISO 42001 alignment, NIST AI RMF mapping, internal assessments)?
- What is your incident taxonomy for model failures, and how fast do you notify?
- Can you produce a customer-specific evidence pack suitable for audit and regulator questions?
If the supplier cannot answer these crisply, your negotiating leverage is already gone.
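The questions above work as a hard gate: any answer that is not a crisp, evidenced “yes” blocks entry to negotiation. A minimal sketch; the question keys are illustrative shorthands for the list above:

```python
# Hypothetical gate keys, one per due diligence question.
GATE = [
    "model_version_pinning",
    "retention_defaults_and_range",
    "training_use_technically_enforced",
    "subprocessor_list_and_notice",
    "line_item_metering",
    "risk_framework_evidence",
    "incident_taxonomy_and_notification",
    "customer_evidence_pack",
]

def gate_result(answers):
    """Return (passed, open_items): any missing or non-'yes' answer blocks entry."""
    open_items = [q for q in GATE if answers.get(q) is not True]
    return (len(open_items) == 0, open_items)

passed, gaps = gate_result({q: True for q in GATE})
assert passed and gaps == []
```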
7) Build exit from day one: the “replatform in 60 days” test
Procurement should require an exit design that can be executed inside a standard termination window.
Minimum exit deliverables:
- exports: prompts, tool definitions, routing logic, evaluation pack, conversation logs (where permitted), embeddings if owned by you
- deletion: time-bound deletion plus a certificate
- transition: knowledge transfer sessions and migration support at agreed rates
If the supplier cannot support a credible replatform scenario, treat the contract as a long-term strategic dependency and price the risk accordingly.
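The 60-day test is easier to enforce when the deliverables are tied to dated milestones inside the termination window. A minimal sketch; the milestone split is an illustrative assumption, not a standard:

```python
from datetime import date, timedelta

# Deliverable groups mirror the minimum exit list above.
EXIT_DELIVERABLES = {
    "exports": ["prompts", "tool_definitions", "routing_logic",
                "evaluation_pack", "conversation_logs", "embeddings"],
    "deletion": ["time_bound_deletion", "deletion_certificate"],
    "transition": ["knowledge_transfer", "migration_support"],
}

def exit_milestones(termination_date, window_days=60):
    """Spread the three deliverable groups across the termination window."""
    return {
        "exports_due": termination_date + timedelta(days=window_days // 3),
        "deletion_due": termination_date + timedelta(days=2 * window_days // 3),
        "transition_due": termination_date + timedelta(days=window_days),
    }

# Hypothetical termination date for illustration.
plan = exit_milestones(date(2025, 1, 1))
```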
8) The operating model: procurement becomes a control plane
GenAI procurement is not a one-off event. It is a control plane that links:
- commercial governance (run rate, caps, unit economics)
- risk governance (model changes, incidents, compliance evidence)
- delivery governance (quality, task success rates, adoption without shadow AI)
A simple structure that works in large enterprises:
- quarterly vendor governance chaired by Procurement, with CIO and CISO delegates
- monthly FinOps for AI review with engineering
- a maintained “AI supplier dossier” per vendor: classifications, sub-processors, evidence pack, cost baselines, incident history
This is how you keep speed without losing control.
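The “AI supplier dossier” can be kept as a structured record with required sections, so a gap is a visible governance action rather than a missing slide. A minimal sketch; all field names and values are hypothetical:

```python
# Hypothetical per-vendor dossier record; field names are illustrative.
dossier = {
    "vendor": "ExampleAI Ltd",
    "classification": {"eu_ai_act_role": "provider", "risk_class": "limited"},
    "sub_processors": ["cloud-host-1", "observability-platform-2"],
    "evidence_pack": {"iso_42001_mapping": True, "nist_ai_rmf_mapping": True},
    "cost_baselines": {"summarise_ticket": 0.02, "draft_reply": 0.05},
    "incident_history": [],
}

REQUIRED_SECTIONS = ["vendor", "classification", "sub_processors",
                     "evidence_pack", "cost_baselines", "incident_history"]

def dossier_gaps(d):
    """Sections missing from a vendor dossier: each gap is an open action."""
    return [s for s in REQUIRED_SECTIONS if s not in d]

assert dossier_gaps(dossier) == []
```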
Where Strategic AI Guidance Ltd fits
If you want this playbook operationalised, Strategic AI Guidance Ltd supports procurement teams with: GenAI contracting standards, cost-per-task unit economics, supplier due diligence packs, and governance evidence models aligned to ISO/IEC 42001 and NIST AI RMF, with regulatory readiness for EU AI Act and GDPR obligations.