I’ve been in a lot of conversations with CEOs and board members lately where the topic naturally turns to AI, and I’ve noticed something interesting: the questions being asked often reveal more about the questioner’s assumptions than they do about the technology itself. There’s a tendency to treat AI as though it were traditional software: deterministic, reliable, and fundamentally understandable. This is a category error, and it’s one that could prove expensive.

Let me be clear about why I’m writing this. We’re at the beginning of 2026, and the decisions being made right now about AI investments and implementations will shape companies, and even economies, for years to come. The problem is that much of the conversation happens at the wrong level of abstraction: either so caught up in the hype, or so mired in technical detail, that the strategic questions get obscured. What boards actually need is a framework for thinking about AI that acknowledges both its transformative potential and its very real limitations.

The data reality

There’s something that rarely gets discussed in the breathless press coverage: AI, in its traditional form, is extraordinarily hungry for data. I’m talking about the predictive models that actually drive business value, the systems that forecast demand, detect fraud, and optimise pricing. These aren’t magic. They work by finding patterns in historical data and extrapolating those patterns forward. The universal approximation theorem tells us that a neural network can, in principle, approximate essentially any function. What it doesn’t tell us is how much data you need to learn that function in practice.

The answer, frustratingly, is “a lot.” And not just any data: clean, labelled, representative data that actually captures the phenomena you’re trying to predict. This creates a fundamental asymmetry in the AI landscape. Large incumbents with years of accumulated data have a structural advantage that’s very difficult to overcome. Your portfolio company might have a brilliant team and a clever algorithm, but if they don’t have the data to train it on, they’re bringing a knife to a gunfight.
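To make the data-hunger point concrete, here’s a minimal sketch using scikit-learn on synthetic data (every number here is illustrative, not a benchmark): the same model trained on progressively larger slices of history, and the accuracy each slice buys.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "historical business data": 40 features,
# 10 of which actually carry signal about the outcome.
X, y = make_classification(n_samples=20_000, n_features=40,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Train on progressively more history and watch held-out accuracy climb:
# the pattern is learnable, but only with volume.
for n in (100, 500, 2_000, 10_000):
    model = LogisticRegression(max_iter=1_000).fit(X_train[:n], y_train[:n])
    print(f"{n:>6} rows -> test accuracy {model.score(X_test, y_test):.3f}")
```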

This matters for board-level strategy in several ways. First, data acquisition and data quality should be treated as top-level strategic assets, not IT housekeeping. Second, any AI roadmap needs to honestly assess where the training data will come from and whether it’s sufficient. Third, be deeply sceptical of vendors who claim their models can work with limited data. They might be right, but the burden of proof should be on them.

The accuracy problem

Here’s where things get philosophically interesting, and strategically important. Traditional software is deterministic. You put the same input in, you get the same output out. If there’s a bug, you can trace it, fix it, and have confidence that the fix works. AI models are fundamentally different. They’re probabilistic systems that make predictions based on learned patterns, and those predictions come with an irreducible uncertainty.

What does this mean in practice? It means that even a very good AI model will be wrong some percentage of the time. It means that the model’s performance can degrade without warning if the underlying data distribution shifts (a phenomenon charmingly called “concept drift”). It means that you can’t sign off an AI system with the pass/fail testing you’d use for traditional software: its quality is a statistical property that has to be measured, and re-measured, over time.
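What does monitoring for that kind of degradation look like? Here’s a minimal sketch of one common approach, assuming you log the features your model scores in production: a two-sample Kolmogorov–Smirnov test flags when a feature’s live distribution has wandered away from its training distribution. The data here is simulated, and the threshold is a policy choice, not a law.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Stand-ins for one model input: its distribution at training time,
# and the distribution the model actually sees in production today.
training_values = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_values = rng.normal(loc=0.4, scale=1.0, size=5_000)  # quietly shifted

statistic, p_value = ks_2samp(training_values, production_values)
if p_value < 0.01:  # illustrative threshold
    print(f"Possible drift (KS statistic {statistic:.3f}): trigger a model review.")
else:
    print("No significant shift detected on this feature.")
```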

I’ve seen boards approve AI projects with the same governance frameworks they’d use for a new ERP implementation, and this is a mistake. AI systems require different risk management approaches: continuous monitoring of model performance, clearly defined fallback procedures when confidence scores are low, and honest acknowledgment that “the model said so” is not the same as “we know this is correct.”
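A fallback procedure can be as simple as a routing rule. This is a minimal sketch, assuming a scikit-learn-style classifier that exposes predict_proba; the threshold and the routing labels are illustrative policy choices, not a prescription.

```python
def route_decision(model, case, threshold=0.90):
    """Automate only when the model is confident; otherwise escalate."""
    probabilities = model.predict_proba([case])[0]
    confidence = float(probabilities.max())
    if confidence >= threshold:
        return {"route": "automated",
                "prediction": int(probabilities.argmax()),
                "confidence": confidence}
    # Low confidence: fall back to a human, and keep a record of why.
    return {"route": "human_review", "prediction": None, "confidence": confidence}
```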

The really dangerous failure mode here isn’t that the model is obviously wrong — it’s that it’s subtly wrong in ways that aren’t immediately apparent. A fraud detection model that catches 95% of fraud sounds great until you realise that the 5% it misses are the most sophisticated cases, leaving you exposed to the exact threats you most needed to catch. If the incidence of a disease in the population is 1%, a model can get 99% accuracy just by predicting that nobody has the disease. Big numbers don’t necessarily make AI useful.
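The arithmetic behind that last example is worth running once, because it’s the question to ask whenever a vendor leads with a headline accuracy figure:

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

y_true = np.zeros(10_000, dtype=int)
y_true[:100] = 1                      # 1% of patients actually have the disease
y_pred = np.zeros(10_000, dtype=int)  # a useless model: predict "healthy" for everyone

print(f"accuracy: {accuracy_score(y_true, y_pred):.1%}")  # 99.0%
print(f"recall:   {recall_score(y_true, y_pred):.1%}")    # 0.0%: every sick patient missed
```

The right question is never “how accurate is it?” but “accurate against what baseline, and which errors does it make?”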

The generative shortcut (and why it’s dangerous)

Now, you might be thinking: “But what about ChatGPT? What about Claude? These systems seem to work without us having to train them on our specific data.” And you’d be right. Generative AI has fundamentally changed the economics of AI deployment. These foundation models have been trained on essentially the entire public internet, and they can be applied to new problems with minimal customisation.

This is genuinely transformative, and I don’t want to downplay it. But the shortcut comes with risks that boards need to understand:

  1. Repeatability is not guaranteed. Ask the same question twice and you might get different answers. This is fine for creative brainstorming; it’s a problem if you’re trying to build a reliable business process. The stochastic nature of these models means that audit trails become complicated and regulatory compliance becomes harder to demonstrate (see the sketch after this list).

  2. Plausibility is not accuracy. Generative models are optimised to produce text that sounds correct, not text that is correct. This is the famous “hallucination” problem, and it’s not a bug that can be patched out. It’s intrinsic to how these models work. They will confidently cite papers that don’t exist, invent statistics, and fabricate historical events — all while sounding entirely authoritative.

  3. Explainability is limited. When a predictive model makes a decision, you can often interrogate it to understand why: which features drove the prediction, how confident the model is. Generative models are vastly more opaque. Good luck explaining to a regulator exactly why your AI-powered system made a particular recommendation.

  4. Predictive and generative are different beasts. There’s a tendency to conflate these capabilities because they both get labelled “AI,” but they’re suited to very different problems. Generative AI excels at synthesis, summarisation, and creative tasks. If you need to predict which customers will churn next month, you still need a predictive model trained on your specific data.
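On the repeatability point, here’s a minimal sketch of the kind of probe worth running before building a business process on these models. It assumes the OpenAI Python SDK and an illustrative model name; note that even temperature=0 with a fixed seed is documented as best-effort determinism, not a guarantee, which is exactly why audit trails get complicated.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
question = "Summarise our refund policy in one sentence."

answers = set()
for _ in range(5):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": question}],
        temperature=0,        # reduces, but does not eliminate, randomness
        seed=42,              # best-effort reproducibility only
    )
    answers.add(response.choices[0].message.content)

# Anything other than 1 here is your audit-trail problem in miniature.
print(f"{len(answers)} distinct answer(s) from 5 identical requests")
```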

The ethics minefield

Let me tell you about a conversation I had recently with a founder who was very excited about their AI-powered recruitment tool. The pitch was compelling: use AI to screen CVs more efficiently, reduce human bias, find candidates that traditional processes would miss. The problem? When I asked about the training data, it turned out they had purchased data from sources they couldn’t fully verify, and planned to scrape LinkedIn (in breach of its terms of service) to patch the gaps.

This is not the rare edge case you might imagine. The hunger for training data creates powerful incentives for data collection practices that are, at best, ethically questionable and, at worst, illegal. The regulatory landscape is tightening fast: GDPR in Europe, state-level privacy laws proliferating in the US, and new AI-specific regulations emerging globally.

What should boards be doing? Three things:

  • Demand data provenance documentation. Where did the training data come from? What consent mechanisms were in place? Can this be demonstrated to a regulator?
  • Push for data minimisation by default. The old instinct was to collect everything and figure out what to do with it later. This approach is now a liability. Collect only what you need, delete what you don’t, and be able to demonstrate why you need what you keep.
  • Build robust guardrails around PII. Personally identifiable information requires special handling at every stage: collection, storage, processing, and deletion. This isn’t just about compliance; it’s about building systems that your customers can trust. A minimal sketch of one such guardrail follows this list.
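Here’s a deliberately minimal sketch of that last guardrail: scrubbing obvious identifiers before text leaves your systems, whether into logs or a third-party model. Real deployments need far more than two regular expressions (names, addresses, account numbers), but the shape is the point.

```python
import re

# Illustrative patterns only; production systems need a proper PII detector.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matches with a typed placeholder before logging or API calls."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or +44 20 7946 0958."))
# -> Contact Jane at [EMAIL] or [PHONE].
```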

The price question

There’s an elephant in the room: the current economics of generative AI are almost certainly unsustainable. The major providers — OpenAI, Anthropic, Google — are engaged in a land-grab, pricing their services to drive adoption rather than to reflect actual costs. The infrastructure required to run these models is staggeringly expensive, and someone (likely us end users) is going to have to pay for it eventually.

My strong expectation is that we’ll see significant price increases for generative AI services over the next 12-24 months. Some of the business cases that look compelling today will look very different when the API costs triple. This isn’t a reason to avoid the technology — it’s a reason to build your strategy with realistic long-term cost assumptions rather than treating current promotional pricing as the baseline.

Boards should be asking: “What does our business case look like if AI costs increase by 3x? By 5x?” If the answer is “the project no longer makes sense,” that’s important information to have now rather than later.
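That stress test fits in a few lines. Every number below is an invented placeholder; substitute your own business case and see which multiplier kills it.

```python
annual_benefit = 1_200_000   # e.g. labour saved by the AI-assisted workflow
api_cost_today = 300_000     # annual model/API spend at promotional pricing
other_costs = 200_000        # integration, monitoring, governance

for multiplier in (1, 3, 5):
    net = annual_benefit - (api_cost_today * multiplier + other_costs)
    print(f"API costs x{multiplier}: net annual benefit {net:>10,}")
```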

Strategic recommendations for 2026

First, cultivate healthy scepticism without becoming a Luddite. AI is genuinely transformative, and companies that ignore it will be left behind. But not every problem needs an AI solution, and not every AI solution will work. Your job is to ask the hard questions that cut through the hype — not to be the person who says “no” to everything, but to be the person who ensures that “yes” is based on substance rather than FOMO.

Second, invest in data infrastructure now. Whatever your portfolio companies end up doing with AI, they’ll need clean, well-organised, ethically sourced data to do it with. This is boring, unglamorous work that doesn’t make for exciting investor updates, but it’s the foundation that everything else rests on.

Third, build AI governance capabilities. This means people who understand both the technology and the business context, processes for evaluating and monitoring AI systems, and clear accountability for when things go wrong. Don’t outsource this entirely to vendors or consultants — you need internal capacity, even if it’s not someone’s full-time job.

Fourth, watch the regulatory landscape closely. AI regulation is coming fast. The EU AI Act is already in force, and similar frameworks are emerging globally. Companies that treat compliance as an afterthought will find themselves in trouble.

Fifth, plan for cost volatility. Build flexibility into your AI strategies. Avoid deep lock-in to specific providers where possible. Model your business cases with conservative cost assumptions.

The companies that will win in the AI era won’t necessarily be the ones that adopt fastest. They’ll be the ones that adopt smartest. Your role as a board member is to ensure that your portfolio companies are asking the right questions, building on solid foundations, and making decisions based on realistic assessments rather than vendor promises or competitive anxiety.

AI isn’t magic. It’s a powerful set of tools with specific capabilities and limitations. Boards that understand those limitations, and ask the hard questions that flow from them, will serve their companies far better than those that simply wave through whatever the technology team puts in front of them.

This post also appears on my Substack.