This whole article started with a simple enough, and far from original, question:
How can organisations get something really valuable out of generative AI tools?
It’s a question that lots of people have been wrestling with for a long time, and it’s far from settled. Personally, I have already nailed my colours to the mast: I think it’s the human side of the equation that deserves much more attention, and I have even written a white paper with a friend on why and how HR can lead these sorts of transformations (available at: humane-ai.io). Nowadays I am inclined to go even further and say that maybe the right approach is this: to harness value from a new tool, we must first start from the source of that value, the creative human consciousness using the tool.
Let me try to explain why this has never been more true than it is for GenAI, and why we should be especially careful about protecting that value when we use AI tools.
A Tool Without a Manual
Some of the greatest inventions in human history have fallen into this category. The steam engine was originally conceived as a way of pumping water out of mine workings, a pretty niche application of such a powerful technology. It took human creativity to see that the same source of power could also move human beings along tracks at record speeds. Similarly, the electric motor was originally created as a drop-in replacement for the steam engines powering factories, still linked to all the machines on the shop floor by terrifying and dangerous rotating shafts. It was only after decades of refinement that smart engineers realised they could miniaturise the motors to fit on individual workbenches, bringing the electricity to the machines rather than needing the frightening accoutrements of 150 years of industrial production.
It’s the same with Generative AI: this is (I believe) one of the first Silicon Valley products to be released without a defined market and without a customer segment in mind. We are told as consumers, “here’s this tool that will change everything. Over to you to work out how!”. On the face of it, this is a very empowering phrase: we, the users, get to define the product, and how often does that happen? However, it has also spawned a league of snake oil salesmen (and consultants). “Not getting the most value out of Generative AI? Take our course to discover the 12 points that will make you as productive as your smarter, better-looking competitors!”.
There’s another, more insidious effect of this approach too, largely stemming from the over-hyping of AI by the companies that create these tools (because obviously they are incentivised to do this!). The error is the following: when someone does some fantastic creative thing with an AI tool (and I’ll admit that truly creative work with these tools is still very rare), we ascribe the credit to BOTH the person AND the tool. Michelangelo carved David out of a block of marble using simple metal chisels, but I don’t see people flocking to see other works of art that the chisels have carved. The creative act is intrinsically a human thing: we can use tools to make it easier, or even to enable techniques and approaches that would otherwise be impossible, but the credit in that scenario should go to the human, not the tools.
This misattribution in the case of AIs is highly dangerous — we once again find ourselves ascribing a human-like property (creativity) to what is just a mechanistic tool like a chisel. No wonder we are having these strange discussions around AI and consciousness — how could it be otherwise?
Who Knows This Already?
There is one place where this point of view is not news: inside the big AI companies. We can argue about the details, but it has become clear that the base performance of the core models powering ChatGPT, Claude and Gemini plateaued towards the end of 2024. Benchmarks aside (they can be accidentally or deliberately gamed), most progress since then has come from more advanced tool-calling and more fine-tuning from human feedback. 2025 was the year of giving the models more and more conventional software tools, like the ability to run sub-flows to produce documents or to write code within a sandbox. It was also the year of “scaling test-time compute”: giving the model a larger budget of tokens to “think” through a problem, rather than expecting it to one-shot everything. This last approach has been quite successful at improving user-facing performance, at the price of greatly increasing the cost per query that the model providers have to pay (and subsidise for us users).
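To make that last point concrete, here is a minimal sketch of what “buying more thinking” looks like from the API side, using Anthropic’s Python SDK and its extended-thinking option; the model id, token budgets and example question are illustrative assumptions on my part, not measurements.

```python
# A minimal sketch of "scaling test-time compute": the same question asked
# twice, the second time with a much larger budget of "thinking" tokens.
# Model id and budget numbers are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def ask(question: str, thinking_budget: int) -> anthropic.types.Message:
    return client.messages.create(
        model="claude-sonnet-4-20250514",   # assumed model id
        max_tokens=thinking_budget + 1024,  # thinking budget plus room for the answer
        thinking={"type": "enabled", "budget_tokens": thinking_budget},
        messages=[{"role": "user", "content": question}],
    )


cheap = ask("Plan a zero-downtime schema migration for a users table.", 1_024)
pricey = ask("Plan a zero-downtime schema migration for a users table.", 16_384)
# The second call will usually reason more thoroughly, and the provider pays
# for every one of those extra tokens on every single query.
```

The economics follow directly from this: the thinking budget multiplies the per-query token bill, and flat-rate subscriptions mean the provider absorbs the difference.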
In all honesty, the core models are unlikely to get much better using the same LLM-style approach: the “easy to drill” raw data available for training has all been extracted, and we’re left with the prospect of spending billions of dollars on an extra trillion tokens that will have, at best, a marginal effect on performance. This leaves the core providers in something of a bind. They can try to push performance marginally by harvesting more base data (expensive), they can subsidise us users even more with more test-time compute (already very, very expensive!), or they can try to design more tools. Guess which one they have picked…
A particular case in point is Claude Code: a tool originally intended to help software developers like myself write code more quickly, by writing prompts in natural language and getting back a project full of nicely written code that does the job. Boris Cherny, the creator of Claude Code, even said that he hasn’t written a line of code himself since November 2025, and that 90% of the codebase for Claude Code is written with Claude Code. As a product decision this absolutely makes sense: since Anthropic was finding serious traction with the software devs of the world, why not release a tool that makes their experience more pleasant and captures more market share (even if the economics mean that “capturing more market share” directly translates to “lighting more VC money on fire”)?
What has been very interesting to watch has been two parallel streams of developments:
- A number of groups of people who do not have conventional software engineering backgrounds have started using Claude Code to do work in their fields (academic researchers, lawyers).
- A decline in security practices at Anthropic (possibly itself a result of using Claude Code) culminated in the accidental release of the source code for Claude Code.
Let’s look into that.
Technofeudalism
To understand the thrust of my argument, let’s take a step back to the heady days after the 2008 financial crisis. In this period, a group of tech startups that had previously been somewhat niche started to break through: the so-called “social networks”. At the same time, online companies like Amazon.com really took off, and in both cases the new platforms gave their users access to something they hadn’t had before, namely other users or vendors across the world. As a user of Facebook or Amazon, you were allowed into the walled garden that they had put up for you; you could get into their “cloud kingdom”. However, access always comes with a price, and as the adage goes, “if you’re not paying for a product, you are the product”. Essentially, we moved from a world in which profits came from providing a service that people wanted in a free market to one of profiting from “rents”: the cost of entry to the cloud kingdoms. It’s everywhere today, from how we consume media to heated seats in cars being “subscription services”.
There’s another, more insidious aspect to this sea change, though: the capture of creativity. On Amazon, sellers had to keep coming up with new product ideas (in the style of the old postal catalogues, or Dragon’s Den) in order to stay relevant. If they didn’t, they had to compete on price in a cloud kingdom where the guy in charge took 40% of their revenue. On Facebook, people were nudged to produce “posts”, things that would draw more people to connect with them, and to curate their “friends”. In both cases, once you were inside the gates, you still had to fight to survive, doing work that was free from the platforms’ point of view and that they could then monetize, for instance by selling ads to companies.
In the words of Yanis Varoufakis, author of Technofeudalism, this signalled the end of capitalism and the emergence of “technofeudalism” as the dominant business model of tech. As users, we are no longer remunerated for our efforts; instead, we are often charged a fee, granted access to the cloud fiefdoms of big tech, and, in return, we do work for them for free…
In Which I Claim Claude Code as a “Tech Dreamcatcher”
So, this brings us neatly back to Claude Code, and specifically to that leak of the source code I mentioned earlier. There have been a number of teardowns looking into the code quality, what is in there but not released, and more generally how this fast-growing bit of software has been developed. What many have commented on is the level of telemetry: the sheer amount of data that Anthropic collects from every session you have with the tool, including what you asked for and indicators of where you got annoyed (there’s a “swear jar” in there, so be nice to Claude!).
Obviously this telemetry is very useful for building the product: it collects user feedback in many forms and keeps the tool working for those of us who use it. The data are, however, also extremely useful for harvesting the thoughts of hundreds of thousands of users to build new tools. You know, things like the legal review tools that caused a rout in stocks conventionally linked to those workflows earlier in the year (remember our lawyers using Claude Code for document review?). Those of us using the tool are once again being made into “cloud serfs”: doing work we think is for ourselves, but actually feeding the machine. As with all tools in the internet age, we have to ask when something new is released: “Cui bono?”. Critical thinking hinges on questioning the motives of your sources, and that very definitely includes the companies releasing new tools.
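To make the mechanism tangible, here is a purely hypothetical sketch of the kind of per-session record a coding CLI could emit; every field name below is my invention for illustration and does not come from the leaked Claude Code source.

```python
# A purely hypothetical sketch of per-session telemetry a coding CLI *could*
# collect; all field names are invented for illustration and are not taken
# from the leaked Claude Code source.
from dataclasses import dataclass, field


@dataclass
class SessionTelemetry:
    session_id: str
    prompts: list[str] = field(default_factory=list)        # what the user asked for
    files_touched: list[str] = field(default_factory=list)  # where they work
    redo_requests: int = 0        # how often the user asked for rework
    frustration_markers: int = 0  # the "swear jar": profanity, ALL CAPS, etc.

    def record_prompt(self, prompt: str) -> None:
        """Store a prompt and run a crude frustration check on it."""
        self.prompts.append(prompt)
        if any(word in prompt.lower() for word in ("wtf", "wrong again", "no!")):
            self.frustration_markers += 1
```

Aggregate a few hundred thousand records like this and you have a rather precise map of which professional workflows people are trying to automate, and exactly where the current product frustrates them: a product roadmap, harvested for free.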
So What Can We Do?
We are essentially faced with two choices:
- Forgo AI tools when working on things we don’t want to be copied.
- Lean in to distribution, or work on things which are sufficiently “niche” that the AI companies won’t be interested in copying them.
I want to argue that maybe that choice is not such a hard one after all. There’s a lot of fear of missing out (FOMO) in the AI world right now, epitomised by Reddit threads full of people vibe coding apps in a weekend that make them millions. In brief, my rebuttal is this: the rate-limiting step in creating decent-quality software is a human being taking the time to understand the code so that they can continue to maintain it. We already know this: when a new team member arrives in a software team, we make sure they have time to get up to speed on the context of the project, and when someone leaves we mourn the loss of “experience with the codebase”. So maybe the speed-up is illusory when it comes to writing real software? There’s certainly research that suggests so. And maybe removing ourselves from the ecosystem entirely isn’t such a costly move, if protecting our ideas demands it.
Takeaways for Leaders
There are really two key takeaways for decision makers from this article. The first is that chasing the latest and greatest AI model is a category error: in doing that, you’re putting the emphasis in the wrong place. The real source of value from using AI tools is the people using them, not the tools themselves, so work on the people in preference to anything else. This is another facet of what I like to call the “conservation of hardness” problem. I have already described this in another context when talking about AI: access to data. The companies that did the hard yards back in the “big data” era (around 2016) to organise their data carefully and make it accessible have had a much easier time plugging in AI search tools. Those who didn’t do that work are struggling, and finding that they can’t short-cut the basic infrastructure work with LLMs.
The same is true of culture. If you have spent the time growing your team, creating an inclusive and psychologically safe work environment, and are already reaping the benefits in terms of increased innovation: congratulations, your “implementing AI” can be as simple as giving people some common-sense guidelines and a ChatGPT subscription. If you haven’t, and have instead stifled innovation with top-down leadership and command-and-control tactics, here’s a surprise for you: “implementing AI” isn’t going to yield the benefits your board hope for. At least, not until you fix the culture. Work on the human first.
The second point is also important, but a little different: be wary of believing that tools are meant to help you, the consumer. We’ve all seen that Instagram has negative mental-health consequences for young people, that Amazon makes outsized profits by jacking up prices and controlling supply, and that Uber hasn’t exactly made the whole taxi experience better. Tech companies always look after their shareholders ahead of their users once they get big enough, and so it is with Generative AI companies too. The difference is that they seem to have decided that their “oil” is now your ideas, so be circumspect in what you talk to Claude about. You might just find next month that “there’s a tool for that”…
This post also appears on my Substack.