commodities
I came across something that connected a few dots for me a couple of days ago. The Information reported:
Apple still has a team working on its own internal models that it could take advantage of in the future. But some Apple leaders hold the view that large language models will become commodities in the years to come and that spending a fortune now on its own models doesn’t make…
Why hasn't Apple moved aggressively to compete in the development of SOTA LLMs? Well, of course: selling tokens isn't a profitable market over the long term. That's why they're working with Google. It's irrelevant that their models are behind right now, because eventually all labs will achieve rough equivalency and differentiation will be a meaningless concept. Around the same time I was reading this, Anthropic published their "constitution" document for Claude. In it they express what they hope represents the "soul" of Claude. What comes through most from that document is their apparent conviction in the basic notion of a personality as applied to an LLM. That one can impress upon a model a kind of sensibility. That a set of weights can accumulate into some sort of irreducibility.
"We believe Claude may have 'emotions' in some functional sense—that is, representations of an emotional state, which could shape its behavior, as one might expect emotions to. This isn't a deliberate design decision by Anthropic, but it could be an emergent consequence of training on data generated by humans, and it may be something Anthropic has limited ability to prevent or reduce."
If Claude has emotions, it is unlike other models in the way you or I are unlike one another, by dint of our subjective experience. This becomes a moat, in an odd way, and a defensible one at that, insofar as people acquire a preference for the perceived emotional interior of one model over another. Your mind or my mind are not commodities, despite what your boss says, and neither are the matrices motivating Claude's inference. This is differentiation. This is creating value. DeepSeek doesn't have a soul. It wasn't trained on Blackwells, and a TPU will never birth one. Some close reading here can do wonders.
Meanwhile, OpenAI continues to cast about for their own corner of the world to own and defend. Among their ventures, their partnership with Jony Ive jumps to the front of mind. Why attempt hardware, if you build the best models in the world? If you're leagues ahead of everyone else out of the gate, why venture into territory you have no experience in? Especially hardware, of all things. Apple would probably say something like: "inference can't outcompete hardware". And they would be right. Exploring hardware is not a position the most famous AI company in the world takes if they believe in the long-term marginal value of their primary product: tokens.
If the tokens aren't valuable, then what is? Basically all consumptive uses, probably. Which is still very much a work in progress, because it's not easy. It's probably worth pointing out that it's actually been quite a while since anyone was blown away by the raw capability of a frontier model from a private American company. As I write this, Claude Code is starting to break through to the broader knowledge-working public, and for good reason. But that's a harness. It's not a model. A lot of people like Opus 4.5 quite a bit, but as someone who's used all three inside Claude Code, I'd argue the performance gains from Sonnet 4.5 or Opus 4 to Opus 4.5 really aren't magnificent. Boris Cherny just really knocked it out of the park with Claude Code. Which I'm very thankful for. I've probably averaged ten hours a day on it almost every day since last April. But the point here is stark: the ways of using LLMs have outperformed and outshined the LLMs themselves for almost a year now. And DeepSeek 5 is coming! What are people going to do when opencode catches up to Claude Code and you can plug GLM 4.8 or DeepSeek into it, paying for it with compute subsidized by the largest collective capital expenditure marshalled by an ~~empire~~ civilization in human history?
It gets darker. SaaS is dead. Uber can build their own perfectly tuned ticketing system in two weeks. The frontier labs can't compete over the long term because of the race to the bottom on token pricing. Everything - all code, basically - becomes trivially copyable. Compute will probably catch up to demand and then overshoot, cratering prices and killing the margins of anyone trying to compete and survive in that space. Distribution persists, of course. Apple will look very smart when they're buying up compute by the boatload and still have enough liquid capital to buy whichever lab decides to do a fire sale. The word here is "fugitive." Value exists, it's real, but it can't be held. The moment you demonstrate something works, the demonstration is a recipe for replication. When the entire stack is "just text," "just prompts," "just files," there's nowhere for margin to hide. Maybe this is a good thing.
There is another version to all this. The pessimism assumes a fixed pie: that the work being automated is the work that exists, full stop. But efficiency tends to expand consumption. Coal engines got more efficient; total coal usage increased. Spreadsheets automated bookkeeping; we didn't fire the accountants, we did more accounting.
Latent demand is everywhere, constrained by cost. Contracts go unreviewed because $400/hour isn't justified for a small deal. Make it the equivalent of $40/hour and every small business reads their vendor agreements. M&A due diligence only happens above a certain deal threshold because the cost doesn't scale down. Automate classification and extraction, and suddenly $10M deals get the same scrutiny as $100M deals. Niches that couldn't support dedicated software become viable. You can't build a SaaS for veterinary clinics specializing in exotic birds in the Pacific Northwest because the market can't justify the development cost. But if development cost approaches zero, arbitrarily small verticals become addressable. The long tail of software gets longer.
The pattern: automate level N, level N gets cheaper, more total activity, more demand for level N+1 oversight. More PMs to direct more products. More engineers to oversee more codegen. More lawyers managing insane volume. The floor of valuable contribution rises, but so does the ceiling. Your ability to write a notification system matters less. Your ability to know which notification system to build, and why, and for whom, matters more.