If all the models are interchangeable at the API layer, wouldn't they be incentivized to add value at the next level up and lock people in there, to prevent customers from moving to competitors on a whim?
Just the other day, a 2016 article was reposted here [https://www.hackerneue.com/item?id=46514816] on the 'stack fallacy', where companies who are experts in their domain repeatedly try and fail to 'move up the value chain' by offering higher-level products or services. The fallacy is that these companies underestimate the essential complexities of the higher level and approach the problem with arrogance.
That would seem to apply here. Why should a model-building company have any unique skill at building higher-level integration?
If their edge comes from having the best model, they should commoditize the complement and make it as easy as possible for everyone to use (and pay for) their model. The standard API lets them do exactly that, letting them reap 'free' benefits from community integrations and multi-domain tooling.
If their edge does not come from the model – if the models are interchangeable in performance and not just API – then the company will have deeper problems justifying its existing investments and securing more funding. A moat of high-level features might help plug a few leaks, but this entire field is too new to have the kind of legacy clients that keep old firms like IBM around.
The non-SOTA companies will eat more of this pie and squeeze the SOTA companies' margins.
Models are pretty much democratized. I use Claude Code and opencode, and these days I get more work done with GLM or Grok Code (via opencode). The Z.ai (GLM) subscription is well worth it.
Also, mixing models is the way to go: small and large ones, from different providers. This isn't like cloud infra, where you have to plan your usage up front. Models are pretty much text in, text out (at least for text-only models), and the minor API differences are easy to work with.
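To make 'text in, text out' concrete: most providers expose an OpenAI-compatible endpoint, so switching models is mostly a matter of swapping a base URL and a model name. A minimal sketch in Python; the base URLs, env var names, and model names are placeholders, check each provider's docs for the real ones:

    # Minimal sketch of mixing models across providers via
    # OpenAI-compatible endpoints. All base URLs, env var names,
    # and model names below are placeholders, not real values.
    import os
    from openai import OpenAI

    PROVIDERS = {
        # cheap/small model for quick edits
        "small": {
            "base_url": "https://api.provider-a.example/v1",
            "key_env": "PROVIDER_A_API_KEY",
            "model": "small-model",
        },
        # stronger/larger model for hard problems
        "large": {
            "base_url": "https://api.provider-b.example/v1",
            "key_env": "PROVIDER_B_API_KEY",
            "model": "large-model",
        },
    }

    def ask(tier: str, prompt: str) -> str:
        cfg = PROVIDERS[tier]
        client = OpenAI(base_url=cfg["base_url"],
                        api_key=os.environ[cfg["key_env"]])
        resp = client.chat.completions.create(
            model=cfg["model"],
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    print(ask("small", "Rename this function to snake_case."))
    print(ask("large", "Refactor this module for testability."))

The provider-specific part is three strings; everything else (prompting, tool use, orchestration) is portable, which is exactly why the API layer is such a weak moat.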