I agree. Right now a lot of AI tools are underpriced to get customers hooked, then they'll jack up the prices later. The flaw is that AI does not have the ubiquitous utility that internet access has, and a lot of people are not happy with the performance per dollar TODAY, much less when prices rise 80%. We already see companies like Google raising prices, saying it's for "AI", and we customers can't opt out of the AI and skip the fee.
At my company we've already decided to leave Google Workspace in the spring. GW is a terrible product with no advanced features, garbage admin tools, uncompetitive pricing, and now AI shoved in everywhere and no way to granularly opt out of a lot of it. Want spell check? Guess what, you need to leave Gemini enabled! Shove off, Google.
I'm going through the process of buying a home, and the amount of help it's given is incredible. Analyzing disclosures, loan estimates, etc. Our accountant charges $200 an hour to basically confirm all the same facts that ChatGPT already gave us. We can go into those meetings prepped with 3 different scenarios that ChatGPT already outlined, and all they have to do is confirm them.
It's true that it's not always correct, but I've also had paid specialists like real estate agents and accountants give me incorrect information, at the cost of days of scheduling and hundreds of dollars. They also aren't willing to answer questions at 2am.
I agree. Wait until it's $249 a month. You'll feel differently.
Competition and local SLMs will also provide a counter to massive price increases.
Or, much like what is going to happen with Alexa, it just dies because the cost of the service is never going to align with “what the market can bear”. Even at $75/mo, the average person will probably stop being lazy and just go back to doing 10 minutes' worth of searching to find answers to basic questions.
The sellers already had an inspection done. The full report is over 100 pages.
Yeah, I think this is wrong. The analogy is more like the App Store, in that there is currently very little to do with the product other than a better Google Search. The bet is that over time (a short time) a more mature ecosystem and tech will unlock much more financially valuable use cases.
We're in the "dial-up era" of AI.
Unlike the smartphone adoption era where everything happened rather rapidly, we're in this weird place where labs have invented a bunch of model categories, but they aren't applicable to a wide variety of problems yet.
The dial-up -> broadband curve took almost a decade to reach penetration and to create the SaaS market. It's kind of a fluke that Google and Amazon came out of the dial-up era - that's probably what investors were hoping for by writing such large checks.
They found chat as one type of product. Image gen as another. But there's really not much "native AI" stuff going around. Everyone is bolting AI onto products and calling it a day (or being tasked by clueless leadership to do it, with even worse results).
This is not AI. This is early-cycle Webvan-type exploration. The idea to use AI in a given domain or vertical might be right, but the tools just don't exist yet.
We don't need AI models with crude APIs. We need AI models we can pull off the shelf, fine tune, and adapt to novel UI/UX.
Adobe is showing everyone how they're thinking about AI in Photoshop - their latest conference showed off AI-native UX, and it was really slick. Dozens of image tools (relighting, compositing, angle adjustment) that all felt fast, magical, and approachable for a beginner. Nobody else is doing that. They're just shoving a chat interface in your hands and asking you to deal with it.
We're too early. AI for every domain isn't here yet.
We're not even in the dial-up era, honestly.
I'd expect the best categories of AI to invest in with actually sound financials will be tool vendors (OpenRouter, FAL, etc.) and AI-native PLG-type companies.
Enterprise is not ready. Enterprise does not know what the hell to do with these APIs.
On any given day, I always have access to free LLM help.
Since all the models are converging onto the same level of performance, I mostly can't even tell responses from ChatGPT and Claude apart.
> Right now a lot of AI tools are underpriced to get customers hooked, then they'll jack up the prices later.
Good luck with that. I mean it.
The ChatAI TAM is now so saturated with free offerings that the first supplier to blink will go out of business before they are done blinking.
I see people (like the sibling reply to the parent) boasting about the amount of value they get from the $20/mo subscription, but I don't see how that is $20 better than just using the free ChatAIs.
The only way out of the red for ChatAI products is to add in advertising slowly; they have to boil the frog. A subscription may have made sense when ChatGPT was the only decent game in town. Subscriptions don't make sense now - I can get 90% of the value of a ChatAI for 0% of the cost.
Absolutely. Not only are most AI services free, but even the paid portion is coming from executives mandating that their employees use AI services. It's a heavily distorted market.
And a majority of those workers do not reveal their AI usage, so they either take credit for the faster work or use the extra time for other activities, which further confounds the impact of AI.
This is also distorting the market, but in other ways.
People are missing the forest for the trees here. Being the go-to consumer Gen AI provider is a trillion-plus dollar business. However many tens of billions you waste building unnecessary data centers, it's a rounding error. The important number is your odds of becoming that default provider in the minds of consumers.
I used ChatGPT for everyday stuff, but in my experience the responses got worse and I had to wait much longer to get them. I switched to Gemini and the answers were better and came much faster.
I don’t have any loyalty to Gemini though. If it gets slow or another provider gives better answers, I’ll change. They all have the same UI and they all work the same (from a user’s perspective).
There is no moat for consumer genAI. And did I mention I’m not paying for any of it?
It’s like quick commerce: sure, it’s easy to get users by offering them something expensive off of VC money. The second they raise prices or degrade the experience to make the service profitable, the users will leave for another alternative.
I haven't seen any evidence that any Gen AI provider will be able to build a moat that allows for this.
Some are better than others at certain things over certain time periods, but they are all relatively interchangeable for most practical uses and the small differences are becoming less pronounced, not more.
I use LLMs fairly frequently now and I just bounce around between them to stay within their free tiers. Short of some actual large breakthrough, I never need to commit to one, and I can take advantage of their massive spending and wait it out a couple of years until I'm running a self-hosted local model, with a Cloudflare tunnel if I need to access it from my phone.
And yes, most people won't do that, but there will be a lot of opportunity for cheap providers to offer it as a service with some data center spend, nowhere near the massive amounts OpenAI, Google, Meta, et al. are burning now.
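To make that concrete, here's a rough sketch of the self-hosted piece I mean, assuming something like Ollama serving a model on its default local port (the model name and endpoint here are just placeholders, and a Cloudflare tunnel would simply expose that same port to my phone):

    # Sketch: query a locally hosted model via Ollama's default HTTP API.
    # Assumes `ollama serve` is running and a model (e.g. "llama3") has been pulled;
    # swap OLLAMA_URL for the Cloudflare tunnel hostname to reach it from a phone.
    import requests

    OLLAMA_URL = "http://localhost:11434/api/generate"

    def ask_local_model(prompt: str, model: str = "llama3") -> str:
        resp = requests.post(
            OLLAMA_URL,
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    if __name__ == "__main__":
        print(ask_local_model("Summarize the trade-offs of running an LLM locally."))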
As a regular user, it becomes increasingly frustrating to have to remind each new chat “I’m working on this problem and here’s the relevant context”.
GenAI providers will solve this, and it will make the UX much, much smoother. Then they will make it very hard to export that memory/context.
If you’re using a free tier I assume you’re not using reasoning models extensively, so you wouldn’t necessarily see how big of a benefit this could be.
LLMs complete text. Every query they answer is giving away the secret ingredient in the shape of tokens.
Markets that have a default provider are basically outliers (desktop OS, mobile OS, search, social networks, etc.).
All other industries don't have a single dominant supplier who is the default provider.
I am skeptical that this market is going to be one of the outliers.
All the other markets with a default provider basically rely on network effects to become and remain the default provider.
There is nothing here (in this market) that relies on network effects.
I do wonder: if you (and the commenter you replied to) think this is a good thing, will you be OK with a data center springing up in your neighbourhood, driving up water or power prices, and emitting CO2? And then, if SOTA LLMs become efficient enough to run on a smartphone, will you be OK with a data center bailout coming from your tax dollars?
So voice assistants backed by very large LLMs over the network are going to win even if we solve the (substantial) battery usage issue.
I use LLMs all day and I highly doubt this. I’d love to hear your argument for how this plays out.
Past successes like Google encourage hope in this strategy. Sure, it mostly doesn't work. Most of everything that VCs do doesn't work. Returns follow a power law, and a handful of successes in the tail drive the whole portfolio.
The key problem here doesn't lie in the fact that this strategy is being pursued. The key problem is that it is rare for first-mover advantages to last with new technologies. That's why Netscape and Yahoo! aren't among the FAANGs today. The long-term wins go to whoever successfully creates a sufficient moat to protect lasting excess returns. And the capabilities of each generation of AI leapfrog the last so thoroughly that nobody has figured out how to create such a moat.
Today, 3 years after launching the first LLM chatbot, OpenAI is nowhere near as dominant as Netscape was in late 1997, 3 years after launching Netscape Navigator. I see no reason to expect that 30 years from now OpenAI will be any more dominant than Netscape is today.
Right now companies are pouring money into their candidates to win the AI race. But if the history of browsers repeats itself, the company that wins in the long term would launch about a year from now, focused on applications on top of AI. And its entrant into the AI wars wouldn't get launched until a decade after that! (Yes, that is the right timeline for the launch of Google, and for Google's launch of Chrome.)
Investing in Silicon Valley is like buying a positive-EV lottery ticket. An awful lot of people are going to be reminded the hard way that it is wiser to buy a lot of lottery tickets than it is to sink a fortune into a single big one.
Incorrect. There were about 150 million Internet users in 1998, or 3.5% of the total population. The number grew 10 times by 2008 [0]. Netscape had about 50% of the browser market at the time [1]. In other words, Netscape dominated a small base and couldn't keep it up.
ChatGPT has about 800 million monthly users, or already 10% of the current total population. Granted, not exclusively. ChatGPT is already a household name. Outside of early internet adopters, very few people knew who Netscape was or what Navigator was.
[0] https://archive.globalpolicy.org/component/content/article/1...
[1] https://www.wired.com/1999/06/microsoft-leading-browser-war/...
According to https://en.wikipedia.org/wiki/Usage_share_of_web_browsers, Netscape had 60-70% market share. According to https://firstpagesage.com/reports/top-generative-ai-chatbots..., ChatGPT currently has a 60% market share.
But I consider the enterprise market a better indicator of where things are going. As https://menlovc.com/perspective/2025-mid-year-llm-market-upd... shows, OpenAI is one of a pack of significant competitors - and Anthropic is leading the pack.
Furthermore, my point that the early market leaders are seldom the lasting winners is something you can see across a large number of past financial bubbles through history. You'll find the same thing in, for example, trains, automobiles, planes, and semiconductors. The planes example is particularly interesting. Airline companies not only don't have a good competitive moat, but the mechanics of Chapter 11 mean that they keep driving each other bankrupt. It is a successful industry, and yet it has destroyed tons of investment capital!
Despite your quibbles over the early browser market, my broader point stands. It's early days. AI companies do not have a competitive moat. And it is way too premature to reliably pick a winner.
Netscape in 1997/1998 had about 90% of the target market.
OpenAI today has about 30% of the target market, maybe? (Seeing as how every single Windows installation already comes with Copilot chat, it's probably less. Every non-tech user I know has already used Copilot because it was bundled and Windows prompted them into using it with a popup. Only one of those non-tech users has even heard of OpenAI; maybe 50% of them have heard that there are alternatives to Copilot, but they still aren't using those alternatives.)
The local open-source argument doesn't hold water for me -- why does anyone buy Windows, Dropbox, etc. when there are free alternatives?
Installing an OS is seen as a hard/technical task still. Installing a local program, not so much. I suspect people install LLM programs from app stores without knowing if they are calling out to the internet or running locally.
No one buys Windows - it comes with the PC.
If people were shipped blank computers and told to order the OS separately, they wouldn't be buying Windows at the current price point.
See also how all (?) Brits pronounce Gen Z in the American way (i.e. zee, not zed).
You sometimes see this with real live humans who have lived in multiple countries.
Also very common with... most Canadians. We officially use an English closer to British English (zed not zee, honour not honor); however, geographically and culturally we're very close to the US.
At school you learn "X, Y, Zed". The toy you buy your toddler is made for the US and Canadian market and sings "X, Y, Zee", as does practically every show on TV. The dictionary says it's spelled "colour" but most of the books you read will spell it "color". Most people we communicate with are either from Canada or the US, so much of our personal communication is in US English.
But also there are a number of British shows that air here, so some particularly British phrases do sneak into a lot of people's lexicon.
See a similar thing in the way we measure things.
We use Celsius for temperature, but most of our thermostats default to Fahrenheit, and most cookbooks are primarily in imperial units because they're from the US. The store sells everything in grams and kilograms, but most recipes are still in tablespoons/cups/etc.
Most things are sold in metric, but when you buy lumber it's sold in feet, and any construction site is going to be working primarily in feet and inches.
If anything I expect any AI-written content would be more consistent about this than I usually am.
Pay no attention to those fopheads from Kent. We speak proper British English here in Essex.
Some people are not from the USA or England.
Bullet-point hell, and a table that feels like it came straight out of Grok.
I don't. This is simply the "drug dealer" model where the first hit is free. They know that once people are addicted, they will keep coming back.
The question of course is: will they keep coming back? I think they very much will. There are indications that GenAI adoption is already increasing labor productivity at a national scale, which is quite astounding for a technology just 3 years old: https://www.hackerneue.com/item?id=46061369
Imagine a magic box where you put in some money and get more productivity back. There is no chance Capitalism (with a capital "C") is going to let such a powerful growth machine wither on the vine. This mad AI rush is all about that.
I doubt it.
And what if the technology to run these systems locally, without reliance on the cloud, becomes commonplace, as it now is with open-source models? The expensive part is the training of these models more than the inference.