Not sure why you're getting downvoted.

If you speak with AI researchers, they all seem reasonable in their expectations.

... but I work with non-technical business people across industries and their expectations are NOT reasonable. They expect ChatGPT to do their entire job for $20/month and hire, plan, budget accordingly.

12 months later, when things don't work out, their response to AI goes to the other end of the spectrum -- anger, avoidance, suspicion of new products, etc.

Enough failures and you get slowing revenue growth. I think if companies see lower revenue growth (not even drops!), investors will get very, very nervous, and we could see drops in valuations, share prices, etc.


> their expectations are NOT reasonable. They expect ChatGPT to do their entire job for $20/month and hire, plan, budget accordingly.

This is entirely on the AI companies and their boosters. Sam Altman literally says GPT-5 is "like having a team of PhD-level experts in your pocket." All the commercials sell this fantasy.

This is really the biggest red flag: non-technical people (and by extension investors and policymakers) generally don't understand the technology or its limitations.

Of course the valuation is going to be insanely inflated if investors think they are investing in literal magic.

I would blame the business people for being so gullible too.
There's some blame there, sure. But generally people would agree that between a con man and his victims, the con man has the greater moral failing.
In general, yes. But here we're talking about businessmen who are paid quite a lot of money precisely to make decisions like these.

It's kind of like when a cop allows his gun to be stolen. Yes, the criminal is the guilty one, but the cop was also the one person supposed to guard against it.

I mean, the AI companies have £200 a month plans for a reason. And if you look at Blitzy for example, their plans sit at the £1000 a month mark.
> If you speak with AI researchers, they all seem reasonable in their expectations.

An extraordinary claim for which I would like to see the extraordinary evidence. Every single interview still available on YT from 3 years ago had these researchers putting AGI 3 to 5 years out... a complete fairy tale, as the track to AGI is not even in sight.

If you want to colonize the Solar System, the track is clear. If you want fusion, the track is clear. The AGI track?

Fair point, and I should be clearer. The AI researchers I speak with don't expect AGI and are more reasonable, trying to build good tech rather than promising the world. My point was that these AI researchers aren't the ones inflating the bubble.
I'm not sure either - for a second I thought perhaps llm agents are prowling around to ensure the right messages are floating up, but who knows...