The paid models are already smarter than the vast majority of people.

The majority of people think they are better-than-average drivers.

Surely those models are not smarter than _you_, right?

Half of all drivers are better than the average driver. Half of all people have an IQ lower than 100.
If this were true, almost all jobs in the world would already have been replaced by AI. Sure, the model might be better than most people at lots of things, but there are still tasks that human children find easy and AI struggles with. I wouldn't call being better at some things than humans being "smarter" than them; if you do, calculators must be smarter than humans, since they've been better than us at adding numbers for a long time.
For text and image-based tasks they are infinitely better than a human.

What they lack are arms to interact with the physical world, but once that's solved it will be a giant leap forward (for example: they will obviously be able to run experiments to discover new molecules by translating their step-by-step reasoning into physical actions, to build more optimized cars, etc.).

For now humans are smarter in some real-world or edge cases (e.g. a super-specialist in a specific science), but for any scientific task an average human is very, very weak compared to the LLMs.

There are forms of science that don't involve "arms". Why don't we see a single research paper involving research entirely undertaken by AI? AI development and research itself doesn't need "arms". Why don't we just put AI in a box and let it infinitely improve itself? Why doesn't every company that employs someone who just uses a computer replace them with AI? Why are there no businesses entirely run by AIs that just tell humans what to do? Why don't the AIs just use CAD and electronic simulation to design themselves some "arms"? Why can't AI even beat basic videogames that children can beat?

The gap is huge.

> For text and image-based tasks they are infinitely better than a human.

Sometimes. When the stars align and you roll the dice the right way. I'm currently using ChatGPT 5.1 to put together a list of meals for the upcoming week. It comes up with a list (a very good one!), then it asks if I want a list of ingredients. I say yes, and the ingredients are complete bollocks. It adds things which are not in any recipe. I ask about it, it says "sorry, my mistake, here's the list fixed now" and it just removed that thing but added something else. I ask why that's there, and I shit you not, it replied with "I added it out of habit". What habit? What an idiotic thing to say. It took me 3 more attempts to get a list that was actually somewhat correct, although it got the quantities wrong. "Infinitely better than a human at text-based tasks" my ass.

I would honestly trust a 12 year old child to do this over this thing I'm supposedly paying £18.99/month for. And the company is valued at half a trillion dollars. I honestly wonder if I'm the bigger clown or if they are.

Sorry about the frustration. I agree they're far from perfect (like us). They have habits, because they model us.
There is a lot of learning involved in getting to be able to run experiments in some areas.

What they also don't have is agency to just decide to quit, for example.

> super specialist in a specific science

I’m a super specialist in statistics and GPT5 and Gemini know much more than me about the topic.

> If this is true then almost all of the jobs in the world would already be replaced by AI.

We have to account for human inertia. With people, very little changes completely overnight.

At analyzing and reproducing language (words, code, etc.), sure, because at their core they are still statistical models of language. But there seems to be a growing consensus that intelligence requires modeling more than words.
When models become sufficiently sophisticated, they practically become the phenomenon they're modeling (in the limit).
But they don't have agency, and who would trust them unattended anyway (at their current capabilities)?
They’re urgently being given agency by companies and people.
I could say the same for many people that I know.
Not sure why you are being downvoted. It seems like a lot of people with big egos are not ready to accept the truth that a human has far less knowledge than a world encyclopedia with infinite and practically perfect memory.
Knowledge != Intelligence

Otherwise researching intelligence in animals would be a completely futile pursuit since they have no way of "knowing" facts communicated in human language.

Yeah, I think a lot of people are very insecure. I’m genuinely sorry for them. I think the best thing to do is to derive utility from AI (to mitigate the costs).
> Seems like a lot of people with big egos are not ready to accept the truth that a human has far less knowledge than a world encyclopedia.

Well, thank you for editing your own comment and adding that last bit, because it really is the crux of the issue and the reason OP is being downvoted.

Having all of the world's knowledge is not the same as being smart.

Smart is being able to produce knowledge quickly. I'm not sure how it could be denied that AIs are capable of producing knowledge quickly (extremely quickly, in fact).
You really don't think being smart has anything to do with applying that knowledge?

The smartest kid in class was not the one who memorized the most facts.
