The gap is huge.
Sometimes. When the stars align and you roll the dice the right way. I'm currently using ChatGPT 5.1 to put together a list of meals for the upcoming week. It comes up with a list (a very good one!), then asks if I want a list of ingredients. I say yes, and the ingredients are complete bollocks. It adds things that aren't in any recipe. I ask about it, it says "sorry, my mistake, here's the list fixed now", and it just removes that thing but adds something else. I ask why that was there, and I shit you not, it replied with "I added it out of habit". What habit? What an idiotic thing to say. It took me 3 more attempts to get a list that was actually somewhat correct, although it got the quantities wrong. "Infinitely better than a human at text-based tasks" my ass.
I would honestly trust a 12 year old child to do this over this thing I'm supposedly paying £18.99/month for. And the company is valued at half a trillion dollars. I honestly wonder if I'm the bigger clown or if they are.
What they also don't have is agency to just decide to quit, for example.
I’m a super specialist in statistics, and GPT5 and Gemini know much more than me about the topic.
What they lack are arms to interact with the physical world, but once that's solved it will be a giant leap forward (example: they will obviously be able to do experiments to discover new molecules by translating their step-by-step reasoning into physical actions, to build more optimized cars, etc.).
For now humans are smarter in some real-world or edge cases (e.g. a super specialist in a specific science), but for any scientific task the average human is very, very weak compared to the LLMs.