People are unplugging their brains without even being aware that their questions cannot be answered by LLMs. I've witnessed this with smart, educated people; I can't imagine how bad it's going to be during formative years.
"I cannot imagine figuring out how to raise a newborn without ChatGPT. Clearly, people did it for a long time, no problem."
Basically, he didn’t know much about newborns and relied on ChatGPT for answers. It was a self-deprecating joke on a late-night show, like every other freaking guest would make, no matter how cliché. With a marketing slant, of course. He clearly said other people don’t need ChatGPT.
Given all of the replies in this thread, HN is apparently willing to stretch the truth if it puts Sam Altman in a negative light.
https://www.benzinga.com/markets/tech/25/12/49323477/openais...
At the same time, their interpretation doesn’t seem that far off. As per your comment, Sam said he “cannot imagine figuring out how” which is pretty close to admitting he’s clueless how anyone does it, which is what your parent comment said.
It’s the difference between “I don’t know how to paint” and “I cannot imagine figuring out how to paint”. Or “I don’t know how to plant a garden” and “I cannot imagine figuring out how to plant a garden”. Or “I don’t know how to program” and “I cannot imagine figuring out how to program”.
In the former cases, one may not know specifically how to do those things but can imagine figuring them out. They could read a book, try things out, ask someone who has achieved the results they seek… If you can imagine how other people might’ve done it, you can imagine figuring it out. In the latter cases, it means you have zero idea where to start; you can’t even imagine how other people do it, hence you don’t know how anyone does it.
The interpretation in your parent comment may be a bit loose (again, I disagree with the use of “literally”, though that’s a lost battle), but it is hardly unfair.
>Clearly, people did it for a long time, no problem.
That in fact means Altman thinks the exact opposite of "he didn't know how anyone could raise a baby without using a chatbot": while it's not imaginable to him, people make do anyway, so clearly it very much is possible to raise kids without ChatGPT.
What the GP did is the equivalent of taking someone who said "I don't believe this, but XYZ" and quoting them as simply saying they believe XYZ. People are eating it up, though, because it's a dig at someone they don't like.
Saying “no no, he didn’t mean everyone, he was only talking about himself” is not meaningfully better: he’s still encouraging everyone to do what he does and use ChatGPT to obsess over their newborn. It is enough of a demonstration of his own cluelessness (or greed, take your pick) to warrant criticism.
> The OpenAI CEO said he "got a great answer back" and was told that it was normal for his son not to be crawling yet.
To be fair, that is a relatable anxiety. But I can't imagine Altman having the same difficulties as normal parents. He can easily pay for round-the-clock childcare, including nights, weekends, mealtimes, and sick days. Not that he does, necessarily, but it's there when he needs it. He'll never know the crushing feeling of spending all day and all night soothing a coughing, congested one-year-old while feeling like absolute hell himself and having no other recourse.
The stakes are too high and the margin for error is so small. Having been through the infant wringer myself: yes, some people fret over things that aren’t that big a deal, but some things can literally be life or death. I can’t imagine trying to vet ChatGPT’s “advice” while delirious from lack of sleep and still in the trenches of learning to be a parent.
But of course he just had to get that great marketing sound bite didn’t he?
I cannot believe someone would wonder how people managed to decode "my baby dropped pizza and then giggled" before LLMs. If someone is honestly terrified about the answer to this life-or-death question and cannot figure out life without an LLM, they probably shouldn't be a parent.
Then again, Altman is faking it. Not sure if what he's faking is this affectation of being a clueless parent, or of being a human being.
They will ask “how much water should my newborn drink?” That’s a dangerous thing to get wrong (outside of certain circumstances, the answer is “none.” Milk/formula provides necessary hydration).
They will ask about healthy food alternatives — what if it tells them to feed their baby fresh honey in some homemade concoction (botulism risk)?
People googled this stuff before, but a basic search doesn’t argue back about how right it is or consistently feed you emotionally loaded bad info in the same fashion.
Raising a kid is really quite natural and instinctive: it mostly comes down to getting them to sleep, knowing what to feed them when, and washing them. I felt no terror myself; I just read my book or asked my parents when I had some stupid doubt.
They feel like slightly noisier cats, until they can talk. Then they become little devils you need to tame back to virtue.
He said they have no idea how to make money, that they’ll achieve AGI then ask it how to profit; he’s baffled that chatbots are making social media feel fake; the thing you mentioned with raising a child…
https://www.startupbell.net/post/sam-altman-told-investors-b...
https://techcrunch.com/2025/09/08/sam-altman-says-that-bots-...
https://futurism.com/artificial-intelligence/sam-altman-cari...
Seems reasonable to me. If it can't answer that, it doesn't work well enough.
But yeah, I can imagine a multi-modal model actually might have more information and common sense than a human in a (for them) novel situation.
If only to say "don't be an idiot" or "pick higher ground". Or even just as a rubber duck!
I understand there are things a typical LLM can do and things it cannot; I mostly tried this because I figured it couldn’t do it and wanted to see what would happen. But the average person is not really given much information about the constraints, and all of these companies are promising the moon with these tools.
Short version: it definitely did not have more common sense or information than a human, and it sure would have given this person a very confident answer about conditions in the area that was likely not correct. Definitely incorrect if it’s based off a photo.
In my experience, it’s particularly flaky when it has to crawl the internet. The other day I asked who won which awards at The Game Awards. Three different models got it wrong; all of them omitted at least two categories. Meanwhile, you could throw a rock at a search engine and hit 80 ready-made lists.
https://www.aiweirdness.com/dont-use-ai-detectors-for-anythi...
If you ask an AI to grade an essay, it will grade the essay highest that it wrote itself.
What I have seen is ChatGPT and Claude battling it out, always correcting and finding fault with each other's output (trying to solve the same problem). It's hilarious.
https://www.pangram.com/blog/pangram-predicts-21-of-iclr-rev...
English article:
https://www.heise.de/en/news/38C3-AI-tools-must-be-evaluated...
If you speak German, here is their talk from 38c3: https://media.ccc.de/v/38c3-chatbots-im-schulunterricht
I will never understand why some people apparently think asking a chat bot whether text was written by a chat bot is a reasonable approach to determining whether text was written by a chat bot.