This quote is so telling. I’m going to be straight with you, and this is my opinion, so you’re free to disagree.
From my POV you are out of touch with the ground-truth reality of AI, and that’s OK, because it has all changed so fast. Everything in the universe is math-based, and in theory even your brain can be fully modelled by mathematics… it’s a pointless quote.
The ground-truth reality is that nobody, and I mean nobody, understands how LLMs work. This isn’t me making shit up: if you know transformers, if you know the industry, and if you even listen to the people behind the technology who build these things… they all say we don’t know how AI works.
But we do know some things. We know it’s not a stochastic parrot, because in addition to the failures we’ve seen plenty of successes on extremely complicated problems, problems too nontrivial for anything other than an actual intelligence to solve.
In the coming years reality will change so much that your opinion will flip. You might be so stubborn as to keep calling it a stochastic parrot, but by then it will just be lip service. Your current reaction is normal, given that the paradigm shift happened so fast and so recently.
This is a really insane and untrue quote. I would, ironically, ask an LLM to explain how LLMs work. It's really not as complicated as it seems.
You can boil LLMs down to "next token predictor". But that's like boiling down the human brain to "synapses firing".
The point that OP is making, I think, is that we don't understand how "next token prediction" leads to more emergent complexity.
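To make the "next token predictor" framing concrete, here's a minimal sketch of that outer loop (a toy bigram model stands in for a real transformer; the vocabulary, counts, and function names are all invented for illustration):

```python
# Minimal sketch of the "next token predictor" loop, with a toy bigram
# model standing in for a real transformer. The vocabulary, counts, and
# function names are invented for illustration.
import math
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "."]
BIGRAM_COUNTS = {  # hypothetical "training data" statistics
    "the": {"cat": 5, "mat": 3},
    "cat": {"sat": 8},
    "sat": {"on": 8},
    "on": {"the": 8},
    "mat": {".": 8},
}

def next_token_logits(context: list[str]) -> dict[str, float]:
    """Score every vocab token given the context (here: just the last token).
    In a real LLM, this one function is billions of learned weights."""
    counts = BIGRAM_COUNTS.get(context[-1], {})
    return {tok: math.log(counts.get(tok, 0) + 0.01) for tok in VOCAB}

def sample(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Softmax over the logits, then draw one token -- same as LLM sampling."""
    weights = [math.exp(logits[t] / temperature) for t in VOCAB]
    return random.choices(VOCAB, weights=weights, k=1)[0]

tokens = ["the"]
while tokens[-1] != "." and len(tokens) < 10:
    tokens.append(sample(next_token_logits(tokens)))
print(" ".join(tokens))  # e.g. "the cat sat on the mat ."
```

The disagreement in this thread isn't about that loop, which really is this simple. It's about what the scoring function inside a frontier model has learned, and whether anyone can explain it.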
It seems clear you don't want to have a good faith discussion.
It's you claiming that we understand how LLMs work, while the researchers who built them say that we ultimately don't.
There’s tons more where that came from. Like I said, lots of people are out of touch because the landscape is changing so fast.
What is baffling to me is that not only are you unaware of what I’m saying, but you also think what I’m saying is batshit insane, despite the fact that the people at the center of it all, who are creating these things, SAY the same thing. Maybe it’s just terminology… understanding how to build an LLM is not the same as understanding why it works or how it works.
Either way, I can provide tons and tons more evidence if you’re still not getting it: we do not understand how LLMs work.
Also, you can prompt an LLM about whether or not we understand LLMs; it should tell you the same thing I’m saying, along with explaining transformers to you.
Just because the restaurant says "World's Best Burgers" on its logo doesn't make it true.
Here’s another: https://youtube.com/shorts/zKM-msksXq0?si=bVethH1vAneCq28v
Geoffrey Hinton, the “godfather of AI,” who quit his job at Google to warn people about AI. What’s his motivation? Altruism.
Man, it’s not even about people saying things. If you knew how transformers and LLMs work, you would know that even for the most basic model, we do not understand why it works.
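For what it’s worth, here is roughly the core computation of “the most basic model”: a single attention head, sketched in numpy with random weights standing in for trained ones (Wq/Wk/Wv are the conventional names, not from any particular codebase). Every line is plain, fully specified linear algebra; the open question is why the *trained* values of those matrices produce the behaviors they do, not this arithmetic:

```python
# A minimal sketch of single-head scaled dot-product attention, the core
# block of a transformer. Weights are random stand-ins for trained ones.
import numpy as np

rng = np.random.default_rng(0)
d_model, seq_len = 16, 4

# In a real LLM these matrices are learned from data; that's where the
# mystery lives. Here they are random placeholders.
Wq = rng.normal(size=(d_model, d_model))
Wk = rng.normal(size=(d_model, d_model))
Wv = rng.normal(size=(d_model, d_model))

def attention(x: np.ndarray) -> np.ndarray:
    """x: (seq_len, d_model) token embeddings -> attended representations."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = (q @ k.T) / np.sqrt(d_model)  # pairwise token-to-token affinities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ v  # each token becomes a weighted mix of all tokens

x = rng.normal(size=(seq_len, d_model))
print(attention(x).shape)  # (4, 16): the forward pass is exact math
```

That gap, between a fully specified forward pass and an explanation of what the learned weights are actually doing, is exactly what interpretability research is trying to close.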
It will just spew overconfident, sycophantic vomit. There is no attempt to reason. It’s all worthless nonsense.
It’s a fancy regurgitation machine that will go completely off the rails when it steps outside of its training distribution. That’s it.
I’ve also seen it fuck up in the same way you describe. So do I weigh and balance these two pieces of contrasting evidence to form a logical conclusion? Or do I pick the piece of evidence that is convenient to my worldview? What should I do? Actually, why don’t you tell me what you ended up doing?
Imagine the Empire State Building was just completed, and you had a man yelling at the construction workers: "PFFT, that's just a bunch of steel and bricks."
The money is never wrong! That's why the $100 billion invested in blockchain companies from 2020 to 2023 worked out so well. Or why Mark Zuckerberg's $50 billion investment in the Metaverse resulted in a world-changing paradigm shift.
Those people who invested cash in blockchain believed that they could develop something worthwhile on the blockchain.
Zuckerberg believed the Metaverse could change things. It's why he hired all of those people to work on it.
However, what you have here are people claiming LLMs are going to be writing 90% of code in the next 18 months, then turning around and hiring a bunch of people to write code.
There's another article posted here, "Believe the Checkbook" or something like that. And they point out that Anthropic had no reason to purchase Bun except to get the people working on it. And if you believe we're about to turn a corner on vibe coding, you don't do that.
Very few people say this. But it’s realistic to say that, at the very least, our jobs are going out the window within the next decade.
So yeah, he's just "one guy", but in terms of "one guys", he's a notable one.
So we could be right or we could be wrong. What we do know is that a lot of what people were saying or “believed” about LLMs two years ago is now categorically wrong.
And some of the beliefs they were wrong about concern when and how it would change things.
And my post is not about who is correct. It's about discerning what people truly believe despite what they might tell you up front.
People invested money into the internet. They hired people to develop it. That told you they believed it was useful to them.