
But AI is still a stochastic parrot with no actual intellectual capability... who actually believes otherwise? I figured most people had played with local models enough by now to understand that it's just math underneath. It's extremely useful, but laughably far from intelligence, as anyone who has attempted to use Claude et al for anything nontrivial knows.

“It’s just math underneath”

This quote is so telling. I’m going to be straight with you and this is my opinion so you’re free to disagree.

From my POV you are out of touch with the ground-truth reality of AI, and that's OK because it has all changed so fast. Everything in the universe is math-based, and in theory even your brain could be fully modelled by mathematics… so it's a pointless quote.

The ground truth reality is that nobody and I mean nobody understands how LLMs work. This isn't me making shit up: if you know transformers, if you know the industry, and if you listen to the people behind the technology who make these things… they all say we don't know how AI works.

But we do know some things. We know it's not a stochastic parrot because, in addition to the failures, we've seen plenty of successes on extremely complicated problems that are too nontrivial for anything other than an actual intelligence to solve.

In the coming years reality will change so much that your opinion will flip. You might be so stubborn as to continue calling it a stochastic parrot, but by then it will just be lip service. Your current reaction is normal given that the paradigm shift happened so fast and so recently.

> The ground truth reality is that nobody and I mean nobody understands how LLMs work.

This is a really insane and untrue quote. I would, ironically, ask an LLM to explain how LLMs work. It's really not as complicated as it seems.

It's not an insane thing to say.

You can boil LLMs down to "next token predictor". But that's like boiling down the human brain to "synapses firing".

The point that OP is making, I think, is that we don't understand how "next token prediction" leads to emergent complexity.
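
To make that concrete, here is roughly what "next token prediction" looks like in code: an autoregressive loop that repeatedly samples one token from the model's output distribution and feeds it back in. This is just an illustrative sketch; the Hugging Face transformers API and the small gpt2 checkpoint are assumptions on my part, not something anyone above is relying on.

    # Minimal sketch of autoregressive "next token prediction".
    # Assumes the Hugging Face transformers library and the (illustrative) gpt2 checkpoint.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    input_ids = tokenizer("The Empire State Building is", return_tensors="pt").input_ids

    for _ in range(20):                                      # generate 20 tokens, one at a time
        with torch.no_grad():
            logits = model(input_ids).logits                 # shape: (1, seq_len, vocab_size)
        probs = torch.softmax(logits[0, -1], dim=-1)         # distribution over the next token only
        next_id = torch.multinomial(probs, num_samples=1)    # sample one token id
        input_ids = torch.cat([input_ids, next_id.unsqueeze(0)], dim=1)

    print(tokenizer.decode(input_ids[0]))

The loop itself is trivial; the open question is why a model trained to do nothing but this ends up showing the capabilities (and failures) described elsewhere in this thread.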

The only thing we don't fully understand is how the ELIZA effect[0] has been known for 60 years, yet people keep falling for it.

[0] https://en.wikipedia.org/wiki/ELIZA_effect

> The only thing we don't fully understand is

It seems clear you don't want to have a good faith discussion.

It's you claiming that we understand how LLMs work, while the researchers who built them say that we ultimately don't.

https://futurism.com/anthropic-ceo-admits-ai-ignorance

There’s tons more where that came from. Like I said, lots of people are out of touch because the landscape is changing so fast.

What is baffling to me is that not only are you unaware of what I’m saying, but you also think what I’m saying is batshit insane, despite the fact that people in the center of it all who are creating these things SAY the same thing. Maybe it’s just terminology… understanding how to build an LLM is not the same as understanding why it works or how it works.

Either way, I can literally provide tons and tons more evidence if you’re still not getting it: we do not understand how LLMs work.

Also, you can prompt an LLM about whether or not we understand LLMs; it should tell you the same thing I’m saying, along with explaining transformers to you.

That's a CEO of an AI company saying his product is really superintelligent and dangerous, that nobody knows how it works, and that if you don't invest you're going to be left behind. That's a marketing piece, if you weren't aware.

Just because the restaurant says "World's Best Burgers" on its logo doesn't make it true.

Didn’t I say I have tons of evidence?

Here’s another: https://youtube.com/shorts/zKM-msksXq0?si=bVethH1vAneCq28v

Geoffrey Hinton, the father of AI, who quit his job at Google to warn people about AI. What’s his motivation? Altruism.

Man, it’s not even about people saying things. If you knew how transformers and LLMs work, you would know that even for the most basic model we do not understand how they work.

Try to use an LLM to solve a novel problem, or one within a domain that can’t easily be googled.

It will just spew over-confident sycophantic vomit. There is no attempt to reason. It’s all worthless nonsense.

It’s a fancy regurgitation machine that will go completely off the rails when it steps outside of its training area. That’s it.

I’ve seen it solve a complex domain-specific problem and build, in 10 minutes, a base of code that took a human a year to write. And it did it better.

I’ve also seen it fuck up in the same way you describe. So do I weigh and balance these two pieces of contrasting evidence to form a logical conclusion? Or do I pick and choose the one piece of evidence that is convenient to my worldview? What should I do? Actually, why don’t you tell me what you ended up doing?

Why does it even matter if it is a stochastic parrot? And who's to say that humans aren't also?

Imagine the Empire State Building had just been completed, and you had a man yelling at the construction workers: "PFFT, that's just a bunch of steel and bricks."

Are you serious? Sam Altman and a legion of Silicon Valley movers and shakers believe otherwise. How do you think they gather the billions to build those data centers? Are they right? Are you right? We don't really know, do we...

Sam Altman is the modern-day P.T. Barnum. He doesn't believe a damn thing except "make more money for Sam Altman", and he's real good at convincing people to go along with his schemes. His actions have zero evidential value for whether or not AI is intelligent, or even whether it's useful.

Maybe not, but I was answering "nobody believes", not whether AI is intelligent or not (which might just be semantics anyway). Plenty believe, especially the insiders working on the tech, who know it much better than we do. Take Ilya Sutskever, of "do you feel the AGI" fame. Labelling them all as cynical manipulators is delusional. Now, they might be delusional as well, at least to some degree - my bet is on the latter - but there are plenty of true believers out there and here on HN. I've debated them in the past. There are cogent arguments on either side.

"They convinced the investors so they must be right"

> Are you serious? Sam Altman and a legion of Silicon Valley movers and shakers believe otherwise. How do you think they gather the billions to build those data centers? Are they right? Are you right? We don't really know, do we...

The money is never wrong! That's why the $100 billion invested in blockchain companies from 2020 to 2023 worked out so well. Or why Mark Zuckerberg's $50 billion investment in the Metaverse resulted in a world-changing paradigm shift.

It's not that the money can predict what is correct; it's that it can tell us where people's values lie.

Those people who invested cash in blockchain believed that they could develop something worthwhile on the blockchain.

Zuckerberg believed the Metaverse could change things. It's why he hired all of those people to work on it.

However, what you have here are people claiming LLMs are going to be writing 90% of code in the next 18 months, then turning around and hiring a bunch of people to write code.

There's another article posted here, "Believe the Checkbook" or something like that. And they point out that Anthropic had no reason to purchase Bun except to get the people working on it. And if you believe we're about to turn a corner on vibe coding, you don't do that.

> However, what you have here are people claiming LLMs are going to be writing 90% of code in the next 18 months, then turning around and hiring a bunch of people to write code.

Very few people say this. But it’s realistic to say that, at the very least, within the next decade our jobs are going out the window.

The CEO of Nvidia is saying this.

So yeah, he's just "one guy", but in terms of "one guys", he's a notable one.

Someone also believed the internet would take over the world. They were right.

So we could be right or we could be wrong. What we do know is that a lot of what people were saying or “believed” about LLMs 2 years ago is now categorically wrong.

Someone also believed the moon was made of green cheese. They were wrong.

And some of the beliefs they were wrong about concern when and how it will change things.

And my post is not about who is correct. It's about discerning what people truly believe despite what they might tell you up front.

People invested money into the internet. They hired people to develop it. That told you they believed it was useful to them.
