
It's not an insane thing to say.

You can boil LLMs down to "next token predictor". But that's like boiling the human brain down to "synapses firing".
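
For what it's worth, here's a toy sketch of what the "next token predictor" loop looks like. A hardcoded bigram table stands in for the model; every name here is illustrative, not any real API:

    import random

    # Toy stand-in for an LLM: a bigram lookup table. A real model
    # replaces predict_next with a neural network that outputs a
    # probability distribution over a large vocabulary, conditioned
    # on the entire context rather than just the last token.
    BIGRAMS = {
        "the": ["cat", "dog"],
        "cat": ["sat"],
        "dog": ["ran"],
        "sat": ["down"],
        "ran": ["away"],
    }

    def predict_next(context):
        return random.choice(BIGRAMS.get(context[-1], ["<end>"]))

    def generate(prompt, n_new):
        tokens = prompt.split()
        for _ in range(n_new):
            nxt = predict_next(tokens)
            if nxt == "<end>":
                break
            tokens.append(nxt)  # each prediction is fed back as input
        return " ".join(tokens)

    print(generate("the", 4))  # e.g. "the cat sat down"

The generation loop really is that simple; everything interesting happens inside predict_next.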

The point OP is making, I think, is that we don't understand how "next token prediction" gives rise to such emergent complexity.


The only thing we don't fully understand is how the ELIZA effect[0] has been known for 60 years, yet people keep falling for it.

[0] https://en.wikipedia.org/wiki/ELIZA_effect

> The only thing we don't fully understand is

It seems clear you don't want to have a good-faith discussion.

You're the one claiming that we understand how LLMs work, while the researchers who built them say that we ultimately don't.
