It sounds like you get that LLMs are just "next word" predictors. So the piece you may be missing is simply that, behind the scenes, your prompt gets "rephrased" in a way that makes generating the response a simple matter of predicting the next word repeatedly. So it's not necessary for the LLM to "understand" your prompt the way you're imagining; that's just an illusion created by extremely good next-word prediction.

In my simple mind, "Who is the queen of Spain?" becomes "The queen of Spain is ...".
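
For the curious, here's roughly what that loop looks like in code. This is a minimal sketch, assuming the Hugging Face transformers library and the small GPT-2 model (both just illustrative choices; real chat systems wrap your prompt in a template rather than literally rephrasing it, but the core predict-append loop is the same):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    # "Who is the queen of Spain?" reframed as a completion prefix:
    prompt = "The queen of Spain is"
    ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(10):                   # 10 extra tokens, arbitrary
            logits = model(ids).logits        # a score for every vocab token
            next_id = logits[0, -1].argmax()  # greedy: take the most likely
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))

GPT-2 will happily complete this with something (not necessarily the right queen); the point is only that every word of the answer comes out of the same predict-one-token-and-append loop, with no separate "understanding" step anywhere.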
