The precise mechanism LLMs use to arrive at their probability distributions is why they can pass most undergraduate-level exams, whereas the Markov chain projects I made 15-20 years ago could not.
Even as an intermediary, word2vec had to build a space in which the concept of "gender" exists such that "man" -> "woman" ~= "king" -> "queen".
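A minimal sketch of that analogy in code, assuming gensim and a small pretrained GloVe model (my choices for illustration, not anything word2vec-specific):

    # Vector arithmetic over pretrained word embeddings:
    # vector("king") - vector("man") + vector("woman") should land near "queen".
    import gensim.downloader as api

    vectors = api.load("glove-wiki-gigaword-50")  # small pretrained embedding set

    result = vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
    print(result)  # typically [('queen', 0.85...)]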
Maybe I'm asking for an explanation :)
Since you seem to understand the mechanism, can you do a 3-line summary please?
Make a bunch of neural nets to recognise every concept, the same way you would make them recognise numbers or letters in handwriting recognition. Glue them together with more neural nets. Put another on the end to turn concepts back into words.
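A toy numpy sketch of that shape, with every size and weight made up (nothing like a real LLM, just the "embed -> mix concepts -> turn back into words" pipeline):

    # token ids -> "concept" vectors -> a stack of layers mixing concepts ->
    # a final projection back to a probability over words.
    import numpy as np

    rng = np.random.default_rng(0)
    vocab_size, d = 50, 16                                # tiny made-up sizes

    embed = rng.normal(size=(vocab_size, d))              # words -> concept space
    layers = [rng.normal(size=(d, d)) for _ in range(3)]  # the "glue" networks
    unembed = rng.normal(size=(d, vocab_size))            # concepts -> words

    def next_word_probs(token_ids):
        x = embed[token_ids].mean(axis=0)                 # crude context vector
        for w in layers:
            x = np.tanh(x @ w)                            # nonlinear mixing of concepts
        logits = x @ unembed
        e = np.exp(logits - logits.max())
        return e / e.sum()                                # distribution over the vocab

    print(next_word_probs([3, 7, 12]).shape)              # (50,) - untrained, so nonsense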
For a less wrong, but still introductory, summary that glosses over stuff: about 1.5 hours of 3blue1brown videos, #4-#8 in this playlist: https://youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_...
... Oh interesting. And are those concepts hand-picked, or generated automatically somehow?
> For a less wrong, but still introductory, summary that glosses over stuff: about 1.5 hours of 3blue1brown videos
Sorry, my religion forbids me from watching talking heads. I'll have to live with your summary for now, until I run into someone who has condensed those 1.5 hours into text that takes at most 30 minutes to read...
"The user has requested 'remind me to pay my bills 8 PM tomorrow'. The current date is 2025-02-24. Your available commands are 'set_reminder' (time, description), 'set_alarm' (time), 'send_email' (to, subject, content). Respond with the command and its inputs."
And the most likely response will be what the user wanted.

A Markov chain (using only the probabilities of word order from sentences in its training set) could never output a command that wasn't stitched together from existing ones (i.e. it would always output a valid command name, but if no one had requested a reminder for a date in 2026 before it was trained, it would never output that year). No amount of documents saying "2026 is the year after 2025" would make a Markov chain understand that fact, but LLMs are able to "understand" it.
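A toy bigram Markov chain makes that limitation concrete (illustration only; real Markov-chain text generators are fancier, but the constraint is the same):

    # It can only ever emit a word it has literally seen following the
    # previous word in its training text, so it can only re-stitch
    # fragments of that text - it can't generalise to "2026" from
    # statements about what year follows 2025.
    import random
    from collections import defaultdict

    training_text = "remind me at 8 PM on 2025-02-25 to pay my bills".split()

    follows = defaultdict(list)
    for prev, nxt in zip(training_text, training_text[1:]):
        follows[prev].append(nxt)

    def generate(start, length=8):
        word, out = start, [start]
        for _ in range(length):
            options = follows.get(word)
            if not options:        # dead end: nothing ever followed this word
                break
            word = random.choice(options)
            out.append(word)
        return " ".join(out)

    print(generate("remind"))      # only ever fragments of the training text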