What more are LLMs than statistical inference machines? I don't know that I'd assert that's all they are with confidence, but every configuration option I can play with during generation (Top K, Top P, Temperature, etc.) is a way to _not_ select the most likely next token, which leads me to believe that they are, in fact, just statistical inference machines.
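To make that concrete, here's a rough sketch of what those knobs actually do to the output distribution at each step (pure NumPy, with a made-up logit vector standing in for a real model's output):

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=None, top_p=None, rng=None):
    """One sampling step: turn raw logits into a token id.

    temperature flattens (>1) or sharpens (<1) the distribution; top_k keeps only
    the k most likely tokens; top_p keeps the smallest set of tokens whose
    cumulative probability reaches p. All three are ways to *not* just take the argmax.
    """
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    if top_k is not None:
        cutoff = np.sort(probs)[-top_k]              # k-th largest probability
        probs = np.where(probs >= cutoff, probs, 0.0)
    if top_p is not None:
        order = np.argsort(probs)[::-1]              # most likely first
        cumulative = np.cumsum(probs[order])
        keep = order[cumulative - probs[order] < top_p]
        mask = np.zeros_like(probs)
        mask[keep] = 1.0
        probs = probs * mask

    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Made-up logits for a 5-token vocabulary; greedy decoding would always return index 0.
print(sample_next_token([3.0, 2.5, 1.0, 0.5, -1.0], temperature=0.8, top_k=3))
```

Set temperature near zero with no top-k or top-p and you're back to (near-)greedy decoding; crank them up and you're deliberately sampling away from the most likely token.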
It's not an argument - it's a dismissal. It's a boneheaded refusal to think on the matter in any depth, or to consider any of the implications.
The main reason to say "LLMs are just next token predictions" is to stop thinking about all the inconvenient things. Things like "how the fuck does training on piles of text make machines that can write new short stories" or "why is a big fat pile of matrix multiplications better at solving unseen math problems than I am".
I'm an SWE working in AI-related development so I have a probably higher baseline of understanding than most, but even I end up awed sometimes. For example, I was playing a video game the other night that had an annoying box-sliding puzzle in it (you know, where you've got to move a piece to a specific area but it's blocked by other pieces that you need to move in some order first). I struggled with it for way too long (because I missed a crucial detail), so for shits and giggles I decided to let ChatGPT have a go at it.
I took a photo of the initial game board on my TV and fed it into the high-thinking version with a bit of text describing the desired outcome. ChatGPT was able to process the image and my text, and after a few turns it generated Python code to solve it. It didn't come up with the solution, but that's because of the detail I missed that fundamentally changed the rules.
Anyway, I've been in the tech industry long enough that I have a pretty good idea of what should and shouldn't be possible with programs. It's absolutely wild to me that I was able to use a photo of a game board and like three sentences of text and end up with an accurate conclusion (that it was unsolvable based on the provided rules). There's so much more potential with these things than many people realize.
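For the curious: that's not ChatGPT's code (I'm not reproducing it here), just an illustrative sketch of the brute-force approach such generated solvers typically take, a breadth-first search over board states that either finds a move sequence or, by exhausting every reachable state, shows the puzzle is unsolvable under the rules as given:

```python
from collections import deque

def solve_sliding_puzzle(width, height, pieces, target_piece, target_cell):
    """Breadth-first search over board states for a box-sliding puzzle.

    pieces maps a piece name to the set of (row, col) cells it occupies.
    Returns a list of (piece, direction) moves, or None if no sequence of
    single-cell slides ever gets target_piece onto target_cell, i.e. the
    puzzle is unsolvable under the rules as encoded.
    """
    directions = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

    def key(state):
        return tuple(sorted((name, cells) for name, cells in state.items()))

    start = {name: frozenset(cells) for name, cells in pieces.items()}
    queue = deque([(start, [])])
    seen = {key(start)}

    while queue:
        state, path = queue.popleft()
        if target_cell in state[target_piece]:
            return path
        occupied = set().union(*state.values())
        for name, cells in state.items():
            for dname, (dr, dc) in directions.items():
                moved = frozenset((r + dr, c + dc) for r, c in cells)
                on_board = all(0 <= r < height and 0 <= c < width for r, c in moved)
                # slide one cell: stay on the board, don't overlap any *other* piece
                if on_board and not (moved & (occupied - cells)):
                    nxt = dict(state)
                    nxt[name] = moved
                    k = key(nxt)
                    if k not in seen:
                        seen.add(k)
                        queue.append((nxt, path + [(name, dname)]))
    return None  # exhausted every reachable state

# Toy usage: on a 3x2 board, slide piece "A" from the top-left to the bottom-right.
print(solve_sliding_puzzle(3, 2, {"A": {(0, 0)}, "B": {(1, 2)}}, "A", (1, 2)))
```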
They can process 2 megabytes of C sources, but not 2 sentences of natural language instructions. They find it easy to multiply 10-digit numbers but not to tell a picture of a dog from one of a cat. Computers are inhuman, in a very fundamental way. No natural language understanding, no pattern recognition, no common sense.
Machine learning had been chipping away at that old assumption for a long time. But LLMs took a sledgehammer to it. Their capabilities are genuinely closer to "what humans can usually do" than to "what computers can usually do", despite them running on computers. It's a breakthrough.
Calculation isn't what makes us special; that's down to things like consciousness, self-awareness and volition.
> The main reason to say "LLMs are just next token predictions" is to stop thinking about all the inconvenient things. Things like...
They do it by iteratively predicting the next token.
Suppose the calculations to do a more detailed analysis were tractable. Why should we expect the result to be any more insightful? It would not make the computer conscious, self-aware or motivated, for the same reason that conventional programs are not.
You don't know that. It's how the LLM presents, not how it does things. That's what I mean by it being the interface.
Only one word ever comes out of your mouth at a time, but we don't conclude that humans think only one word at a time. Who's to say the machine doesn't plan out the full sentence and output just the next token?
I don't know either, fwiw, and that's my main point. There's a lot to criticize about LLMs and, believe it or not, I am a huge detractor of their use in most contexts. But this is a bad criticism of them. And it bugs me a lot because the really important problems with them are broadly ignored by this low-effort, ill-thought-out offhand dismissal.
Yes. We know that LLMs can be trained by predicting the next token. This is a fact. You can look up the research papers, and open source training code.
I can't work it out: are you advocating a conspiracy theory that these models are trained with some elusive secret and that the researchers are lying to you?
Being trained by predicting one token at a time is also not a criticism??! It is just a factually correct description...
Very much so. Decades.
> Being trained by predicting one token at a time is also not a criticism??! It is just a factually correct description...
Of course that's the case. The objection I've had from the very first post in this thread is that using this trivially obvious fact as evidence that LLMs are boring/uninteresting/not AI/whatever is missing the forest for the trees.
"We understand [the I/Os and components of] LLMs, and what they are is nothing special" is the topic at hand. This is reductionist naivete. There is a gulf of complexity, in the formal mathematical sense and reductionism's arch-enemy, that is being handwaved away.
People responding to that with "but they ARE predicting one token at a time" are either falling into the very mistake I'm talking about, or are talking about something else entirely.
Because if not, it's worthless philosophical drivel. If it can't be defined, let alone measured, then it might as well not exist.
What is measurable and does exist: performance on specific tasks.
And the pool of tasks where humans confidently outperform LLMs is both finite and ever diminishing. That doesn't bode well for human intelligence being unique or exceptional in any way.
The feeling is mutual:
> ... that doesn't bode well for human intelligence being unique or exceptional in any way.
My guess was that you argued that we "don't understand" these systems, or that our incomplete analysis matters, specifically to justify the possibility that they are in whatever sense "intelligent". And now you are making that explicit.
If you think that intelligence is well-defined enough, and the definition agreed-upon enough, to argue along these lines, the sophistry is yours.
> If it can't be defined, let alone measured
In fact, we can measure things (like "intelligence") without being able to define them. We can generally agree that a person of higher IQ has been measured to be more intelligent than a person of lower IQ, even without agreeing on what was actually measured. Measurement can be indirect; we only need to accept that performance on the tasks in an IQ test correlates with intelligence, not necessarily that the tasks demonstrate or represent intelligence.
And similarly, based on our individual understanding of the concept of "intelligence", we may conclude that IQ test results may not be probative in specific cases, or that administering such a test is inappropriate in specific cases.
Frontier models usually score somewhere between 90 and 125 on such tests, including on unseen ones. Massive error bars. The performance of frontier models keeps rising, in line with other benchmarks.
And, for all the obvious issues with the method? It's less of a worthless thing to do than claiming "LLMs don't have consciousness, self-awareness and volition, and no, not gonna give definitions, not gonna give tests, they just don't have that".
None of this is surprising? Like, I think you just lack a good statistical intuition. The amazing thing is that we have these extremely capable models, and methods to learn them. That process is an active area of research (as is much of statistics), but it is just all statistics...
At the core, they are just statistical modelling. The fact that statistical modelling can produce coherent thoughts is impressive (and basically vindicates materialism) but that doesn't change the fact it is all based on statistical modelling. ...? What is your view?
ANNs are arbitrary function approximators. The training process uses statistical methods to identify a set of parameters that approximate the function as best as possible. That doesn't necessarily mean that the end result is equivalent to a very fancy multi-stage linear regression. It's a possible outcome of the process, but it's not the only possible outcome.
Looking at an LLM's I/O structure and training process is not enough to conclude much of anything. And that's the misconception.
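As a toy illustration of why "fitted with statistical methods" doesn't collapse into "fancy linear regression": XOR can't be represented by any linear model, yet a hand-rolled two-layer network trained by plain gradient descent fits it. A NumPy sketch, with arbitrarily chosen hyperparameters:

```python
import numpy as np

# XOR: the classic function that no linear model, however fancy, can represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # small hidden layer, size chosen arbitrarily
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # forward pass: a tiny nonlinear function approximator
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass: gradient descent on cross-entropy (maximum likelihood)
    grad_z2 = (p - y) / len(X)
    grad_W2, grad_b2 = h.T @ grad_z2, grad_z2.sum(0)
    grad_z1 = (grad_z2 @ W2.T) * (1 - h ** 2)
    grad_W1, grad_b1 = X.T @ grad_z1, grad_z1.sum(0)
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2

h = np.tanh(X @ W1 + b1)
print(np.round(sigmoid(h @ W2 + b2), 2).ravel())  # heads toward [0, 1, 1, 0]
```

The point being: the training procedure is statistical, but the fitted function can be qualitatively richer than the regression-shaped mental model suggests.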
I'm not sure I follow. LLMs are probabilistic next-token prediction based on current context; that is a factual, foundational statement about the technology that runs all LLMs today.
We can ascribe other things to that, such as reasoning or knowledge or agency, but that doesn't change how they work. Their fundamental architecture is well understood, even if we allow for the idea that maybe there are some emergent behaviors that we haven't described completely.
> It's a possible outcome of the process, but it's not the only possible outcome.
Again, you can ascribe these other things to it, but to say that these external descriptions of outputs call into question the architecture that runs these LLMs is a strange thing to say.
> Looking at an LLM's I/O structure and training process is not enough to conclude much of anything. And that's the misconception.
I don't see how that's a misconception. We evaluate pretty much everything by inputs and outputs. And we use those to infer internal state, because that's all we're capable of in the real world.
I think the reason people don't say that is because they want to say "I already understand what they are, and I'm not impressed and it's nothing new". But what the comment you are replying to is saying is that the inner workings are the important innovative stuff.
LLMs are probabilistic or non-deterministic computer programs, plenty of people say this. That is not much different than saying "LLMs are probabilistic next-token prediction based on current context".
> I think the reason people don't say that is because they want to say "I already understand what they are, and I'm not impressed and it's nothing new". But what the comment you are replying to is saying is that the inner workings are the important innovative stuff.
But we already know the inner workings. It's transformers, embeddings, and math at a scale that we couldn't do before 2015. We already had multi-layer perceptrons with backpropagation, recurrent neural networks, and Markov chains before this, but the hardware to do this kind of contextual next-token prediction simply didn't exist at those times.
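The next-token-prediction part in particular is genuinely old hat; a bigram Markov chain does it in a dozen lines (the corpus here is a made-up stand-in):

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()  # toy stand-in text

# Count word -> next-word frequencies: an estimate of P(next token | current token).
counts = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def generate(start, length=8):
    """Autoregressive next-token prediction, 1990s style: the 'context' is a single word."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        words, freqs = zip(*followers.items())
        out.append(random.choices(words, weights=freqs, k=1)[0])
    return " ".join(out)

print(generate("the"))
```

The difference with an LLM isn't the sampling loop; it's that the conditional distribution is computed by a transformer over the whole context instead of looked up in a one-word table.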
I understand that it feels like there's a lot going on with these chatbots, but half of the illusion isn't even the LLM; it's the context management, which is exceptionally mundane compared to the LLM itself. These things are combined with a carefully crafted UX to deliberately convey the impression that you're talking to a human. But in the end, it is just a program, and it's just doing context management and token prediction that happens to align (most of the time) with human expectations, because it was designed to do so.
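Stripped of the UX polish, the context-management half is roughly this kind of loop (`generate_reply` is a placeholder parameter for whatever model call sits underneath, not a real API):

```python
MAX_CONTEXT_TOKENS = 4096  # assumed limit; real models vary

def count_tokens(message):
    # crude stand-in for a real tokenizer: whitespace word count
    return len(message["content"].split())

def trim_to_window(history, budget=MAX_CONTEXT_TOKENS):
    """Keep the system prompt plus as many recent messages as fit the budget."""
    system, rest = history[:1], history[1:]
    kept, used = [], count_tokens(system[0])
    for msg in reversed(rest):                 # newest messages first
        if used + count_tokens(msg) > budget:
            break
        kept.append(msg)
        used += count_tokens(msg)
    return system + list(reversed(kept))

def chat(generate_reply):
    """Minimal chat loop: all the 'memory' is just this growing list of messages."""
    history = [{"role": "system", "content": "You are a helpful assistant."}]
    while True:
        user_text = input("> ")
        history.append({"role": "user", "content": user_text})
        prompt = trim_to_window(history)
        reply = generate_reply(prompt)         # hypothetical model call
        history.append({"role": "assistant", "content": reply})
        print(reply)
```

Everything that feels like "memory" or "personality" in the chat UI lives in that history list and the system prompt.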
The two of you seem to be implying there's something spooky or mysterious happening with LLMs that goes beyond our comprehension of them, but I'm not seeing the components of your argument for this.
I am very confused by your stance.
The aim of the function approximation is to maximize the likelihood of the observed data (this is standard statistical modelling); using machine learning (e.g., stochastic gradient descent) on a class of universal function approximators is a standard approach to fitting such a model.
What do you think statistical modelling involves?
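To spell out what I mean: "maximize the likelihood of the observed data" is the same thing as "minimize the average negative log-probability the model assigns to each observed token", which is the cross-entropy objective LLM training uses. A toy version, fitting a single categorical distribution to made-up token ids by gradient descent:

```python
import numpy as np

# Toy "observed next tokens": ids drawn from a 4-symbol vocabulary (made-up data).
data = np.array([0, 0, 1, 2, 0, 1, 0, 3, 1, 0])
freq = np.bincount(data, minlength=4) / len(data)   # empirical frequencies

# The simplest possible "language model": one categorical distribution over the vocab,
# parameterised by logits and fitted by maximum likelihood.
logits = np.zeros(4)

def neg_log_likelihood(logits, data):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return -np.mean(np.log(probs[data])), probs

for _ in range(500):
    nll, probs = neg_log_likelihood(logits, data)
    grad = probs - freq            # exact gradient of the mean NLL w.r.t. the logits
    logits -= 0.5 * grad           # plain gradient descent

nll, probs = neg_log_likelihood(logits, data)
print(round(float(nll), 3), np.round(probs, 2))   # fitted probs approach freq = [0.5, 0.3, 0.1, 0.1]
```

Swap the single categorical distribution for a transformer conditioned on the whole preceding context and you have the actual training setup; the statistics doesn't change.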
This is pretty ironic, considering the subject matter of that blog post. It's a super-common misconception that's gained very wide popularity due to reactionary (and, imo, rather poor) popular science reporting.
The author parroting that with confidence in a post about Dunning-Krugering gives me a bit of a chuckle.