birdfood
Perhaps related: after watching a talk by Gerald Sussman, I loaded an image of the Kanizsa triangle into Claude and asked it a pretty vague question to see if it could “see” the inferred triangle. It recognised the image and went straight into giving me a summary about it. So I rotated the image 90 degrees and tried again in a new conversation; this time it didn’t recognise the image and got the number of elements wrong:

This image shows a minimalist, abstract geometric composition with several elements:

- Four black shapes that appear to be partial circles or “Pac-Man” like forms, each with a wedge cut out, positioned in the four corners/quadrants of the image
- Two thin black triangular or arrow-like shapes - one pointing upward in the upper left area, and one pointing to the right in the center-right area
- All elements are arranged on a light gray or off-white background
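
(For anyone who wants to reproduce the rotation step, something like the following works; Pillow and the filename are assumptions, not necessarily what was actually used:)

    from PIL import Image

    # Rotate the test image 90 degrees before re-uploading it to the model.
    img = Image.open("kanizsa.png")                      # hypothetical filename
    img.rotate(90, expand=True).save("kanizsa_rot90.png")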


latentsea
I guess they will now just rotate all the images in the training data 90 degrees too to fill this kind of gap.
recursivecaveat
Everything old is new again: in the AlexNet paper that kicked off the deep learning wave in 2012, they describe horizontally flipping every image as a cheap form of data augmentation. Though now that we expect models to actually read text, that seems potentially counter-productive. Rotations are similar, in that you'd hope the model would learn heuristics such as "the sky is almost always at the top".
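
For reference, that kind of cheap augmentation looks roughly like this today (a minimal sketch assuming PyTorch/torchvision, not the original AlexNet pipeline):

    import torchvision.transforms as T

    # Cheap label-preserving augmentations: horizontal flips roughly double the
    # effective dataset; small rotations add some viewpoint invariance, at the
    # cost of breaking cues like readable text or "sky is at the top".
    augment = T.Compose([
        T.RandomHorizontalFlip(p=0.5),
        T.RandomRotation(degrees=15),
        T.RandomResizedCrop(224, scale=(0.8, 1.0)),
        T.ToTensor(),
    ])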
latency-guy2
At least from when I was still doing this kind of work (radar), look angle/platform angle relative to the scatterer signal mattered more than rotation, but rotation was a simple way to get quite a few more samples. It never stopped being relevant :)
bonoboTP
That's called data augmentation. It was common already before AlexNet, and it never stopped being common; it's still routinely done.
mirekrusin
That's how you train a neural network with synthetic data so that it extracts actual meaning.

That's also how humans learn, e.g. adding numbers: first there is naive memorization, followed by more examples until you get it.

LLM training seems to be falling into a memorization trap because models are extremely good at it, orders of magnitude better than humans.

IMHO what is missing in the training process is feedback explaining the wrong answer. What we're currently doing with training is leaving that understanding as an "exercise for the reader": we feed correct answers to specific, individual examples, which promotes memorization.

What we should be doing in post-training is ditch direct backpropagation on the next token. Instead, let the model finish its wrong answer, append an explanation of why it's wrong, and continue backpropagation on the final answer - now with the explanation in context to guide it to the right place in its understanding.
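
A rough sketch of what that loop might look like, assuming a Hugging Face-style causal LM; explain_error() is a hypothetical helper that produces the corrective explanation, not an existing API:

    import torch

    def corrective_step(model, tokenizer, prompt, gold_answer, optimizer):
        # 1. Let the model finish its (possibly wrong) answer, without gradients.
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            draft_ids = model.generate(**inputs, max_new_tokens=256)
        draft = tokenizer.decode(draft_ids[0], skip_special_tokens=True)

        # 2. Append an explanation of why the draft is wrong (hypothetical helper),
        #    then the corrected final answer.
        explanation = explain_error(draft, gold_answer)  # assumed to exist
        full = tokenizer(draft + "\n" + explanation + "\n" + gold_answer,
                         return_tensors="pt")

        # 3. Backpropagate only on the final answer, with the explanation in context.
        #    (The prefix length is approximated; a real implementation would track
        #    token offsets exactly.)
        answer_len = tokenizer(gold_answer, return_tensors="pt")["input_ids"].shape[1]
        labels = full["input_ids"].clone()
        labels[:, :-answer_len] = -100   # ignore draft + explanation in the loss
        loss = model(**full, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        return loss.item()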

What all of this means is that current models are largely underutilized and unnecessarily bloated; they contain way too much memorized information. Making a model larger is easy, a quick illusion of improvement. Models need to be squeezed more, and more focus needs to go towards the training flow itself.

atwrk
> That's also how humans learn, e.g. adding numbers: first there is naive memorization, followed by more examples until you get it.

Just nitpicking here, but this isn't how humans learn numbers. They start at birth with competency up to about 3 or 5 and expand from there. So they can already work with quantities of varying size (i.e. they know which is more, the 4 apples on the left or the 5 on the right, and they also know what happens if I take one apple from the left and put it with the others on the right), and then they learn the numbers. So yes, they learn the numbers through memorization, but only the signs/symbols, not the numeric competency itself.

mirekrusin
Turtles all the way down: things like the meaning of "more" are also memorized, initially as "I want more food" etc., then refined with time. E.g. a kid saying "he's more than me" is corrected by explaining that there needs to be some qualifier for a measurable quantity, e.g. "he's more tall (taller) than me" or "he is more fast (faster) than me".

Using different modalities (like images, videos, or voice/sound instead of pure text) is interesting as well, as it helps complete the meaning, adds a sense of time, etc.

I don't think we're born with any concepts at all; it's all quite chaotic initially, with consistent sensory inputs that we use to train/stabilise our neural network. Newborns, for example, don't even have a concept of separation between "me and the environment around me"; it's learned.

atwrk
> I don't think we're born with any concepts at all; it's all quite chaotic initially, with consistent sensory inputs that we use to train/stabilise our neural network.

That is exactly the thing that doesn't seem to be true, or at least it is considered outdated in neuroscience. We very much have some concepts that are innate, and all other concepts are learned in relation to the things that are already there in our brains - at birth, mostly sensorimotor stuff. We decidedly don't learn new concepts from scratch, only in relation to already acquired concepts.

So our brains work quite a bit differently than LLMs, despite the neuron metaphor used there.

And regarding your food example, the difference I was trying to point out: for LLMs, the word and the concept are the same thing. For humans they are different things that are also learned differently. The memorization part (mostly) only affects the word, not the concept behind it. What you described was only the learning of the word "tall" - the child in your example already knew that the other person was taller; it just didn't know how to talk about that.

mirekrusin
The name "LLM" became a misnomer once we started directly adding different modalities. In that sense the word and the concept are not the same thing, because a multimodal LLM can express a concept in, e.g., an image and a sentence.
littlestymaar
And it will work.

I just wish the people who believe LLMs can actually reason and generalize would see that they don't.

ben_w
If that were evidence that current AI doesn't reason, then the Thatcher effect would be evidence that humans don't: https://en.wikipedia.org/wiki/Thatcher_effect

LLMs may or may not "reason", for certain definitions of the word (there are many), but this specific thing doesn't differentiate them from us.

Being tricked by optical illusions is more about the sensory apparatus and image processing faculties than reasoning, but detecting optical illusions is definitely a reasoning task. I doubt it's an important enough task to train into general models though.
latentsea
At this point I think all reasoning really means is having seen enough of the right training data to make the correct inferences, and they're just missing some training data.
JohnKemeny
We really don't know how to compute.

Oct 2011, 30 comments.

https://www.hackerneue.com/item?id=3163473

Strange Loop video:

July 2011, 36 comments.

https://www.hackerneue.com/item?id=2820118

Workaccount2
Show any LLM a picture of a dog with 5 legs and watch it be totally unable to count.
pfdietz
Or watch them channel Abraham Lincoln.
iknownothow
As far as I can tell, the paper covers text documents only. Therefore your example doesn't quite apply.

It is well known that LLMs have a ways to go when it comes to processing images like they process text or audio.

I don't think there's any well-performing multimodal model that accepts image pixels directly. Most vision capabilities are hacks or engineered in: an image undergoes several processing steps, and each processor's outputs are fed to the transformer as tokens. This may happen in one network, but there are non-transformer networks involved. Examples of preprocessing (a rough sketch follows the list):

* OCR
* CNNs (2D pattern recognizers) with different zooms, angles, slices, etc.
* Others maybe too?
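
As a generic illustration of the "image becomes tokens" step (a ViT-style patch embedding sketch, not how any particular production model is wired up):

    import torch
    import torch.nn as nn

    class PatchEmbed(nn.Module):
        """Cut an image into patches and project each patch to a token vector."""
        def __init__(self, patch_size=16, in_ch=3, dim=768):
            super().__init__()
            # A strided conv is equivalent to slicing non-overlapping patches
            # and applying the same linear projection to each one.
            self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch_size, stride=patch_size)

        def forward(self, x):                    # x: (B, 3, 224, 224)
            x = self.proj(x)                     # (B, dim, 14, 14)
            return x.flatten(2).transpose(1, 2)  # (B, 196, dim) -> image "tokens"

    tokens = PatchEmbed()(torch.randn(1, 3, 224, 224))
    print(tokens.shape)  # torch.Size([1, 196, 768])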

akomtu
To generalise this idea: if we look at a thousand points that more or less fill a triangle, we'll instantly recognize the shape. IMO, this simple example reveals what intelligence is really about. We spot the triangle because so much complexity - a thousand points - fits into a simple, low-entropy geometric shape. What we call IQ is the ceiling of complexity of patterns that we can notice. For example, the thousand dots may in fact represent corners of a 10-dimensional cube, rotated slightly - an easy pattern to see for a 10-d mind.
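
A quick way to generate that kind of input, if you want to try it (just uniform barycentric sampling of an arbitrary triangle):

    import random

    A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)   # arbitrary triangle vertices

    def sample_triangle(n=1000):
        """Uniformly sample n points inside triangle ABC via barycentric coords."""
        pts = []
        for _ in range(n):
            u, v = random.random(), random.random()
            if u + v > 1:                 # reflect back into the triangle
                u, v = 1 - u, 1 - v
            x = A[0] + u * (B[0] - A[0]) + v * (C[0] - A[0])
            y = A[1] + u * (B[1] - A[1]) + v * (C[1] - A[1])
            pts.append((round(x, 3), round(y, 3)))
        return pts

    print(sample_triangle()[:5])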
saithound
Cool. Since ChatGPT 4o is actually really good at this particular shape identification task, what, if anything, do you conclude about its intelligence?
akomtu
Recognizing triangles isn't that impressive. The real question is the ceiling of complexity of the patterns in data it can identify. Give it a list of randomly generated xyz coords that fall on a geometric shape, or a list of points that sample the trajectory of Earth around the Sun. Will it tell you that it's an ellipse? Will it derive Newton's second law? Will it notice the deviation from the ellipse and find the rule explaining it?
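
For instance, a test set along those lines could be generated like this (made-up parameters, roughly Earth-like eccentricity, with a little noise added):

    import math, random

    a, e = 1.0, 0.0167                 # semi-major axis (AU), eccentricity
    b = a * math.sqrt(1 - e ** 2)      # semi-minor axis

    points = []
    for _ in range(200):
        t = random.uniform(0, 2 * math.pi)
        x = a * math.cos(t) + random.gauss(0, 1e-4)   # small measurement noise
        y = b * math.sin(t) + random.gauss(0, 1e-4)
        points.append((round(x, 5), round(y, 5)))

    print(points[:3])   # feed the full list to the model and ask what curve it is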
JohnKemeny
The entire point here is that LLMs and image recognition software are not managing this task, so they are not really good at this particular shape identification task.
saithound
No, the post's article is not about the sort of shape identification task discussed by GP, or indeed any image recognition task: it's a paper about removed context in language.

Fwiw, I did test GP's task on ChatGPT 4o directly before writing my comment. It is as good at it as any human.
