If that were evidence that current AI doesn't reason, then the Thatcher effect would be evidence that humans don't: https://en.wikipedia.org/wiki/Thatcher_effect

LLMs may or may not "reason", for certain definitions of the word (there are many), but this specific thing doesn't differentiate them from us.


Being tricked by optical illusions is more about the sensory apparatus and image-processing faculties than about reasoning, but detecting optical illusions is definitely a reasoning task. I doubt it's an important enough task to train into general models, though.
