
> Yes, these responses are annoying, but what's your point?

The point I imagine is that there is no reasoning going on at all. Some humans sometimes struggle with some reasoning, of course. That is completely irrelevant to whether LLMs reason.

Picking word sequences that are most likely acceptable based on a static model formed months ago is not reasoning. No model is being constructed on the fly, no patterns recognised and extrapolated.

There are useful things possible, of course, but these models will never offer more than a nice user interface to a static model. They don't reason.


Why do you say that isn't reasoning, and what do you think human reasoning is?

I do think you have a point that the lack of a working memory is a severe constraint, but I also think you are wrong that these models will remain a user interface to a static model rather than being given the ability to add working memory, form long-term memories, and reason with that.

I also think it's an entirely open question whether they are reasoning under a reasonable definition, in part because we don't have one. Ironically, I think any claim that they don't reason comes from a lack of reasoning about the high degree of uncertainty and ambiguity we have about what reasoning means and how to measure it.

> Why do you say that isn't reasoning, and what do you think human reasoning is?

One worthwhile definition would be the ability to recognise patterns in knowledge and apply them to new contexts to generate new knowledge. There is none of this kind of processing happening, despite how believable some of the words sometimes are.

To me, under this definition, LLMs are then clearly and obviously reasoning, based on many conversations I've had.

E.g. the ability to solve a problem in code and then translate it to a new, made-up programming language described to it would easily qualify to me; a sketch of that kind of task follows below.

And this is a task a whole lot of humans would be unable to carry out.
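
A minimal sketch of the sort of task I have in mind; both the problem and the toy "Blub" language below are invented purely for illustration:

    # Step 1: solve a small problem in a real language (Python).
    def running_max(xs):
        """Return the running maximum of a list of numbers."""
        out, best = [], float("-inf")
        for x in xs:
            best = max(best, x)
            out.append(best)
        return out

    print(running_max([3, 1, 4, 1, 5]))  # -> [3, 3, 4, 4, 5]

    # Step 2: describe a made-up language to the model and ask it to
    # translate the solution. "Blub" is invented for this example:
    # ':=' assigns, 'fn ... end' delimits functions, '->' returns,
    # '@' marks loop variables, 'push' appends to a list.
    BLUB_VERSION = """
    fn running_max xs
      out := []    best := -inf
      for @x in xs
        best := max best @x
        out push best
      end
      -> out
    end
    """

If a model can produce something like the second block from nothing but the language description and its own Python solution, that is recognising a pattern and applying it to a new context under the definition above.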

If they were actually reasoning, tests like the GP's would show it. They don't connect dots; they can be prompted to select different pathways through their static model, and that selection can be based on a pretty small context, but nothing about that model changes. Tomorrow's conversation is only different based on rand(). LLMs have a very large static model, and confusing that with reasoning is fairly common but still incorrect.

This is not valid logic. If they are reasoning, tests like the GP's might show it. Failing the test, however, can have many other causes: they might simply not be good enough at reasoning, or they might be failing because they see tokens and have had too little training to connect those tokens to both spelling and sounds in a way that generalizes.

I'd be willing to bet a whole lot of humans would fail that test too, because a lot of people are really bad at applying a rule without practicing on examples first, and so often struggle to take feedback without examples. If they failed, would you claim they can't reason?

Your claim to know that LLMs are not reasoning is not based in fact, but on speculation that, to me, is itself not based in reasoning. Should I question your ability to reason because I don't think you've done so in this argument?

And so, what do you think reasoning is? Or how would you know if something can reason or not?
