I do think you have a point that the lack of working memory is a severe constraint, but I also think you are wrong that these models will remain a user interface to a static model rather than being given the ability to add working memory, form long-term memories, and reason with them.
I also think it's an entirely open question whether they are reasoning under any reasonable definition, in part because we don't have one. Any claim that they don't reason ironically comes from a lack of reasoning about the high degree of uncertainty and ambiguity around what reasoning means and how to measure it.
One worthwhile definition would be the ability to recognise patterns in knowledge and apply them to new contexts to generate new knowledge. There is none of that kind of processing happening, however believable some of the output sometimes is.
E.g. the ability to solve a problem in code and then translate the solution into a newly made-up programming language described to it would easily qualify for me.
And this is a task a whole lot of humans would be unable to carry out.
I'd be willing to bet a whole lot of humans would fail that test too, because many people are really bad at applying a rule without first practising on examples, and so often struggle to act on feedback that doesn't come with examples. If they failed, would you claim they can't reason?
Your claim to know that LLMs are not reasoning is not based in fact, but on speculation that to me is itself not based in reasoning. Should I question your ability to reason because I don't think you've done so in this argument?
The point, I imagine, is that there is no reasoning going on at all. Of course some humans sometimes struggle with some reasoning; that is completely irrelevant to whether LLMs reason.
Picking the word sequences most likely to be acceptable according to a static model trained months ago is not reasoning. No model is being constructed on the fly, and no patterns are being recognised and extrapolated.
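To make the "static model" objection concrete, here is a deliberately crude sketch in Python. This is a toy bigram table, not how a transformer actually works internally, and every name and probability in it is made up for illustration; the point is only that the weights are frozen at "training" time and generation just walks them, with nothing learned or updated along the way.

```python
import random

# Hypothetical frozen model: current word -> candidate next words with weights.
# Fixed before generation ever starts, and never modified afterwards.
STATIC_MODEL = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start, steps, seed=0):
    """Sample a word sequence from the frozen table.

    The table is only ever read, never written: the same seed always
    yields the same sequence, and no new "knowledge" can appear.
    """
    rng = random.Random(seed)
    out = [start]
    for _ in range(steps):
        dist = STATIC_MODEL.get(out[-1])
        if not dist:  # dead end: the frozen table has nothing to say
            break
        words, weights = zip(*dist.items())
        out.append(rng.choices(words, weights=weights)[0])
    return out
```

Whether scaling this picture up by billions of parameters produces something deserving the word "reasoning" is exactly the question under dispute; the sketch only shows what "static" means in the objection.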
Useful things are possible with that, of course, but these models will never offer more than a nice user interface to a static model. They don't reason.