
Yeah, if an LLM were truly capable of reasoning, then whenever it makes a mistake, e.g. due to randomness or a lack of knowledge, pointing out the mistake and giving steps to correct it should result in basically a 100% success rate, since the person assisting it has effectively unlimited capacity to accommodate the LLM's weaknesses.

When you look at things like https://arxiv.org/abs/2408.06195 you notice that the number of tokens needed to solve trivial tasks is somewhat ridiculous: on the order of 300k tokens for a simple grade-school problem. That is roughly three hours at a rate of 30 tokens/s, and you could fill 400 pages of a book with that many tokens.
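A rough back-of-the-envelope check of those numbers, assuming ~0.75 words per token and a dense ~550-word printed page (both assumptions, not figures from the paper):

```python
# Back-of-the-envelope check of the token budget claim.
# Assumptions (not from the paper): ~0.75 words per token,
# ~550 words per printed page, 30 tokens/s generation speed.

tokens = 300_000          # rough token count for a single grade-school problem
tokens_per_second = 30    # assumed generation rate

hours = tokens / tokens_per_second / 3600   # -> about 2.8 hours
words = tokens * 0.75                        # -> about 225,000 words
pages = words / 550                          # -> roughly 400 pages

print(f"{hours:.1f} hours, ~{pages:.0f} pages")
```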

