It's hallucination only in the sense that all LLM output is hallucination. CoT is not "what the LLM is thinking." I think of it as the model creating more context/prompt for itself on the fly, so that when it produces its final response, all of that reasoning is in its context window.

Exactly: whether or not it's the "actual thought" of the model, it does influence its final output, so it matters to the user.
> it does influence its final output

We don't really know that. So far, CoT has only been used to sell LLMs to the user. (Both figuratively, as a neat trick, and literally, as a way to increase token count.)

Not even remotely true. It's part of the context window, so it greatly influences the final output; CoT tokens are generated by the LLM just like the normal output.
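
For what it's worth, here is a rough sketch of that mechanism, assuming a generic text-completion API; the llm_generate helper below is a hypothetical stand-in, not any particular library's call. Whether the model does this in a single autoregressive pass (as reasoning models do) or in two separate calls as sketched here, the effect is the same: the reasoning tokens end up in the context the final answer is sampled from.

    def llm_generate(prompt: str) -> str:
        """Hypothetical placeholder: send `prompt` to a model and return its completion."""
        raise NotImplementedError

    def answer_with_cot(question: str) -> str:
        # Step 1: ask the model to "think out loud". These reasoning tokens
        # are ordinary sampled output, not a window into internal activations.
        reasoning = llm_generate(
            f"Question: {question}\n"
            "Think step by step, but do not give the final answer yet."
        )

        # Step 2: feed the reasoning back in as part of the prompt, so every
        # token of the final answer is conditioned on it. This is the sense
        # in which CoT influences the final output.
        return llm_generate(
            f"Question: {question}\n"
            f"Reasoning so far:\n{reasoning}\n"
            "Now give only the final answer."
        )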
