As an end user, I’d like to be able to do more to shape LLM behavior. For example, I’d like to flag dead-end paths so they’re properly dropped out of context and not explored again, unless I as a user clear the flag(s).
I know there is work being done on LLM “memory”, for lack of a better term, but I have yet to see models get more responsive over time with this kind of feedback. I know I can flag it, but right now it doesn’t help my “running” context that would be unique to me.
I have a similar thought about LLM “membranes”, which combine the learning from multiple users to become more useful. I am keeping a keen eye on that, as I think it will make them more useful at an organizational level.
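The dead-end flagging described above could be sketched as a thin layer over the message history. This is purely hypothetical — no current chat client exposes such an API — and all names (`Turn`, `Conversation`, `dead_end`) are made up for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    role: str              # "user" or "assistant"
    content: str
    dead_end: bool = False  # user-set flag marking a path not to revisit

@dataclass
class Conversation:
    turns: list[Turn] = field(default_factory=list)

    def flag(self, index: int) -> None:
        """Mark a turn as a dead end so it is dropped from context."""
        self.turns[index].dead_end = True

    def clear_flag(self, index: int) -> None:
        """Re-admit a previously flagged turn, per the user's wish above."""
        self.turns[index].dead_end = False

    def context(self) -> list[dict]:
        """Build the context sent to the model; flagged turns are omitted."""
        return [{"role": t.role, "content": t.content}
                for t in self.turns if not t.dead_end]
```

The point of the sketch: the flag lives with the user's conversation state, not the model, so the dead end stays out of every future request until the user clears it.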
nomel
Any good chat client will let you not only modify previous messages in place, but also modify the LLM responses, and regenerate from any point.
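Mechanically, "regenerate from any point" is just truncating and editing the message list before resending it. A minimal sketch, assuming an OpenAI-style list of role/content dicts (`edit_and_truncate` is a hypothetical helper, not part of any real client):

```python
def edit_and_truncate(messages: list[dict], index: int, new_content: str) -> list[dict]:
    """Replace the message at `index` (user or assistant) and drop
    everything after it, so a fresh completion can be requested from
    the edited point. Returns a new list; the original is untouched."""
    edited = messages[:index + 1]                      # keep history up to the edit
    edited[index] = {**messages[index], "content": new_content}
    return edited
```

The truncated list would then be passed back to the model API; everything downstream of the edit is regenerated rather than patched.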
xwolfi
At some point, shouldn't these things start understanding what they're doing?