balder1991:
No, because an agent doesn't learn; it's just continuing a story. A kid will learn from the experience and by the end will be a different person.
You just haven't added the right tools together with the right system/developer prompt. Add an `add_memory` tool and a `list_memory` tool (or automatically inject the right memories alongside the right prompts/LLM responses) and you have something that can learn.
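A minimal sketch of what those two tools could look like, assuming a framework that exposes plain Python functions as tools (all names here are illustrative):

```python
# Illustrative memory tools backed by a simple in-process list.
memories: list[str] = []

def add_memory(note: str) -> str:
    """Store a lesson so it survives beyond the current conversation."""
    memories.append(note)
    return f"Stored memory #{len(memories)}"

def list_memory() -> list[str]:
    """Return every stored lesson, e.g. for the model to review."""
    return list(memories)

def build_system_prompt(base: str) -> str:
    """Inject remembered lessons so the next run starts out knowing them."""
    if not memories:
        return base
    lessons = "\n".join(f"- {m}" for m in memories)
    return f"{base}\n\nLessons from past sessions:\n{lessons}"
```

In practice you'd swap the list for a database or a vector store, but the loop is the same: the model calls `add_memory` when it gets corrected, and the stored notes get injected into future prompts.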
You can also take it a step further and add automatic fine-tuning once you start gathering a ton of data, which will rewire the model somewhat.
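For example, corrections you've logged could be exported into the chat-style JSONL format that several fine-tuning APIs accept (a sketch; the sample data and the exact format are assumptions):

```python
import json

# Hypothetical logged pairs of (user prompt, corrected assistant reply),
# gathered from sessions where a human fixed the model's mistake.
logged = [
    ("Parse this date: 2024-13-01",
     "That month is invalid; did you mean 2024-01-13?"),
]

with open("finetune.jsonl", "w") as f:
    for prompt, corrected in logged:
        # One training example per line.
        f.write(json.dumps({
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": corrected},
            ]
        }) + "\n")
```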
Perhaps it can improve, but it can't learn, because that requires thought. Would you say that a PID regulator can "learn"?
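For reference, the core of a PID regulator is only a few lines; it adapts its output to the error signal, but nothing in it changes with experience, since the gains are fixed constants. A toy sketch:

```python
def pid_step(error: float, prev_error: float, integral: float,
             kp: float = 1.0, ki: float = 0.1, kd: float = 0.05,
             dt: float = 0.1) -> tuple[float, float]:
    """One PID update: the output responds to the error, but the gains
    (kp, ki, kd) never change, no matter how many errors it sees."""
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, integral
```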
I guess it depends on what you understand "learn" to mean.
But in my mind, if I tell the LLM to do something and it does it wrong, I ask it to fix it, and if in the future I ask for the same thing and it avoids the mistake it made the first time, then I'd say it has learned to avoid that pitfall. I know very well it hasn't "learned" like a human would; I just added the correction to the right place. But for all intents and purposes, it "learned" how to avoid the same mistake.