Yes, but the agent reasoning is going to use an LLM; I sometimes run our openai_chat_agent example just to test things out. Try giving it a shot: ask it to do something, then ask it to explain its tool use.
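For concreteness, here's a minimal sketch of that experiment written directly against the OpenAI chat completions API rather than the openai_chat_agent example itself (the `get_weather` tool, the stubbed tool result, and the model name are all placeholders I picked for illustration):

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical tool for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Oslo?"}]

# First turn: the model may decide to call the tool.
first = client.chat.completions.create(
    model="gpt-4o", messages=messages, tools=tools
)
print(first.choices[0].message.tool_calls)

messages.append(first.choices[0].message)
if first.choices[0].message.tool_calls:
    # Stubbed tool result so the conversation stays well-formed.
    messages.append({
        "role": "tool",
        "tool_call_id": first.choices[0].message.tool_calls[0].id,
        "content": '{"temp_c": 7, "conditions": "rain"}',
    })

# Follow-up turn: ask the model to explain its tool use. The answer is
# a post-hoc rationalization generated by the LLM, not a trace of its
# actual decision process -- it can sound plausible and still be wrong.
messages.append({"role": "user", "content": "Why did you use that tool?"})

second = client.chat.completions.create(
    model="gpt-4o", messages=messages, tools=tools
)
print(second.choices[0].message.content)
```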
Obviously, it can (and sometimes will) hallucinate and make up why it's using a tool. The thing is, we don't really have true LLM explainability, so this is the best we can do.