Preferences

_Algernon_
This isn't, strictly speaking, true. An agent is merely something that acts (on its environment). Simple reflex agents (e.g. a basic robot vacuum with only reflexive collision detection) are also agents, though they don't attempt to maximize a utility function.

Ref: Artificial Intelligence - A Modern Approach.
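The distinction can be sketched in a few lines (illustrative only; the function names and the vacuum rule are made up for this example, not taken from AIMA's code): a simple reflex agent maps the current percept straight to an action via condition-action rules, while a utility-based agent chooses the action that maximizes some utility function.

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent: condition-action rules only, no utility anywhere."""
    location, is_dirty = percept
    if is_dirty:
        return "suck"
    return "right" if location == "A" else "left"


def utility_based_agent(percept, actions, utility):
    """Utility-based agent: pick the action whose predicted outcome
    maximizes the given utility function."""
    return max(actions, key=lambda a: utility(percept, a))


# The reflex agent just reacts; the utility agent deliberates over options.
print(reflex_vacuum_agent(("A", True)))  # suck
print(utility_based_agent(("A", True),
                          ["suck", "right", "left"],
                          lambda p, a: 1.0 if (a == "suck" and p[1]) else 0.0))
```

Both are agents in AIMA's sense; only the second one involves a utility function at all.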


Thanks to your comment I came across this article, which I think explains agents quite well. Some differences seem artificial, but it gets the point across.

Were you thinking along these lines?

https://medium.com/@tahirbalarabe2/five-types-of-ai-agents-e...

_Algernon_ OP
Yes. This is in essence the same taxonomy used in A Modern Approach.
QuadmasterXLII
"Agent" in the context of LLMs has always been pretty closely intertwined with advertising how dangerous they are (exciting!), as opposed to connecting to earlier research on reflexes. The first viral LLM agent, AutoGPT, had the breathless " (skull and crossbones emoji) Continuous Mode Run the AI without user authorisation, 100% automated. Continuous mode is not recommended. It is potentially dangerous and may cause your AI to run forever or carry out actions you would not usually authorise. Use at your own risk. (Warning emoji)" in its readme within a week of going live, and was forked into ChaosGPT a week later with the explicit goal of going rogue and killing everyone
_Algernon_ OP
I'm responding to this claim:

>Agent originally meant an ai that made decisions to optimize some utility function.

That's not what "agent" originally referred to, and I don't understand how your circling back to LLMs is relevant to the original definition of the term.
