QuadmasterXLII
Agent originally meant an AI that made decisions to optimize some utility function. This was seen as a problem: we don't know how to pick a good utility function, or even how to point an AI at a specific utility function, so any agent that was smarter than us was as likely as not to turn us all into paperclips, carve smiles into our faces, or bring about some other grim outcome.
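
(A toy sketch of that original sense, purely illustrative and not any particular system's code:)

    # Classic "rational agent": choose the action with the highest expected
    # utility. Every name here is made up for illustration.
    def choose_action(state, actions, transition_model, utility):
        def expected_utility(action):
            # transition_model yields (next_state, probability) pairs
            return sum(p * utility(s) for s, p in transition_model(state, action))
        return max(actions, key=expected_utility)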

With LLMs, this went through two phases of shittification: first, there was a window where the safety people were hopeful about LLMs because they weren't agents, so everyone and their mother declared that they would create an agent out of an LLM, explicitly because they had heard it was dangerous.

This pleased the VCs.

Second, what they built failed to satisfy the original definition, so they changed the definition of agent to the thing that they had made and declared victory. This pleased the VCs.


adastra22
"Agent" is a word with meaning that predates the LessWrong crowd. It is just an AI tool that performs actions to achieve its goal. That is all.
QuadmasterXLII OP
It had a meaning that predated the LessWrong crowd, but the LessWrong meaning had taken over pretty completely as of the GPT-4 paper, only to get swamped again by the new "agentic is good actually" wave. From the GPT-4 paper:

""" 2.9 Potential for Risky Emergent Behaviors Novel capabilities often emerge in more powerful models.[61, 62] Some that are particularly concerning are the ability to create and act on long-term plans,[63] to accrue power and resources (“power- seeking”),[64] and to exhibit behavior that is increasingly “agentic.”[65] Agentic in this context does not intend to humanize language models or refer to sentience but rather refers to systems characterized by ability to, e.g., accomplish goals which may not have been concretely specified and 54 which have not appeared in training; focus on achieving specific, quantifiable objectives; and do long-term planning. Some evidence already exists of such emergent behavior in models.[66, 67, 65] For most possible objectives, the best plans involve auxiliary power-seeking actions because this is inherently useful for furthering the objectives and avoiding changes or threats to them.19[68, 69] More specifically, power-seeking is optimal for most reward functions and many types of agents;[70, 71, 72] and there is evidence that existing models can identify power-seeking as an instrumentally useful strategy.[29] We are thus particularly interested in evaluating power-seeking behavior due to the high risks it could present.[73, 74]"""

adastra22
Maybe in some communities? Agent has been a standard term of art in computer science (even outside of AI) for half a century.
seba_dos1
Who remembers what Microsoft Agent was?

(many probably know it, but not necessarily under this name)

adastra22
Clippy?
seba_dos1
Yes - the technology behind such glorious things as Office Assistants, Windows Search Assistants or BonziBuddy. Included in Windows 2000 up to Vista, with roots in Microsoft Bob.
_Algernon_
This isn't, strictly speaking, true. An agent is merely something that acts (on its environment). A simple reflex agent (e.g. a basic robot vacuum with only reflexive collision detection) is also an agent, though it doesn't, strictly speaking, attempt to maximize a utility function (sketch below).

Ref: Artificial Intelligence - A Modern Approach.
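
(A minimal sketch of the contrast, with made-up names loosely following that taxonomy — a reflex agent maps percepts straight to actions, and no utility function appears anywhere:)

    # Simple reflex agent: condition-action rules only; no model of the world,
    # no utility function, no planning. The percept format is invented.
    def reflex_vacuum_agent(percept):
        location, bumped = percept
        if bumped:
            return "turn"     # reflexive collision response
        return "forward"      # otherwise just keep moving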

Thanks to your comment I came across this article, which I think explains agents quite well. Some differences seem artificial, but it gets the point across.

Were you thinking along these lines?

https://medium.com/@tahirbalarabe2/five-types-of-ai-agents-e...

_Algernon_
Yes. This is in essence the same taxonomy used in A Modern Approach.
QuadmasterXLII OP
"Agent" in the context of LLMs has always been pretty closely intertwined with advertising how dangerous they are (exciting!), as opposed to connecting to earlier research on reflexes. The first viral LLM agent, AutoGPT, had the breathless " (skull and crossbones emoji) Continuous Mode Run the AI without user authorisation, 100% automated. Continuous mode is not recommended. It is potentially dangerous and may cause your AI to run forever or carry out actions you would not usually authorise. Use at your own risk. (Warning emoji)" in its readme within a week of going live, and was forked into ChaosGPT a week later with the explicit goal of going rogue and killing everyone
_Algernon_
I'm responding to this claim:

>Agent originally meant an ai that made decisions to optimize some utility function.

That's not what agents originally referred to, and I don't understand how your circling back to LLMs is relevant to the original definition of agent.

Mordisquitos
In other words, VC-backed tech companies decided to weaken the definition of 'Torment Nexus' after they failed to create the Torment Nexus inspired by the classic sci-fi novel 'Don't Create the Torment Nexus'.
