""" 2.9 Potential for Risky Emergent Behaviors Novel capabilities often emerge in more powerful models.[61, 62] Some that are particularly concerning are the ability to create and act on long-term plans,[63] to accrue power and resources (“power- seeking”),[64] and to exhibit behavior that is increasingly “agentic.”[65] Agentic in this context does not intend to humanize language models or refer to sentience but rather refers to systems characterized by ability to, e.g., accomplish goals which may not have been concretely specified and 54 which have not appeared in training; focus on achieving specific, quantifiable objectives; and do long-term planning. Some evidence already exists of such emergent behavior in models.[66, 67, 65] For most possible objectives, the best plans involve auxiliary power-seeking actions because this is inherently useful for furthering the objectives and avoiding changes or threats to them.19[68, 69] More specifically, power-seeking is optimal for most reward functions and many types of agents;[70, 71, 72] and there is evidence that existing models can identify power-seeking as an instrumentally useful strategy.[29] We are thus particularly interested in evaluating power-seeking behavior due to the high risks it could present.[73, 74]"""
Ref: Artificial Intelligence - A Modern Approach.
Were you thinking along these lines?
https://medium.com/@tahirbalarabe2/five-types-of-ai-agents-e...
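For concreteness, the textbook sense of "agent" there is a thing embedded in a sense-act loop: a function from percepts (or percept histories) to actions. A minimal sketch in Python of the simplest of those five types, a simple reflex agent in the vacuum world; the setup is AIMA's, but the names and code here are my own illustration, not from the book:

    # Simple reflex agent: the action depends only on the current percept.
    # Percepts are (location, dirty) pairs from a two-square vacuum world.
    class SimpleReflexVacuum:
        def act(self, percept):
            location, dirty = percept
            if dirty:
                return "suck"
            return "move_right" if location == "A" else "move_left"

    # Environment loop: feed percepts to the agent, collect its actions.
    def run(agent, percepts):
        return [agent.act(p) for p in percepts]

    print(run(SimpleReflexVacuum(), [("A", True), ("A", False), ("B", True)]))
    # -> ['suck', 'move_right', 'suck']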
With LLMs, this went through two phases of shittification: first, there was a window where the safety people were hopeful about LLMs because they weren’t agents, so everyone and their mother declared that they would create an agent out of an LLM explicitly because they heard it was dangerous.
This pleased the VCs.
Second, they failed to satisfy the original definition, so they changed the definition of “agent” to the thing they had made and declared victory. This pleased the VCs.
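By contrast, what shipped under the same word is, schematically, a while-loop around a model call with tool dispatch. A hedged sketch of that pattern, where call_llm and TOOLS are hypothetical stand-ins and not any real vendor's API:

    # Schematic of the LLM-era "agent": loop until the model says it's done.
    # call_llm is a stand-in; real frameworks get this from an API call.
    def call_llm(history):
        if any(m["role"] == "tool" for m in history):
            return {"action": "finish", "output": history[-1]["content"]}
        return {"action": "search", "input": history[0]["content"]}

    TOOLS = {"search": lambda q: "top result for: " + q}

    def agent_loop(goal, max_steps=10):
        history = [{"role": "user", "content": goal}]
        for _ in range(max_steps):
            step = call_llm(history)
            if step["action"] == "finish":
                return step["output"]
            # Dispatch the requested tool and feed the result back in.
            history.append({"role": "tool", "content": TOOLS[step["action"]](step["input"])})
        return None

    print(agent_loop("what is an agent?"))

The loop senses and acts after a fashion, but the "agent function" lives entirely in the prompt, which is exactly the gap between the two definitions being argued over above.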