For instance, you could have an "agent" that can read and edit files on your computer by adding something like "to read a file, issue the command `read_file $path`" to your prompt. Whenever a line of LLM output starting with `read_file` completes, the script running on your computer reads that file, pastes its contents into the prompt, and lets the LLM continue its autocomplete-on-steroids.
If you write enough tools and a complicated enough prompt, you end up with an LLM that can do stuff. By default, smart tools usually require user confirmation before actually doing anything, but if you run the LLM in full agent mode, you're trusting the LLM not to do anything it shouldn't. curl2bash with LLMs, basically.
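A minimal sketch of what that tool handler can look like, confirmation step included. The `read_file` syntax is the made-up one from above, not any particular vendor's API:

# Hedged sketch: scan model output for the made-up `read_file` tool
# syntax, ask the user before touching the filesystem, and splice the
# file contents back into the prompt for the next model call.
from pathlib import Path

def handle_tools(output, confirm=True):
    parts = []
    for line in output.splitlines():
        if line.startswith("read_file "):
            path = line[len("read_file "):].strip()
            if confirm and input(f"Allow read of {path}? [y/N] ") != "y":
                parts.append(f"read_file {path} -> denied by user")
                continue
            parts.append(f"read_file {path} ->\n{Path(path).read_text()}")
        else:
            parts.append(line)
    return "\n".join(parts)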
An LLM with significant training and access to the filesystem, HTTP(S) APIs, and some OS APIs can do a lot of work for you if you prompt it right. My experience with Claude/Copilot/etc. is that 75% of the time, the LLM fails to do what it should unless I manually repair its mistakes, but the other 25% of the time it does look rather sci-fi-ish.
With some tools you can tell your computer "take this directory, examine the EXIF data of each image, map the coordinates to the country and nearest town the picture was taken in, then make directories for each town and move the pictures to their corresponding locations". The LLM will type out shell commands (`ls /some/directory`), interpret the results your computer sends back as part of the prompt, and repeat that until its task has been completed. If you prepare a specific prompt and set of tools for the purpose of managing files, you could call that a "file management agent".
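For comparison, here's roughly what that task looks like written out by hand. The GPS extraction uses Pillow's real EXIF API; `coords_to_town()` is a hypothetical stand-in for whatever reverse-geocoding library or service you'd actually plug in:

# Sketch of the EXIF-sorting task done directly, rather than letting
# an agent drive it through shell commands.
import shutil
from pathlib import Path
from PIL import Image

def gps_coords(path):
    gps = Image.open(path).getexif().get_ifd(0x8825)  # GPSInfo IFD
    if not gps:
        return None
    def to_deg(vals, ref):
        d, m, s = (float(v) for v in vals)
        deg = d + m / 60 + s / 3600
        return -deg if ref in ("S", "W") else deg
    return to_deg(gps[2], gps[1]), to_deg(gps[4], gps[3])

def coords_to_town(lat, lon):
    # hypothetical: swap in a real reverse-geocoding library or API here
    raise NotImplementedError

for img in Path("/some/directory").glob("*.jpg"):
    coords = gps_coords(img)
    if coords is None:
        continue
    town = coords_to_town(*coords)
    dest = img.parent / town
    dest.mkdir(exist_ok=True)
    shutil.move(str(img), str(dest / img.name))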
Generally, this works best for things you could do by hand in a couple of minutes, or maybe an hour if it's a big set of images, but that the computer can now probably take care of for you. That said, you're basically spending enough CO2 to drive to the store and back, so until we get more energy-efficient data centers I'm not too fond of using these tools for banal interactions like that.
You can chain agents together to accomplish larger tasks.
Think of everything involved in booking travel. You have to set a budget, pick dates, choose a destination, etc. Each step can be defined as an agent, and then you chain them together into a tool that handles the entire task for you.
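In sketch form (reusing the `call_llm()` placeholder from the loop below; this is not any specific framework's API), chaining is just feeding one agent's output into the next agent's prompt:

# Hedged sketch: an "agent" here is just a role prompt plus the model
# call; a chain passes each step's output into the next step's prompt.
def run_agent(role_prompt, task):
    return call_llm(role_prompt + "\n\n" + task)  # plus a tool loop, in practice

request = "Book me a week in Lisbon in May."
budget  = run_agent("You set a realistic travel budget.", request)
dates   = run_agent("You pick concrete travel dates.", request + "\n" + budget)
booking = run_agent("You find flights and hotels.", request + "\n" + budget + "\n" + dates)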
prompt = user_input()
while prompt != "exit":
    # run the model, execute any tool calls it made, feed the results back in
    prompt = replace_tool_calls_with_results(call_llm(prompt))
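One way to flesh out `replace_tool_calls_with_results()` for a shell tool. A sketch: the `run_shell` syntax is made up, and `call_llm()` still stands in for whatever model API you use:

import subprocess

def replace_tool_calls_with_results(output):
    parts = []
    for line in output.splitlines():
        if line.startswith("run_shell "):
            cmd = line[len("run_shell "):]
            done = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            # replace the tool call with the command's output, shell-transcript style
            parts.append(f"$ {cmd}\n{done.stdout}{done.stderr}")
        else:
            parts.append(line)
    return "\n".join(parts)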
With LLMs, this went through two phases of shittification: first, there was a window where the safety people were hopeful about LLMs because they weren't agents, so everyone and their mother declared that they would create an agent out of an LLM explicitly because they heard it was dangerous.
This pleased the VCs.
Second, they failed to satisfy the original definition, so they changed the definition of "agent" to the thing they had made and declared victory. This pleased the VCs.
""" 2.9 Potential for Risky Emergent Behaviors Novel capabilities often emerge in more powerful models.[61, 62] Some that are particularly concerning are the ability to create and act on long-term plans,[63] to accrue power and resources (“power- seeking”),[64] and to exhibit behavior that is increasingly “agentic.”[65] Agentic in this context does not intend to humanize language models or refer to sentience but rather refers to systems characterized by ability to, e.g., accomplish goals which may not have been concretely specified and 54 which have not appeared in training; focus on achieving specific, quantifiable objectives; and do long-term planning. Some evidence already exists of such emergent behavior in models.[66, 67, 65] For most possible objectives, the best plans involve auxiliary power-seeking actions because this is inherently useful for furthering the objectives and avoiding changes or threats to them.19[68, 69] More specifically, power-seeking is optimal for most reward functions and many types of agents;[70, 71, 72] and there is evidence that existing models can identify power-seeking as an instrumentally useful strategy.[29] We are thus particularly interested in evaluating power-seeking behavior due to the high risks it could present.[73, 74]"""
(many probably know it, but not necessarily under this name)
Ref: Artificial Intelligence - A Modern Approach.
Were you thinking along these lines?
https://medium.com/@tahirbalarabe2/five-types-of-ai-agents-e...
Is, for example, Google's crawl bot an agent?
Is there a prominent successful agent that I could test myself?
So many questions…