The Dawn of AI Agents

What are Artificial Intelligence Agents, and how can we benefit from them?

Diego Lopez Yse
11 min read · Mar 22, 2024
Photo by PIOTR BENE on Unsplash

Imagine a world where digital entities handle all types of tasks according to our needs, freeing up valuable time so we can pursue our interests and goals.

Artificial Intelligence Agents (AI Agents) represent digital entities that evaluate their environment, learn from their interactions, and make decisions to accomplish particular objectives. These entities are capable of executing tasks, comprehending the context, adjusting their strategies, and even developing new approaches to achieve their goals.

Today, the rise of AI Agents is fueled by the development of Large Language Models (LLMs), which help AI Agents interact and engage with their surroundings contextually.

What are Large Language Models (LLMs)?

LLMs are systems that take the context of an input (e.g., a text corpus) and predict the next output (e.g., the next word). They are designed to understand and generate text and other forms of content (images, audio, video) in a human-like way, based on the vast amounts of data used to train them.

General workflow of an LLM predicting the next word. The model selects the most likely word and appends it to the input sequence. Source: NVIDIA
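To make this loop concrete, here is a minimal sketch of greedy next-token prediction using the open GPT-2 model via the Hugging Face transformers library; the prompt text and the ten-token horizon are arbitrary choices for illustration:

```python
# Minimal next-token prediction loop with GPT-2 (illustrative; production
# systems use more sophisticated sampling than greedy argmax decoding).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "AI agents perceive their environment and"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(10):  # extend the sequence by ten tokens
    with torch.no_grad():
        logits = model(input_ids).logits    # scores for every vocabulary token
    next_id = logits[0, -1].argmax()        # greedy: take the most likely token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Each pass through the loop feeds the growing sequence back into the model, which is exactly the workflow pictured above.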

After being trained on massive amounts of data, LLMs begin to exhibit emergent abilities, like generating coherent and contextually relevant responses, translating between languages, summarizing text, answering questions, and even assisting with creative writing or code generation.

LLMs show remarkable abilities to solve new tasks from just a few examples or instructions, especially at scale. However, they often struggle with simple operations like arithmetic or fact-checking, areas where much simpler, specialized tools perform reliably.

But what if we could think about LLMs differently? Rather than treating LLMs as ends in themselves, we can envision them as components within larger, more sophisticated systems capable of solving more complex tasks. This shift in perspective paves the way for the concept of AI Agents.

AI Agents

Today, interest in AI Agents has surged because LLMs have become highly capable and multimodal. AI Agents not only provide solutions or advice on human-presented problems but can also take action to resolve them.

An AI Agent is an autonomous entity capable of perceiving its surroundings and processing and reasoning over the acquired information. It can communicate and collaborate with humans or other agents to accomplish specific tasks, thereby acting on its external environment.

The anatomy of AI Agents

In a sense, AI Agents can be considered the product of thought and action. The structure of an AI Agent can be broadly defined through the following components, sketched in code below the list:

  • Profile: AI Agents typically perform tasks by assuming specific roles, such as coders, teachers, and domain experts. Their profiles typically encompass basic information such as age, gender, and career, as well as psychological information reflecting the AI Agent’s personality and social information detailing its relationships with other AI Agents.
  • Memory: stores information perceived from the environment and leverages the recorded memories to facilitate future actions. The memory component can help the AI Agent accumulate experiences, self-evolve, and behave more consistently, reasonably, and effectively. Short-term memory temporarily buffers recent perceptions, while long-term memory consolidates important information over time.
  • Planning: when faced with a complex task, humans tend to deconstruct it into simpler subtasks and solve them individually. The planning component aims to empower AI Agents with such human capability, which is expected to make the AI Agent behave more reasonably, powerfully, and reliably.
  • Action: This component is responsible for translating the AI Agent’s decisions into specific outcomes and is located at the most downstream position, directly interacting with the environment. It is influenced by the profile, memory, and planning components.
To bridge the gap between traditional LLMs and AI Agents, a crucial aspect is to design rational AI Agent architectures to assist LLMs in maximizing their capabilities. Source: A Survey on Large Language Model based Autonomous Agents
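To make this anatomy concrete, here is a hypothetical Python skeleton of the four components; all class and method names are invented for illustration and do not come from any particular framework:

```python
# Hypothetical skeleton of an AI Agent's four components (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Profile:
    role: str                                         # e.g., "coder", "teacher"
    traits: list[str] = field(default_factory=list)   # psychological information

@dataclass
class Memory:
    short_term: list[str] = field(default_factory=list)  # recent perceptions
    long_term: list[str] = field(default_factory=list)   # consolidated knowledge

    def remember(self, observation: str) -> None:
        self.short_term.append(observation)
        if len(self.short_term) > 10:           # crude consolidation policy
            self.long_term.append(self.short_term.pop(0))

class Agent:
    def __init__(self, profile: Profile):
        self.profile = profile
        self.memory = Memory()

    def plan(self, task: str) -> list[str]:
        # Planning: decompose a complex task into subtasks. A real agent would
        # ask an LLM to do the decomposition; this stub just fakes two steps.
        return [f"step 1 of {task}", f"step 2 of {task}"]

    def act(self, step: str) -> str:
        # Action: the most downstream component, touching the environment.
        self.memory.remember(step)
        return f"executed: {step}"
```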

AI Agents operate in environments that can be physical, virtual, or a mixture of both. Interaction mechanisms can even be designed to let AI Agents collaborate with humans on challenging tasks in complex real-world settings, or to explore unseen environments in virtual reality.

Types of AI Agents

There are multiple ways of designing AI Agents, but we can define the main ones as follows (a toy contrast of the first two types is sketched after the list):

  • Simple Reflex Agents operate based on condition-action rules, reacting directly to their immediate perceptions without an internal world model. They are effective and efficient in environments where the next action is entirely dependent on the current percept. However, their lack of complexity restricts their usefulness in unstructured, intricate environments.
  • Model-Based Reflex Agents maintain an internal world model, enabling them to monitor parts of the environment not immediately visible. This model aids them in handling partially observable environments by deducing missing data. Their decisions are influenced by both their current percept and internal model, enhancing their adaptability.
  • Goal-Based Agents consider the future implications of their actions. They set goals and make decisions based on the probability of certain actions achieving these goals. Their ability to foresee allows them to plan and select actions that lead to desired results, making them suitable for complex decision-making tasks.
  • Utility-Based Agents evaluate the desirability of various states using a utility function. They aim to accomplish a goal and optimize their performance according to a specific utility measure. This method is useful in situations with multiple potential actions or results, where the AI Agent must determine the optimal path based on preferences.
  • Learning Agents enhance their performance over time through experience. They are especially beneficial in dynamic environments where they can adapt and evolve their strategies. For example, a learning AI Agent might continually improve its understanding of customer preferences to optimize ad placements.
  • Multi-Agent Systems represent several AI Agents that interact and pursue either shared or individual goals. This design is used to solve complex tasks that require multiple AI Agents to cooperate and coordinate.
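To illustrate the difference between the first two types, here is a toy thermostat example; the 20-degree threshold and all names are invented for illustration:

```python
# A simple reflex agent reacts only to the current percept.
def simple_reflex_agent(temperature: float) -> str:
    return "heat_on" if temperature < 20.0 else "heat_off"

# A model-based reflex agent keeps an internal model, so it can still act
# sensibly when the environment is only partially observable.
class ModelBasedReflexAgent:
    def __init__(self) -> None:
        self.last_known_temp: float | None = None   # internal world model

    def act(self, temperature: float | None) -> str:
        if temperature is not None:
            self.last_known_temp = temperature      # update the model
        belief = self.last_known_temp if self.last_known_temp is not None else 20.0
        return "heat_on" if belief < 20.0 else "heat_off"

agent = ModelBasedReflexAgent()
print(agent.act(18.5))   # heat_on, from a direct percept
print(agent.act(None))   # heat_on, deduced from the internal model
```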

Within the Multi-Agent Systems design, we can even define how AI Agents should interact. In Cooperative Multi-Agent Systems, individual AI Agents assess the needs and capabilities of other AI Agents and seek collaborative actions and information sharing with them. This approach can increase task efficiency, improve collective decisions, and solve complex real-world problems that a single AI Agent cannot solve independently, ultimately achieving synergistic complementarity. This type of cooperation can be:

  • Disordered: where AI Agents are free to express their perspectives and opinions openly, providing feedback and suggestions for modifying responses related to the task at hand in an uncontrolled and non-sequential way.
  • Ordered: where AI Agents in the system adhere to specific rules, like expressing their opinions in a sequential manner, allowing downstream AI Agents to focus on the outputs from upstream ones (this pipeline pattern is sketched in code below).

Alternatively, in Adversarial interactions, AI Agents can swiftly adjust strategies through dynamic interactions, striving to select the most advantageous or rational actions in response to changes caused by other AI Agents. This way, it is possible to foster change among AI Agents through competition, argumentation, and debate. By abandoning rigid beliefs and engaging in thoughtful reflection, adversarial interaction enhances the quality of responses.

In cooperative interaction, AI Agents collaborate in either a disordered or ordered manner to achieve shared objectives. In adversarial interaction, AI Agents compete in a tit-for-tat fashion to enhance their respective performance. Source: The Rise and Potential of Large Language Model Based Agents: A Survey
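The ordered style maps naturally onto a pipeline. Here is a toy sketch in which each agent refines the output of the one upstream; call_llm is a hypothetical stand-in for any chat-completion API:

```python
# Ordered cooperation as a sequential pipeline of role-playing agents.
def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM API call.
    return f"[response to: {prompt[:50]}...]"

def ordered_pipeline(task: str, roles: list[str]) -> str:
    output = task
    for role in roles:
        # Each downstream agent focuses on the upstream agent's output.
        output = call_llm(f"As a {role}, improve this: {output}")
    return output

print(ordered_pipeline("Write a sorting function", ["coder", "reviewer", "tester"]))
```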

Some implementations

AI Agents can do more than provide answers.

Auto-GPT is an open-source AI Agent that uses LLMs to act autonomously. Given an initial prompt, it generates its own follow-up prompts and answers them until the task is complete. A user tells Auto-GPT what their goal is, and the AI Agent, in turn, uses an LLM and several programs to carry out every step needed to achieve that goal. What makes Auto-GPT reasonably capable is its ability to interact with apps, software, and services both online and locally, like web browsers and word processors.
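A hedged sketch of this kind of autonomous loop follows; call_llm and run_step are hypothetical placeholders, not Auto-GPT’s real internals:

```python
# Auto-GPT-style loop: repeatedly ask the LLM for the next step toward the
# goal, execute it, and stop when the model declares the task complete.
def call_llm(prompt: str) -> str:
    return "DONE"   # placeholder; a real call would return the next step

def run_step(step: str) -> str:
    return f"result of {step}"

def autonomous_loop(goal: str, max_steps: int = 25) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):   # cap iterations to avoid runaway loops
        step = call_llm(f"Goal: {goal}\nHistory: {history}\nNext step or DONE?")
        if step.strip() == "DONE":
            break
        history.append(run_step(step))
    return history
```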

Toolformer is another good example of AI Agents considered as “LLMs + tools” (tools represent functions that the LLM can decide to execute). Toolformer shows that LLMs can teach themselves to use external tools via simple APIs and achieve the best of both worlds. The model decides which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction.

Toolformer autonomously decides to call different APIs (from top to bottom: a question-answering system, a calculator, a machine translation system, and a Wikipedia search engine) to obtain information that is useful for completing a piece of text. Source: AI Scholar
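The underlying “LLM + tools” pattern can be sketched in a few lines: the model emits a tool-call marker in its output, and a runtime executes the call and splices the result back into the text. The <<tool:argument>> syntax below is invented for illustration; Toolformer’s actual API-call format differs:

```python
import re

def calculator(expr: str) -> str:
    # Toy calculator; the restricted eval is for illustration only.
    return str(eval(expr, {"__builtins__": {}}))

TOOLS = {"calc": calculator}

def run_with_tools(model_output: str) -> str:
    # Replace every <<tool:argument>> marker with the tool's result.
    def dispatch(match: re.Match) -> str:
        tool, arg = match.group(1), match.group(2)
        return TOOLS[tool](arg)
    return re.sub(r"<<(\w+):([^>]+)>>", dispatch, model_output)

print(run_with_tools("Out of 1400 participants, 400 passed: <<calc:400/1400>>."))
```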

GPT-Researcher is an AI Agent designed for comprehensive online research on a variety of tasks. It can produce detailed, factual, and unbiased research reports, with customization options for focusing on relevant resources, outlines, and lessons. The main idea is to run “planner” and “execution” AI Agents: the planner generates questions to research, and the execution agents seek the most relevant information for each generated research question. Finally, the planner AI Agent filters and aggregates all related information and creates a research report.

Architecture of GPT-Researcher. First, create a domain-specific AI Agent based on a research query or task. Then, generate a set of research questions that together form an objective opinion on any given task. For each research question, trigger a crawler AI Agent that scrapes online resources for information relevant to the given task. For each scraped resource, summarize based on relevant information and keep track of its sources. Finally, filter and aggregate all summarized sources and generate a final research report. Source: GPT-Researcher
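A minimal sketch of that planner/executor split follows; call_llm and search_web are hypothetical placeholders, not GPT-Researcher’s real API:

```python
def call_llm(prompt: str) -> str:
    return f"[LLM output for: {prompt[:60]}...]"   # placeholder for a real call

def search_web(query: str) -> list[str]:
    return [f"https://example.com/source-for-{abs(hash(query)) % 100}"]

def research(task: str) -> str:
    # Planner: generate research questions for the task.
    questions = call_llm(f"List research questions for: {task}").splitlines()
    # Execution: gather and summarize sources per question.
    summaries = []
    for q in questions:
        sources = search_web(q)
        summaries.append(call_llm(f"Summarize for '{q}': {sources}"))
    # Planner again: filter, aggregate, and write the final report.
    return call_llm(f"Write a research report on '{task}' from: {summaries}")

print(research("impact of AI agents on scientific discovery"))
```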

In a different application, the company Cognition released Devin, claiming it to be the world’s first fully autonomous AI software engineer. With this AI Agent, software engineers can focus on more interesting problems, and engineering teams can strive for more ambitious goals. As its creators state, Devin can learn how to use unfamiliar technologies, build and deploy apps end to end, autonomously find and fix bugs in codebases, train and fine-tune its own AI models, address bugs and feature requests in open source repositories, and contribute to mature production repositories.

A few months ago, and for the first time ever, an AI Agent designed, planned, and executed a chemistry experiment. Coscientist is an AI Agent driven by GPT-4 that can autonomously design, plan, and perform complex experiments by incorporating LLMs empowered by tools such as internet and documentation search, code execution, and experimental automation. How does it work? A scientist could ask Coscientist to find a compound with given properties. The AI Agent scours the internet, documentation data, and other available sources, synthesizes the information, and selects a course of experimentation that uses robotic APIs. The experimental plan is then sent to and completed by automated instruments. In all, a human working with the system can design and run an experiment much more quickly, accurately, and efficiently than a human alone.

(a) Coscientist is composed of multiple modules that exchange messages. Boxes with a blue background represent LLM modules, the Planner module is shown in green, and the input prompt is in red. White boxes represent modules that do not use LLMs. (b) Types of experiments performed to demonstrate the capabilities when using individual modules or their combinations. (c) Image of the experimental setup with a liquid handler. UV-Vis, ultraviolet-visible. Source: Nature

Jürgen Schmidhuber and other researchers studied the concept of natural language-based societies of minds (NLSOMs) consisting of LLMs and other models communicating through a natural language interface. An NLSOM is composed of (1) several AI Agents — each acting according to their own objective (function) — and (2) an organizational structure that governs the rules determining how AI Agents may communicate and collaborate with each other. The AI Agents within the NLSOM are entities that can perceive, process, and transmit unimodal and multimodal information. The organizational structure of the society includes concepts such as the relationship structure of the AI Agents, the communication connectivity between them, and the information transmission path. Different AI Agents have different perceptual abilities, which may be entirely unrelated to their communication interface; some AI Agents may understand images and talk in audio files, while others may only understand refined programmatic descriptions of 3D objects and communicate in images.

An NLSOM consists of many AI Agents, each acting according to their own objectives and communicating with one another primarily through natural language according to some organizational structure. Source: Mindstorms in Natural Language-Based Societies of Mind

What’s next?

AI Agents have deep implications, and I believe they will continue growing and expanding way beyond our current conception.

What is the impact of AI Agents synthesizing new molecules in self-driving labs? How will science move forward with AI Agents unifying and discovering new knowledge from millions of data sources? What are the implications for humankind?

We can’t know what will happen in the future, but we can prepare for it. In that sense, OpenAI has published a set of practices for keeping agentic AI systems safe and accountable:

Evaluating Suitability for the Task

Either the system deployer or the user should thoroughly assess whether a given AI model and its associated agentic AI system are appropriate for the desired use case: can the system execute the intended task reliably across the range of expected deployment conditions? Where reliability is not necessary or expected, given the low stakes of the task, user expectations should at least be suitably established via the interface. This raises the question of how to evaluate an agentic AI system properly, and which failure modes can and cannot be foreseen through sufficient testing.

Constraining the Action Space and Requiring Approval

Some decisions may be too important for users to delegate to AI Agents if there is even a small chance of them being done wrong (such as independently initiating an irreversible large financial transaction). Requiring a user to proactively authorize these actions, thus keeping a “human-in-the-loop”, is a standard way to limit egregious failures of agentic AI systems. The key challenge is how a system deployer can ensure that the user has enough context to sufficiently understand the implications of the action they’re approving.
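A minimal sketch of such an approval gate, with an invented list of high-stakes action names:

```python
# Human-in-the-loop gate: high-stakes actions require explicit user approval.
HIGH_STAKES = {"transfer_funds", "delete_repository"}

def execute(action: str, args: dict, do_action) -> str:
    if action in HIGH_STAKES:
        answer = input(f"Agent wants to run {action}({args}). Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return "action declined by user"
    return do_action(action, args)

# Example: execute("transfer_funds", {"amount": 5000}, my_bank_api_call)
# would pause and ask the user before anything irreversible happens.
```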

Setting Agents’ Default Behaviors

Model developers could significantly reduce the likelihood of the agentic AI system causing accidental harm by proactively shaping the models’ default behavior according to certain design principles. For instance, user interactions with agentic AI systems may be designed to begin with a prompt to the user to communicate their goals and preferences to the system. This preference information will almost always be unclear or incomplete, so it is still valuable for the AI Agent to have a set of default common-sense background preferences that allow it to “fill in the gaps” without a user’s guidance, such as “users prefer if I don’t spend their money.”

Legibility of Agent Activity

The more a user is aware of the actions and internal reasoning of their AI Agents, the easier it can be for them to notice that something has gone wrong and intervene, either during operation or after the fact. Revealing an AI Agent’s “thought process” to the user enables them to spot errors (including identifying when a system is pursuing the wrong goal), allows for subsequent debugging, and instills trust when deserved.

Automatic Monitoring

In practice, human users may not always have the time to go through the AI Agent activity logs exposed by the system deployer at the speed or scale they desire. To address this, users or system deployers can set up a second “monitoring” AI system that automatically reviews the primary agentic AI system’s reasoning and actions to check that they’re in line with expectations given the user’s goals.
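One way to picture this is a second model wrapped around the first as a gate; check_with_llm below is a hypothetical stand-in for a real monitoring model:

```python
# A "monitor" agent reviews each action of the primary agent before it runs.
def check_with_llm(goal: str, action: str) -> bool:
    # Placeholder: a real monitor would ask a separate model whether the
    # action is consistent with the user's goal. This toy version just
    # flags anything that touches credentials.
    return "credential" not in action.lower()

def monitored_execute(goal: str, actions: list[str]) -> None:
    for action in actions:
        if not check_with_llm(goal, action):
            raise RuntimeError(f"Monitor flagged action: {action!r}")
        print(f"ok: {action}")

monitored_execute("summarize my inbox", ["read emails", "draft summary"])
```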

Attributability

In cases where preventing intentional or unintentional harms at the level of the user or system deployer is infeasible (such as a criminal operating an AI Agent to scam a third party), it may still be possible to deter harm by making it likely that harmful actions will be traced back to the user. With reliable attribution, reliable accountability becomes possible.

Interruptibility and Maintaining Control

Interruptibility (the ability to “turn an AI Agent off”), while crude, is a critical backstop for preventing an AI system from causing accidental or intentional harm. System deployers could be required to ensure that a user can always trigger a graceful shutdown procedure for their AI Agent at any time: both for halting a specific category of actions (e.g., by revoking access to financial credentials) and for terminating the AI Agent’s operation more generally.
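A toy sketch of both shutdown paths, with invented credential names; a shared flag halts the whole agent between steps, while revoking a credential narrows what it can still do:

```python
import threading

stop_event = threading.Event()                     # global "off switch"
granted_credentials = {"email", "calendar", "bank"}

def revoke_credentials(name: str) -> None:
    granted_credentials.discard(name)              # halt one action category

def agent_loop(steps: list[str]) -> None:
    for step in steps:
        if stop_event.is_set():                    # graceful shutdown checkpoint
            print("agent stopped by user")
            return
        print(f"running: {step}")

# Calling stop_event.set() from any thread terminates the agent at the next
# checkpoint; revoke_credentials("bank") blocks only financial actions.
agent_loop(["plan trip", "book flight"])
```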

Interested in these topics? Follow me on LinkedIn or Twitter
