Agent
System capable of perceiving its environment through sensors and acting upon that environment to achieve specific goals.
In artificial intelligence, the term "agent" refers to any system that can observe its environment, interpret those observations, and take actions to maximize some measure of success or expected utility. Agents are foundational across AI domains: in robotics they physically interact with their surroundings, and in software they make decisions or predictions based on data inputs. An agent's intelligence is judged by how well it decides in complex environments, and its decision-making is commonly modeled with techniques from machine learning, decision theory, or artificial neural networks. Agents can be autonomous, semi-autonomous, or controlled by human input, and they may operate in deterministic or stochastic environments.
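Conceptually, an agent runs a perceive-decide-act loop: it reads a percept from its sensors, chooses an action, and applies that action to the environment through its actuators. Below is a minimal sketch of that loop in Python, using the classic two-square vacuum world as a toy environment; the class and function names are illustrative, not drawn from any particular library.

```python
class VacuumEnvironment:
    """Two-square vacuum world: squares 'A' and 'B' that may be dirty."""

    def __init__(self):
        self.location = "A"
        self.dirty = {"A": True, "B": True}

    def percept(self):
        # The agent senses only its current square and whether it is dirty.
        return self.location, self.dirty[self.location]

    def execute(self, action):
        # Apply the agent's chosen action to the environment.
        if action == "Suck":
            self.dirty[self.location] = False
        elif action == "Right":
            self.location = "B"
        elif action == "Left":
            self.location = "A"


def reflex_vacuum_agent(percept):
    """A simple reflex agent: pick an action from the current percept alone."""
    location, is_dirty = percept
    if is_dirty:
        return "Suck"
    return "Right" if location == "A" else "Left"


def run(agent, env, steps=6):
    """The perceive-decide-act loop: observe, choose an action, act."""
    for _ in range(steps):
        percept = env.percept()
        action = agent(percept)
        print(f"percept={percept} -> action={action}")
        env.execute(action)


run(reflex_vacuum_agent, VacuumEnvironment())
```

This sketch is a reflex agent, deciding from the current percept alone; more capable agents maintain internal state or a model of the environment when choosing actions.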
The term "agent" began gaining traction in computing and AI in the 1980s. It became increasingly popular in the 1990s with the rise of the internet and the development of multi-agent systems, in which multiple agents interact or collaborate to perform tasks.
Key figures in the development of agent theory include Stuart Russell and Peter Norvig, whose widely used textbook, "Artificial Intelligence: A Modern Approach," helped formalize and standardize the definitions and categorizations of agents within the AI community. Researchers such as Michael Wooldridge and Nicholas Jennings have likewise been instrumental in advancing multi-agent systems and their applications.