Active Inference
A theoretical framework in neuroscience and artificial intelligence that describes how agents perceive and act to minimize prediction errors about the state of the world.
Active inference rests on the principle that both perception and action serve to minimize variational free energy, an information-theoretic quantity that upper-bounds surprise and, in practice, tracks the mismatch between predicted and actual sensory inputs. The framework extends predictive coding, in which the brain is cast as a prediction machine that continually updates its beliefs about the environment to reduce discrepancies between predictions and sensations. Crucially, active inference adds action to the picture: an agent can reduce prediction error either by revising its beliefs (perception) or by changing the world so that sensations match its predictions (action). In AI, active inference models agents that act not only to achieve specific goals but also to refine their models of the world, effectively learning by reducing uncertainty. The result is a unified account of perception, action, and learning, useful both for understanding cognitive processes and for building adaptive artificial agents.
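The dual role of belief updating and action described above can be illustrated with a deliberately minimal sketch. This is not Friston's full variational formulation; it is a toy in which a quadratic prediction-error term stands in for free energy, belief updates follow its gradient (perception), and a simple proportional action drives the world toward the agent's prior preference (action). All names (`free_energy`, `mu`, `target`) are illustrative choices, not standard API.

```python
import numpy as np

rng = np.random.default_rng(0)

def free_energy(mu, obs, prior_mu, sigma_obs=1.0, sigma_prior=1.0):
    """Quadratic stand-in for variational free energy:
    an accuracy term (sensory prediction error) plus a
    complexity term (divergence of the belief from the prior)."""
    accuracy = (obs - mu) ** 2 / (2 * sigma_obs ** 2)
    complexity = (mu - prior_mu) ** 2 / (2 * sigma_prior ** 2)
    return accuracy + complexity

target = 0.0   # prior preference: the agent "expects" the state to be 0
state = 5.0    # true hidden state of the world
mu = 0.0       # agent's belief about the hidden state
lr = 0.1       # gradient step size

for t in range(200):
    obs = state + rng.normal(0.0, 0.1)   # noisy sensory input
    # Perception: gradient descent on free energy with respect to the belief
    grad_mu = -(obs - mu) + (mu - target)
    mu -= lr * grad_mu
    # Action: change the world so future observations match the prior
    # preference (a proportional controller driven by the same error signal)
    state -= lr * (mu - target)

# After the loop, both the world state and the belief sit near the
# preferred value: prediction error has been minimized by acting, not
# just by updating beliefs.
print(abs(state - target) < 0.5, abs(mu - target) < 0.5)
```

The key design point the sketch makes concrete is that the same error signal drives two updates: one on the agent's internal belief and one on the external state, which is what distinguishes active inference from purely perceptual predictive coding.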
The concept of active inference emerged around the early 2000s, with Karl Friston being a pivotal figure in its development. It gained traction in the mid-2010s as applications in neuroscience and AI began to reveal its potential for explaining brain function and enhancing machine learning algorithms.
The foremost contributor to active inference is Karl Friston, a neuroscientist whose work on the free energy principle laid the foundation for this framework. His research has been instrumental in shaping our understanding of how the brain minimizes uncertainty through prediction and action. Other notable contributors include researchers in computational neuroscience and AI who have applied and extended Friston's theories to various domains.