Subsymbolic AI
AI approaches that do not use explicit symbolic representation of knowledge but instead rely on distributed, often neural network-based methods to process and learn from data.
Subsymbolic AI encompasses techniques such as neural networks, genetic algorithms, and connectionist models that process information at a granular, subsymbolic level. Unlike symbolic AI, which manipulates high-level, human-readable symbols and rules, subsymbolic methods operate through the interactions of many small, interconnected units. These units, typically the artificial neurons of a neural network, collectively encode information in a distributed fashion, enabling adaptive learning, pattern recognition, and generalization from large datasets. Subsymbolic AI excels at tasks that demand pattern recognition, sensory data processing, or prediction, such as image and speech recognition, autonomous driving, and natural language processing.
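To make "distributed encoding" concrete, here is a minimal Python sketch (assuming NumPy; the layer sizes, weights, and names are illustrative, not drawn from any particular system). The prediction emerges from the joint interaction of every weight in W1 and W2; no single unit or parameter stores a human-readable rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network: its "knowledge" is spread across the
# weight matrices W1 and W2 rather than stored as explicit rules.
W1 = rng.normal(size=(4, 3))   # 3 inputs  -> 4 hidden units
W2 = rng.normal(size=(1, 4))   # 4 hidden  -> 1 output

def forward(x):
    h = np.tanh(W1 @ x)                 # distributed hidden representation
    return 1 / (1 + np.exp(-(W2 @ h)))  # sigmoid squashes to (0, 1)

x = np.array([0.5, -1.0, 2.0])
print(forward(x))  # the output reflects all 16 weights at once
```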
The concept of subsymbolic AI gained prominence in the 1980s with the resurgence of interest in neural networks, particularly after Rumelhart, Hinton, and Williams popularized the backpropagation algorithm in their 1986 paper, which made the training of multi-layer perceptrons practical.
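As a hedged sketch of how backpropagation trains a multi-layer perceptron, the NumPy example below learns XOR, a classic task a single-layer perceptron cannot solve (the learning rate, hidden-layer width, and epoch count are arbitrary illustrative choices, not taken from the 1986 paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# XOR is not linearly separable: a hidden layer trained with
# backpropagation is needed to solve it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))  # input -> hidden
W2 = rng.normal(size=(4, 1))  # hidden -> output
lr = 0.5

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for epoch in range(5000):
    # Forward pass
    h = sigmoid(X @ W1)        # hidden activations (4 examples x 4 units)
    out = sigmoid(h @ W2)      # network predictions

    # Backward pass: propagate the error gradient layer by layer
    err = out - y                        # gradient of squared error w.r.t. out
    d_out = err * out * (1 - out)        # chain rule through output sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)   # chain rule through hidden sigmoid

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

# Typically close to [0, 1, 1, 0] after training
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```

The key step is the backward pass: the output error is multiplied by each layer's local sigmoid derivative and passed back through the transposed weights, yielding exact gradients for every weight in the network.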
Significant figures in the development of subsymbolic AI include Geoffrey Hinton, known for his work on neural networks and deep learning; David Rumelhart, who co-authored the 1986 backpropagation paper; and John Hopfield, whose work on Hopfield networks contributed to the field of neural computing. Their work laid the foundation for modern subsymbolic AI techniques and their applications.