Radial Basis Function Network

A type of artificial neural network that uses radial basis functions as activation functions.

Radial Basis Function Networks (RBFNs) are a variant of artificial neural networks that use radial basis functions as their activation functions, which makes them particularly effective for interpolation in multidimensional space. The architecture consists of an input layer, a single hidden layer whose neurons each apply a radial basis function, and a linear output layer. RBFNs can approximate any continuous multivariable function and are popular in time-series prediction, control systems, and function approximation, owing to their interpolation capabilities and fast training: once the hidden-layer centers are chosen, the output weights can be fitted by linear least squares. The network models its output as a linear combination of radial basis functions, each associated with its own center, and typically employs Gaussian functions, whose response decays with distance from the center, to transform the input space.
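Concretely, with Gaussian basis functions the network computes f(x) = Σᵢ wᵢ · exp(−‖x − cᵢ‖² / (2σ²)), where the cᵢ are the hidden-unit centers, σ the kernel width, and wᵢ the output weights. The sketch below illustrates this in Python/NumPy; the class name RBFNetwork, the random-subset choice of centers, the shared-width heuristic, and the least-squares fit are all illustrative assumptions rather than a prescribed training procedure.

```python
import numpy as np

def gaussian_rbf(X, centers, sigma):
    """Gaussian basis: exp(-||x - c||^2 / (2 * sigma^2)) for every (x, c) pair."""
    # Squared Euclidean distance between each input row and each center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

class RBFNetwork:
    """Minimal RBF network: fixed Gaussian hidden layer + linear output layer."""

    def __init__(self, n_centers=10, sigma=None, seed=0):
        self.n_centers = n_centers
        self.sigma = sigma  # shared kernel width; estimated from data if None
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # Heuristic (assumption): use a random subset of training points as centers.
        idx = self.rng.choice(len(X), size=self.n_centers, replace=False)
        self.centers = X[idx]
        if self.sigma is None:
            # Heuristic width: mean pairwise distance between the chosen centers.
            d = np.sqrt(((self.centers[:, None] - self.centers[None, :]) ** 2).sum(-1))
            self.sigma = d[d > 0].mean()
        # Hidden-layer activations, plus a bias column.
        Phi = gaussian_rbf(X, self.centers, self.sigma)
        Phi = np.hstack([Phi, np.ones((len(X), 1))])
        # Linear output layer: solve for the weights by least squares.
        self.weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return self

    def predict(self, X):
        Phi = gaussian_rbf(X, self.centers, self.sigma)
        Phi = np.hstack([Phi, np.ones((len(X), 1))])
        return Phi @ self.weights

# Usage: approximate a 1-D function from noisy samples.
X = np.linspace(0, 2 * np.pi, 200)[:, None]
y = np.sin(X[:, 0]) + 0.05 * np.random.default_rng(1).normal(size=200)
model = RBFNetwork(n_centers=15).fit(X, y)
print(np.abs(model.predict(X) - np.sin(X[:, 0])).mean())  # small mean error
```

Because the hidden layer is fixed before the output weights are fitted, training reduces to a single linear solve, which is the source of the fast learning speeds noted above.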

The Radial Basis Function Network was introduced in the late 1980s and gained popularity throughout the 1990s, as growing computational power made practical implementations feasible and interest in neural-network approaches to function approximation and pattern recognition increased.

Key contributors to the development of Radial Basis Function Networks include John Moody and Christian Darken, who were instrumental in formalizing the theoretical underpinnings of RBFNs and demonstrating their efficacy in real-world applications. Their work laid the foundation for further exploration and refinement of these networks in various AI applications.
