Feed Forward

The essential structure of an artificial neural network that directs data from the input layer toward the output layer without looping back.

Feed Forward is a crucial concept in artificial neural networks, primarily used in machine learning (ML) and cognitive computing. It describes network architectures where data travels in one direction from the input nodes, through one or more hidden layers, to the output nodes, with no cycles or loops. Feed Forward networks can solve complex problems by learning representations directly from input data, reducing the need for manual feature extraction. They are typically trained with supervised learning using the Backpropagation algorithm, and are commonly applied to tasks such as image recognition and speech recognition.
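The one-directional data flow can be sketched in a few lines of NumPy. This is a minimal, illustrative implementation (the `forward` helper, layer sizes, and ReLU activation are assumptions for the example, not part of any specific library): each layer's output feeds the next layer, and nothing feeds back.

```python
import numpy as np

def relu(x):
    # Simple nonlinearity applied at each hidden layer
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """One forward pass through a feed-forward network:
    input -> hidden layer(s) -> output, with no cycles."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(a @ W + b)               # hidden layers
    return a @ weights[-1] + biases[-1]   # linear output layer

# Toy network: 3 inputs -> 4 hidden units -> 2 outputs
rng = np.random.default_rng(0)
weights = [rng.normal(size=(3, 4)), rng.normal(size=(4, 2))]
biases = [np.zeros(4), np.zeros(2)]

x = np.array([1.0, -0.5, 2.0])
y = forward(x, weights, biases)
print(y.shape)  # -> (2,)
```

In training, Backpropagation would compute gradients of a loss with respect to `weights` and `biases` by traversing these same layers in reverse; the forward pass itself never loops back.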

The concept of Feed Forward neural networks dates back to the earliest models of artificial neural networks in the 1950s and 1960s. The term gained prominence with the rise of deep learning in the late 2000s and early 2010s, as deep Feed Forward networks, also known as multilayer perceptrons, became a crucial tool for handling complex tasks.

While many scientists have contributed to the development of Feed Forward networks, notable figures include Frank Rosenblatt, who created the Perceptron, an early type of Feed Forward network, in the late 1950s, and Geoffrey Hinton, who spearheaded many advancements in deep learning built on Feed Forward structures.
