Supervised Learning

A machine learning approach in which models are trained on labeled data to predict outcomes or classify inputs into categories.

Supervised learning is foundational to many applications of artificial intelligence. The goal is to learn a mapping from inputs to outputs based on example input-output pairs: a model is trained on a dataset containing both input features and the corresponding target outputs, and it learns to make predictions by generalizing from the training data to unseen cases. The paradigm covers classification, where the output is a discrete label (e.g., spam or not spam), and regression, where the output is a continuous value (e.g., a house price). Supervised learning algorithms adjust their parameters to minimize a loss function measuring the difference between predicted and actual outputs on the training data, typically using methods such as gradient descent.
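
As a concrete illustration of the loss-minimization idea described above, the sketch below fits a one-variable linear regression with batch gradient descent. The synthetic data, learning rate, and iteration count are illustrative assumptions chosen for this example, not drawn from the text.

```python
# Minimal sketch of supervised learning: fit a linear model y_hat = w*x + b
# to labeled data by gradient descent on the mean squared error.
import numpy as np

rng = np.random.default_rng(0)

# Labeled training data: inputs X and continuous targets y.
# Targets follow y = 3x + 2 plus noise, so the true parameters are known.
X = rng.uniform(0, 10, size=100)
y = 3.0 * X + 2.0 + rng.normal(0.0, 1.0, size=100)

# Parameters to learn, and an assumed learning rate and epoch count.
w, b = 0.0, 0.0
lr = 0.01

for epoch in range(5000):
    y_hat = w * X + b
    error = y_hat - y
    # Gradients of the mean squared error with respect to w and b.
    grad_w = 2.0 * np.mean(error * X)
    grad_b = 2.0 * np.mean(error)
    # Step downhill to reduce the training loss.
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # should end up near w=3, b=2
```

Because the loss here is a simple quadratic in w and b, gradient descent converges to values close to the true parameters. Real applications typically replace this hand-rolled loop with a library optimizer, but the underlying principle of fitting parameters to labeled examples is the same.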

Historical overview: Supervised learning has been studied since the inception of neural networks in the 1950s, but it gained significant momentum with the advent of backpropagation in the 1980s, which made training multi-layer neural networks feasible and effective.

Key contributors: Many researchers have shaped supervised learning, but Frank Rosenblatt, who invented the perceptron in 1957, is often credited with laying the groundwork for the field. The backpropagation algorithm, fundamental to training multi-layer neural networks, was popularized by David Rumelhart, Geoffrey Hinton, and Ronald Williams in the 1980s, a pivotal moment in the advancement of supervised learning techniques.