Michael I. Jordan

(22 articles)
Parameterized
1936

Model or function in AI that uses adjustable parameters to make predictions or decisions.

Generality: 796
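
The idea can be sketched with the simplest possible parameterized model, a line with a hand-picked slope and intercept (illustrative values; in practice the parameters would be fitted to data):

```python
# A parameterized model in its simplest form: the prediction depends on the
# input *and* on adjustable parameters (here a slope w and intercept b,
# chosen by hand purely for illustration).
def linear_model(x, w, b):
    return w * x + b

# Changing the parameters changes the prediction for the same input.
y1 = linear_model(2.0, w=3.0, b=1.0)   # parameters (3, 1)
y2 = linear_model(2.0, w=0.5, b=0.0)   # different parameters, different output
```

Learning, in this framing, is the search for parameter values that make such predictions accurate.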

Loss Optimization
1936

Process of adjusting a model's parameters to minimize the difference between the predicted outputs and the actual outputs, measured by a loss function.

Generality: 886
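
A minimal sketch of loss optimization, assuming gradient descent on a one-parameter linear model with a mean-squared-error loss (data, learning rate, and step count are illustrative):

```python
# Gradient descent on squared error for y_hat = w * x. The targets follow
# the true rule y = 2x, so the loss is minimized near w = 2.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w = 0.0      # initial parameter
lr = 0.01    # learning rate (illustrative)
for _ in range(500):
    # gradient of mean squared error w.r.t. w: mean of 2*(w*x - y)*x
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

loss = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
```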

Sampling Algorithm
1936

Method used to select a subset of data from a larger set, ensuring that the sample is representative of the original population for the purpose of analysis or computational efficiency.

Generality: 802
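
One classic sampling algorithm is reservoir sampling (Algorithm R), which draws a uniform random sample of k items from a stream whose length is unknown in advance. A minimal sketch, with a fixed seed for reproducibility:

```python
import random

# Reservoir sampling: keep the first k items, then replace an existing
# slot with item i with probability k/(i+1).
def reservoir_sample(stream, k, rng):
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)
        else:
            j = rng.randint(0, i)   # inclusive bounds
            if j < k:
                sample[j] = item
    return sample

rng = random.Random(0)
sample = reservoir_sample(range(10_000), k=5, rng=rng)
```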

Supervised Learning
1959

ML approach where models are trained on labeled data to predict outcomes or classify data into categories.

Generality: 882
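
A toy illustration of the labeled-data idea, using a 1-nearest-neighbour classifier: "training" simply stores the labeled examples, and prediction returns the label of the closest stored point (data and labels are made up for the sketch):

```python
# Labeled training examples: (features, label) pairs.
train = [((0.0, 0.0), "red"), ((0.1, 0.2), "red"),
         ((1.0, 1.0), "blue"), ((0.9, 1.1), "blue")]

def predict(point):
    # Return the label of the nearest stored example (squared distance).
    def sq_dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    nearest = min(train, key=lambda ex: sq_dist(ex[0], point))
    return nearest[1]

label = predict((0.05, 0.1))
```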

Linear Separability
1960

The ability of a dataset to be perfectly separated into two classes using a straight line in two dimensions or a hyperplane in higher dimensions.

Generality: 500
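
Linear separability is exactly the condition the perceptron convergence theorem requires: on a separable dataset such as AND (unlike XOR), perceptron updates reach a separating line in finitely many steps. A minimal sketch with an illustrative epoch budget:

```python
# The AND function is linearly separable; labels are +1 / -1.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]

w = [0.0, 0.0]
b = 0.0
for _ in range(25):   # enough epochs for this tiny dataset
    for (x1, x2), y in data:
        if y * (w[0] * x1 + w[1] * x2 + b) <= 0:   # misclassified
            w[0] += y * x1
            w[1] += y * x2
            b += y

errors = sum(1 for (x1, x2), y in data
             if y * (w[0] * x1 + w[1] * x2 + b) <= 0)
```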

Inference
1965

Process by which a trained neural network applies learned patterns to new, unseen data to make predictions or decisions.

Generality: 861

Curse of Dimensionality
1970

Phenomenon where the complexity and computational cost of analyzing data increase exponentially with the number of dimensions or features.

Generality: 827
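
One facet of the curse, distance concentration, can be demonstrated directly: as dimensionality grows, the ratio between the nearest and farthest distances among random points approaches 1, which undermines distance-based methods. Sample sizes and seed below are illustrative:

```python
import random

def spread(dim, n_points, rng):
    # Ratio of nearest to farthest distance from a random query point;
    # close to 1 means distances have concentrated.
    pts = [[rng.random() for _ in range(dim)] for _ in range(n_points)]
    q = [rng.random() for _ in range(dim)]
    dists = [sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 for p in pts]
    return min(dists) / max(dists)

rng = random.Random(42)
low = spread(dim=2, n_points=200, rng=rng)      # low-dimensional: small ratio
high = spread(dim=1000, n_points=200, rng=rng)  # high-dimensional: near 1
```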

Learnability
1980

Capacity of an algorithm or model to effectively learn from data, often measured by how well it can generalize from training data to unseen data.

Generality: 847

GMM (Gaussian Mixture Models)
1981

Probabilistic models that assume all data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters.

Generality: 675
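
The mixture density itself is easy to write down. This sketch evaluates a two-component one-dimensional GMM; weights, means, and standard deviations are illustrative (in practice they are the unknown parameters, typically estimated with the EM algorithm):

```python
import math

# (weight, mean, std) for each Gaussian component; weights sum to 1.
components = [
    (0.4, -2.0, 1.0),
    (0.6,  3.0, 0.5),
]

def gmm_pdf(x):
    # Mixture density: weighted sum of Gaussian densities.
    total = 0.0
    for w, mu, sigma in components:
        norm = 1.0 / (sigma * math.sqrt(2 * math.pi))
        total += w * norm * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
    return total

# Riemann-sum check that the mixture density integrates to (about) 1.
step = 0.01
area = sum(gmm_pdf(-10 + i * step) * step for i in range(int(20 / step)))
```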

MLP (Multilayer Perceptron)
1986

Type of artificial neural network composed of multiple layers of neurons, with each layer fully connected to the next, commonly used for classification and regression tasks.

Generality: 775
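
A hand-built sketch of an MLP forward pass: a 2-2-1 network (tanh hidden layer, sigmoid output) with weights chosen, purely for illustration, so that it computes XOR, the classic function a single-layer network cannot represent:

```python
import math

# Hand-picked weights for a 2-input, 2-hidden-unit, 1-output MLP.
W1 = [[ 4.0,  4.0],    # hidden unit 1 input weights
      [-4.0, -4.0]]    # hidden unit 2 input weights
b1 = [-2.0, 6.0]
W2 = [4.0, 4.0]        # output weights
b2 = -6.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mlp(x1, x2):
    # Hidden layer (tanh), then output layer (sigmoid); fully connected.
    h = [math.tanh(W1[i][0] * x1 + W1[i][1] * x2 + b1[i]) for i in range(2)]
    return sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)

outputs = {(a, b): mlp(a, b) for a in (0, 1) for b in (0, 1)}
```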

Backpropagation
1986

Algorithm used for training artificial neural networks, crucial for optimizing the weights to minimize error between predicted and actual outcomes.

Generality: 890
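
The chain-rule bookkeeping behind backpropagation can be checked numerically. This sketch computes analytic gradients for a tiny 1-1-1 network with squared-error loss and verifies them against finite differences (all values illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w1, w2, x, t):
    # Network: y = sigmoid(w2 * tanh(w1 * x)); squared-error loss vs target t.
    h = math.tanh(w1 * x)
    y = sigmoid(w2 * h)
    return 0.5 * (y - t) ** 2

def grads(w1, w2, x, t):
    # Forward pass.
    h = math.tanh(w1 * x)
    y = sigmoid(w2 * h)
    # Backward pass (chain rule), layer by layer.
    dy = (y - t) * y * (1 - y)     # dL/d(output pre-activation)
    dw2 = dy * h
    dh = dy * w2
    dw1 = dh * (1 - h ** 2) * x    # tanh' = 1 - tanh^2
    return dw1, dw2

w1, w2, x, t = 0.7, -1.3, 0.9, 1.0
dw1, dw2 = grads(w1, w2, x, t)

# Central finite differences as an independent check.
eps = 1e-6
num_dw1 = (loss(w1 + eps, w2, x, t) - loss(w1 - eps, w2, x, t)) / (2 * eps)
num_dw2 = (loss(w1, w2 + eps, x, t) - loss(w1, w2 - eps, x, t)) / (2 * eps)
```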

Subsymbolic AI
1986

AI approaches that do not use explicit symbolic representation of knowledge but instead rely on distributed, often neural network-based methods to process and learn from data.

Generality: 900

Node
1986

A fundamental unit within a neural network or graph that processes inputs to produce outputs, often reflecting the biological concept of neurons.

Generality: 500

Similarity Computation
1990

A mathematical process to quantify the likeness between data objects, often used in AI to enhance pattern recognition and data clustering.

Generality: 675
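
One standard similarity computation is cosine similarity, which quantifies likeness between two vectors by the angle between them, independent of their magnitudes:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

same = cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # parallel -> 1
orth = cosine_similarity([1.0, 0.0], [0.0, 1.0])            # orthogonal -> 0
```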

MoE (Mixture of Experts)
1991

ML architecture that utilizes multiple specialist models (experts) to handle different parts of the input space, coordinated by a gating mechanism that decides which expert to use for each input.

Generality: 705
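
A minimal sketch of the expert-plus-gate structure: two toy "experts", each accurate on part of the input space, combined by a softmax gate. The experts, gating scores, and the target function (absolute value) are all illustrative choices:

```python
import math

def expert_neg(x):
    return -x          # accurate for negative inputs

def expert_pos(x):
    return x           # accurate for positive inputs

def gate(x):
    # Softmax over hand-picked scores that favour the matching expert.
    scores = [-5.0 * x, 5.0 * x]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe(x):
    # Gate-weighted combination of the expert outputs.
    g = gate(x)
    return g[0] * expert_neg(x) + g[1] * expert_pos(x)

# The combined model approximates |x|, which neither expert does alone.
approx = [moe(x) for x in (-2.0, -1.0, 1.0, 2.0)]
```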

Continuous Learning
1995

Systems and models that learn incrementally from a stream of data, updating their knowledge without forgetting previous information.

Generality: 870
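
The incremental flavour of continuous learning can be sketched with the simplest online estimator: a running mean updated one observation at a time, without storing past data, which nonetheless matches the batch result:

```python
# Synthetic data stream; in practice observations would arrive over time.
stream = [2.0, 4.0, 6.0, 8.0, 10.0]

mean = 0.0
count = 0
for x in stream:
    count += 1
    mean += (x - mean) / count   # incremental (Welford-style) update

# Batch computation for comparison (requires storing the whole stream).
batch_mean = sum(stream) / len(stream)
```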

Early Stopping
1996

A regularization technique used to prevent overfitting in ML models by halting training when performance on a validation set begins to degrade.

Generality: 675
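
A sketch of the stopping rule itself: training halts once the validation loss has failed to improve for `patience` consecutive epochs. The loss sequence below is synthetic; in practice it would come from evaluating the model each epoch:

```python
# Synthetic validation losses: improving, then degrading (overfitting).
val_losses = [0.90, 0.70, 0.55, 0.48, 0.47, 0.49, 0.52, 0.56, 0.60, 0.65]

patience = 2
best = float("inf")
bad_epochs = 0
stopped_at = len(val_losses)   # default: never stopped early
for epoch, loss in enumerate(val_losses):
    if loss < best:
        best = loss
        bad_epochs = 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            stopped_at = epoch   # halt before the degradation continues
            break
```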

Classifier
2001

ML model that categorizes data into predefined classes.

Generality: 861

LDA (Latent Dirichlet Allocation)
2003

Generative statistical model often used in natural language processing to discover hidden (or latent) topics within a collection of documents.

Generality: 794

Recognition Model
2014

Element of AI that identifies patterns and features in data through learning processes.

Generality: 790

Convergence
2014

The point at which an algorithm or learning process stabilizes, reaching a state where further iterations or data input do not significantly alter its outcome.

Generality: 845
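
Convergence can be sketched with a fixed-point iteration: x ← cos(x) stabilizes at the Dottie number (about 0.739), after which further iterations change the value only negligibly. The tolerance and iteration cap are illustrative:

```python
import math

x = 1.0
for iterations in range(1, 1001):
    new_x = math.cos(x)
    if abs(new_x - x) < 1e-10:   # converged: updates no longer matter
        x = new_x
        break
    x = new_x
```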

Model-Based Classifier
2015

ML algorithm that uses a predefined statistical model to make predictions from input data.

Generality: 835