Michael I. Jordan
(22 articles)
Parameterized
Model or function in AI that utilizes parameters to make predictions or decisions.
Generality: 796

Loss Optimization
Process of adjusting a model's parameters to minimize the difference between the predicted outputs and the actual outputs, measured by a loss function.
Generality: 886
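A minimal sketch of loss optimization: gradient descent on mean squared error for a one-parameter linear model y = w·x. The data, learning rate, and step count are illustrative choices, not from the text.

```python
# Fit y = w * x by gradient descent on mean squared error (illustrative).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # true relationship: y = 2x

w, lr = 0.0, 0.01
for _ in range(500):
    # dL/dw for L = mean((w*x - y)^2) is mean(2 * (w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step against the gradient to reduce the loss

print(round(w, 3))  # converges toward 2.0
```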

Sampling Algorithm
Method used to select a subset of data from a larger set, ensuring that the sample is representative of the original population for the purpose of analysis or computational efficiency.
Generality: 802
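One concrete sampling algorithm matching this definition is reservoir sampling, which draws a uniform sample from a stream of unknown length in one pass. The helper below is a hypothetical sketch, not code from the text.

```python
import random

def reservoir_sample(stream, k, rng=random.Random(0)):
    """Uniformly sample k items from a stream in one pass
    (reservoir sampling; illustrative helper)."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)          # fill the reservoir first
        else:
            j = rng.randint(0, i)           # item i survives with prob k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

sample = reservoir_sample(range(10_000), 5)
print(sample)
```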

Supervised Learning
ML approach where models are trained on labeled data to predict outcomes or classify data into categories.
Generality: 882

Linear Separability
The ability of a dataset to be perfectly separated into two classes using a straight line in two dimensions or a hyperplane in higher dimensions.
Generality: 500
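The classic illustration: AND is linearly separable, XOR is not. The brute-force grid search below over candidate lines w1·x + w2·y + b is a toy sketch, not a practical separability test.

```python
from itertools import product

def separable(points):
    """Search a coarse grid of lines w1*x + w2*y + b for one that
    puts every positive point on one side (illustrative sketch)."""
    grid = [v / 2 for v in range(-4, 5)]  # weights/bias in -2.0 .. 2.0
    for w1, w2, b in product(grid, repeat=3):
        if all((w1 * x + w2 * y + b > 0) == label for (x, y), label in points):
            return True
    return False

AND = [((0, 0), False), ((0, 1), False), ((1, 0), False), ((1, 1), True)]
XOR = [((0, 0), False), ((0, 1), True), ((1, 0), True), ((1, 1), False)]
print(separable(AND), separable(XOR))  # AND is separable; XOR is not
```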

Inference
Process by which a trained neural network applies learned patterns to new, unseen data to make predictions or decisions.
Generality: 861

Curse of Dimensionality
Phenomenon where the complexity and computational cost of analyzing data increase exponentially with the number of dimensions or features.
Generality: 827
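One symptom of the curse of dimensionality is distance concentration: as dimensions grow, the farthest and nearest random points end up at nearly the same distance, undermining nearest-neighbor reasoning. A small stdlib demonstration, with made-up sample sizes:

```python
import math, random

rng = random.Random(0)

def distance_spread(dim, n_points=100):
    """Ratio of farthest to nearest distance from the origin for random
    points in [0, 1]^dim; shrinks toward 1 as dim grows (illustrative)."""
    dists = []
    for _ in range(n_points):
        p = [rng.random() for _ in range(dim)]
        dists.append(math.sqrt(sum(x * x for x in p)))
    return max(dists) / min(dists)

for dim in (2, 10, 100, 1000):
    print(dim, round(distance_spread(dim), 2))  # ratio shrinks as dim grows
```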

Learnability
Capacity of an algorithm or model to effectively learn from data, often measured by how well it can generalize from training data to unseen data.
Generality: 847

GMM
Gaussian Mixture Models
Probabilistic models that assume all data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters.
Generality: 675
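The generative assumption above can be sketched directly: pick a component, then draw from its Gaussian. The weights, means, and sigmas below are made-up example parameters.

```python
import random

rng = random.Random(0)

# Two-component 1-D Gaussian mixture (illustrative parameters).
weights = [0.3, 0.7]
means = [-2.0, 3.0]
sigmas = [0.5, 1.0]

def sample_gmm():
    k = rng.choices([0, 1], weights=weights)[0]  # choose a component
    return rng.gauss(means[k], sigmas[k])        # then sample from it

data = [sample_gmm() for _ in range(1000)]
print(round(sum(data) / len(data), 2))  # near the mixture mean 0.3*(-2) + 0.7*3 = 1.5
```

Fitting the unknown parameters from data is typically done with the EM algorithm.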

MLP
Multilayer Perceptron
Type of artificial neural network composed of multiple layers of neurons, with each layer fully connected to the next, commonly used for tasks involving classification and regression.
Generality: 775
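A tiny hand-wired example of why the extra layer matters: a two-layer perceptron can compute XOR, which no single-layer perceptron can. Weights below are set by hand for illustration.

```python
def step(z):
    return 1 if z > 0 else 0

def mlp_xor(x1, x2):
    """Two-layer perceptron with hand-set weights computing XOR
    (illustrative; real MLPs learn their weights)."""
    h1 = step(x1 + x2 - 0.5)    # fires if at least one input is on
    h2 = step(x1 + x2 - 1.5)    # fires only if both inputs are on
    return step(h1 - h2 - 0.5)  # "at least one, but not both"

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, mlp_xor(a, b))
```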

Backpropagation
Algorithm used for training artificial neural networks, crucial for optimizing the weights to minimize error between predicted and actual outcomes.
Generality: 890
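At its core, backpropagation is the chain rule applied layer by layer. A minimal sketch for a single sigmoid neuron with squared-error loss, with the analytic gradient checked numerically (all values are made-up):

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

x, y, w = 1.5, 1.0, 0.3
a = sigmoid(w * x)        # forward pass
loss = (a - y) ** 2

# Chain rule: dL/dw = dL/da * da/dz * dz/dw
grad = 2 * (a - y) * a * (1 - a) * x

# Numeric finite-difference check of the analytic gradient
eps = 1e-6
num = ((sigmoid((w + eps) * x) - y) ** 2
       - (sigmoid((w - eps) * x) - y) ** 2) / (2 * eps)
print(round(grad, 6), round(num, 6))  # the two gradients agree
```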

Subsymbolic AI
AI approaches that do not use explicit symbolic representation of knowledge but instead rely on distributed, often neural network-based methods to process and learn from data.
Generality: 900

Node
A fundamental unit within a neural network or graph that processes inputs to produce outputs, often reflecting the biological concept of neurons.
Generality: 500

Similarity Computation
A mathematical process to quantify the likeness between data objects, often used in AI to enhance pattern recognition and data clustering.
Generality: 675
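A common instance of similarity computation is cosine similarity: the cosine of the angle between two vectors, 1 for identical direction and 0 for orthogonal. The helper name is illustrative.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors (illustrative helper)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(cosine_similarity([1, 0, 1], [1, 0, 1]))  # 1.0 (same direction)
print(cosine_similarity([1, 0], [0, 1]))        # 0.0 (orthogonal)
```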

MoE
Mixture of Experts
ML architecture that utilizes multiple specialist models (experts) to handle different parts of the input space, coordinated by a gating mechanism that decides which expert to use for each input.
Generality: 705
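The gating idea in miniature: two "experts", each a specialist for part of the input space, and a hard gate that routes each input to one of them. All functions below are made-up toy examples; real MoE layers use learned, often soft, gating.

```python
def expert_negative(x):   # specialist for x < 0
    return -x

def expert_positive(x):   # specialist for x >= 0
    return x * 2

def gate(x):
    """Gating mechanism: pick the expert responsible for this input."""
    return expert_negative if x < 0 else expert_positive

def moe_predict(x):
    return gate(x)(x)

print(moe_predict(-3), moe_predict(4))  # 3 8
```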

Continuous Learning
Learning paradigm in which systems and models learn incrementally from a stream of data, updating their knowledge without forgetting previous information.
Generality: 870

Early Stopping
A regularization technique used to prevent overfitting in ML models by halting training when performance on a validation set begins to degrade.
Generality: 675
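A sketch of the early-stopping rule: track the best validation loss and halt once it has failed to improve for `patience` consecutive epochs. The loss values below are made-up.

```python
# Simulated per-epoch validation losses (illustrative data).
val_losses = [0.9, 0.7, 0.6, 0.55, 0.56, 0.58, 0.60, 0.61]

patience, best, bad_epochs = 2, float("inf"), 0
stopped_at = len(val_losses)
for epoch, loss in enumerate(val_losses):
    if loss < best:
        best, bad_epochs = loss, 0       # improvement: reset the counter
    else:
        bad_epochs += 1                  # no improvement this epoch
        if bad_epochs >= patience:
            stopped_at = epoch           # halt before overfitting worsens
            break

print(stopped_at, best)  # stops at epoch 5 with best loss 0.55
```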

Classifier
ML model that categorizes data into predefined classes.
Generality: 861

LDA
Latent Dirichlet Allocation
Generative statistical model often used in natural language processing to discover hidden (or latent) topics within a collection of documents.
Generality: 794

Recognition Model
Element of AI that identifies patterns and features in data through learning processes.
Generality: 790

Convergence
The point at which an algorithm or learning process stabilizes, reaching a state where further iterations or data input do not significantly alter its outcome.
Generality: 845
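Convergence in miniature: Newton's iteration for the square root of 2 reaches a fixed point where further steps no longer change the estimate. A stdlib sketch with an illustrative tolerance:

```python
# Newton's iteration for x^2 = 2, stopped once updates become negligible.
x, prev, steps = 1.0, 0.0, 0
while abs(x - prev) > 1e-12:
    prev, x = x, (x + 2 / x) / 2  # Newton update; the fixed point is sqrt(2)
    steps += 1

print(steps, round(x, 10))  # converges in a handful of steps
```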

Model-Based Classifier
ML algorithm that uses a pre-defined statistical model to make predictions based on input data.
Generality: 835