Nicolas Papernot
(3 articles)
2004
Adversarial Attacks
Manipulating input data to deceive machine learning models, causing them to make incorrect predictions or classifications.
Generality: 650
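As an illustration of the idea above, here is a minimal numpy sketch of one classic attack, the fast gradient sign method (FGSM), applied to a toy logistic-regression model. The model, weights, and numbers are invented for illustration and are not from the source: the attack perturbs the input in the direction that increases the model's loss, flipping its prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One untargeted FGSM step: perturb x in the direction that
    increases the logistic loss for the true label y (+1 or -1)."""
    margin = y * (w @ x + b)
    grad_x = -y * sigmoid(-margin) * w   # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Toy model (hypothetical): the classifier looks only at the first feature.
w = np.array([2.0, 0.0])
b = 0.0
x = np.array([0.3, 0.5])
y = 1                                    # correctly classified: w @ x + b = 0.6 > 0
x_adv = fgsm(x, y, w, b, eps=0.5)        # small perturbation flips the prediction
```

With `eps=0.5` the perturbed input crosses the decision boundary (`w @ x_adv + b < 0`) even though the change to each feature is small, which is the hallmark of an adversarial attack.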
2014
Targeted Adversarial Examples
Inputs intentionally designed to cause a machine learning model to misclassify them into a specific, incorrect category.
Generality: 255
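The targeted variant can be sketched the same way: instead of merely increasing the loss, the attacker descends the cross-entropy of a chosen target class so the model is pushed toward that specific wrong label. The linear 3-class model below is a made-up example, not from the source.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def targeted_fgsm(x, W, b, target, eps):
    """One targeted FGSM step: decrease the cross-entropy of the
    target class, nudging the model toward that (wrong) label."""
    p = softmax(W @ x + b)
    onehot = np.zeros_like(p)
    onehot[target] = 1.0
    grad_x = W.T @ (p - onehot)          # d(CE_target)/dx for a linear model
    return x - eps * np.sign(grad_x)     # descend, i.e. move toward the target

# Hypothetical 3-class linear classifier with random weights.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
b = np.zeros(3)
x = rng.normal(size=4)
orig = int(np.argmax(W @ x + b))
target = (orig + 1) % 3                  # pick a specific incorrect class
x_adv = x
for _ in range(50):                      # iterate small steps toward the target
    x_adv = targeted_fgsm(x_adv, W, b, target, eps=0.05)
```

After the iterations, the probability the model assigns to the chosen target class rises, which is exactly what distinguishes a targeted adversarial example from an untargeted one.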
2019
Double Descent
Phenomenon in ML where the prediction error on test data first decreases, then increases near the interpolation threshold, and then decreases again as model complexity grows.
Generality: 715