Minimax Loss
A strategy used in optimization and decision-making problems to minimize the maximum possible loss.
Minimax loss is a concept most often applied in adversarial machine learning, where the aim is to build models that perform well even under worst-case conditions. The approach minimizes the maximum loss an adversary can inflict, optimizing the model to be as robust as possible against the most challenging inputs. It draws from game theory, where the minimax principle guarantees the best achievable outcome under the assumption that an opponent is actively trying to maximize one's loss. In AI, particularly in defending models such as neural networks against adversarial attacks, minimax loss provides a critical framework for designing algorithms that remain effective when faced with malicious inputs or perturbations.
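Formally, for a model f_θ, a loss L, and perturbations δ constrained to a set Δ (commonly a small norm ball around the input), the objective can be written as

    min_θ E_{(x, y)} [ max_{δ ∈ Δ} L(f_θ(x + δ), y) ]

The sketch below is one minimal illustration of this objective in PyTorch, approximating the inner maximization with a few steps of projected gradient ascent, in the spirit of PGD-style adversarial training. The model, data, and the epsilon/alpha/steps settings are placeholder assumptions, not a definitive implementation.

```python
import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()

def minimax_loss(model, x, y, epsilon=0.1, alpha=0.02, steps=10):
    """Approximate the minimax objective: the inner maximization over
    perturbations delta (restricted to an L-infinity ball of radius
    epsilon around x) is solved by projected gradient ascent."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        (grad,) = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()     # ascend: increase the loss
            delta.clamp_(-epsilon, epsilon)  # project back into the ball
    # Outer objective: the loss at the (approximate) worst-case input,
    # which the training loop then minimizes over the model parameters.
    return loss_fn(model(x + delta.detach()), y)
```

A training loop would then minimize this quantity over θ as usual, e.g. optimizer.zero_grad(); minimax_loss(model, x, y).backward(); optimizer.step().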
The term "minimax" traces back to the 1940s with origins in game theory, but its application to ML and related fields gained traction in the late 1990s and early 2000s with the rise of adversarial considerations in AI.
Key figures in advancing the idea of minimax loss include the game theorist John von Neumann, who proved the minimax theorem for two-player zero-sum games, and adversarial ML researchers like Ian Goodfellow, whose work on generative adversarial networks (GANs) builds on the same principle.
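In a GAN, for instance, the generator G and discriminator D play exactly this kind of two-player minimax game; the value function from Goodfellow et al. (2014) is

    min_G max_D V(D, G) = E_{x ~ p_data}[log D(x)] + E_{z ~ p_z}[log(1 - D(G(z)))]

where the discriminator maximizes its ability to tell real data from generated samples while the generator minimizes that same quantity.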