Criteria Drift
Phenomenon where the criteria used to evaluate a machine learning (ML) model change over time, leading to a potential decline in the model's measured performance.
In machine learning, criteria drift occurs when the metrics, standards, or conditions used to assess a model's accuracy and effectiveness evolve. This can happen due to changes in the data distribution, shifts in the underlying patterns of the data, or adjustments in business objectives and goals. As a result, a model that once performed well may become less effective, either because it was trained on data that no longer represents the current environment or because it is now evaluated against criteria that were not considered during its development. Addressing criteria drift requires ongoing model monitoring, retraining the model on new data, and potentially redefining the evaluation metrics to ensure continued relevance and accuracy.
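As a minimal sketch of the monitoring step described above, the example below tracks a model's score on successive batches of recently labeled data and flags batches that fall below a deployment-time baseline. The function names (`monitor_drift`, `evaluate`), the tolerance value, and the use of accuracy are illustrative assumptions; in practice the metric would be whatever criterion the business currently uses, which is exactly what may drift.

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Simple accuracy metric; stands in for any business-defined criterion."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def monitor_drift(batches, baseline_score, tolerance=0.05):
    """Flag batches whose score falls more than `tolerance` below the baseline.

    `batches` is an iterable of (y_true, y_pred) pairs collected over time;
    `baseline_score` is the score measured when the model was deployed.
    """
    alerts = []
    for i, (y_true, y_pred) in enumerate(batches):
        score = evaluate(y_true, y_pred)
        if score < baseline_score - tolerance:
            alerts.append((i, score))
    return alerts

# Synthetic example: the error rate grows over time, so later batches trigger alerts.
rng = np.random.default_rng(0)
batches = []
for t in range(6):
    y_true = rng.integers(0, 2, size=200)
    flip = rng.random(200) < (0.05 + 0.04 * t)      # error rate increases each batch
    y_pred = np.where(flip, 1 - y_true, y_true)
    batches.append((y_true, y_pred))

print(monitor_drift(batches, baseline_score=0.95))   # indices and scores of degraded batches
```

A rolling check like this catches degradation against the current metric; if the evaluation criteria themselves are redefined, the baseline and the `evaluate` function must be updated together so alerts remain meaningful.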
The concept of criteria drift has been recognized since the early development of machine learning and statistical modeling, and it gained more attention in the 2010s as machine learning applications became more widespread and complex, especially with the advent of big data and dynamic environments.
While criteria drift is a recognized phenomenon rather than a concept attributed to specific individuals, the broader discussion of model monitoring and maintenance has been significantly advanced by researchers and practitioners in machine learning operations (MLOps) and data science, including contributions from prominent organizations such as Google and Amazon and from academic institutions focused on AI research.