Cherry Picking

The practice of selectively choosing the most favorable results from multiple outputs generated by an algorithm, often used to present the algorithm in a better light than its typical performance warrants.

Cherry picking in the context of AI involves selectively presenting results that are most advantageous or desirable from a set of possibilities produced by an AI system, while disregarding less favorable outcomes. This practice can lead to a biased representation of an AI's capabilities, as it does not accurately reflect the average or typical performance of the algorithm. In research and development, this behavior can compromise the integrity of evaluations, as it masks potential flaws or limitations of AI models. In practical applications, such as in product demonstrations or AI benchmarks, cherry picking can mislead stakeholders about the system's true effectiveness, impacting decision-making and policy-setting.
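The gap between cherry-picked and typical performance can be seen in a minimal simulation. The scores below are hypothetical draws from a fixed distribution, standing in for the varied results a stochastic training or evaluation process might produce; reporting only the best of several runs inflates the apparent accuracy relative to the average:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def run_model() -> float:
    """Stand-in for one stochastic evaluation run.

    Hypothetical: draws an accuracy score from a normal distribution
    centered at 0.70; a real run would train and evaluate a model.
    """
    return random.gauss(0.70, 0.05)

# Several independent runs, as is common with stochastic training.
scores = [run_model() for _ in range(20)]

mean_score = sum(scores) / len(scores)  # honest summary of typical performance
best_score = max(scores)                # cherry-picked summary

print(f"mean accuracy over 20 runs: {mean_score:.3f}")
print(f"best-of-20 accuracy (cherry-picked): {best_score:.3f}")
```

The best-of-N score always meets or exceeds the mean, and the gap widens with more runs and higher variance, which is why reporting protocols at venues such as NeurIPS ask for means and variability across runs rather than a single selected result.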

While the term "cherry picking" is not unique to AI and has been used in various fields to describe selective data presentation, its application to AI became more prominent with the rise of machine learning and big data analytics in the early 21st century. As AI systems began producing varied results across different runs, the need to address the implications of selective result presentation gained attention.

No specific individuals are credited with the concept of cherry picking in AI, as it is a broader ethical and methodological concern discussed among the AI community. Organizations such as the Association for Computing Machinery (ACM) and conferences like NeurIPS have highlighted the importance of avoiding such practices by promoting reproducibility, transparency, and fairness in AI research and applications.
