Bias
Systematic errors in data or algorithms that create unfair outcomes, such as privileging one arbitrary group of users over others.
Bias in AI can manifest in several forms, including data bias, algorithmic bias, and societal bias. Data bias occurs when the dataset used to train a model does not accurately represent the target population, producing skewed or unfair outcomes. Algorithmic bias arises when the algorithms themselves amplify existing prejudices, often because of assumptions made during development. Societal bias reflects broader social inequalities that AI systems can perpetuate. Bias matters because it can lead to discrimination in critical areas such as recruitment, lending, and law enforcement, making fairness, accountability, and transparency central concerns in AI. One rough way to surface data bias is sketched in the example below.
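As a minimal illustration, one common audit for data bias is to compare the rate of favorable outcomes across groups in a dataset. The sketch below computes a demographic parity difference, one widely used (though not the only) fairness metric, on synthetic records; the group names, outcomes, and values are all hypothetical and exist purely for illustration.

```python
# A minimal sketch of auditing a dataset for group-level bias.
# All records here are synthetic; "group" and "outcome" are
# hypothetical fields, not from any real dataset.

# Hypothetical records: outcome 1 = favorable (e.g., loan approved)
samples = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "A", "outcome": 0}, {"group": "A", "outcome": 1},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 0},
    {"group": "B", "outcome": 1}, {"group": "B", "outcome": 0},
]

def favorable_rate(rows, group):
    """Fraction of records in `group` with the favorable outcome."""
    in_group = [r for r in rows if r["group"] == group]
    return sum(r["outcome"] for r in in_group) / len(in_group)

rate_a = favorable_rate(samples, "A")
rate_b = favorable_rate(samples, "B")

# Demographic parity difference: values far from 0 suggest the data
# (or a model trained on it) favors one group over the other.
parity_gap = rate_a - rate_b
print(f"Group A favorable rate: {rate_a:.2f}")  # 0.75
print(f"Group B favorable rate: {rate_b:.2f}")  # 0.25
print(f"Demographic parity difference: {parity_gap:.2f}")  # 0.50
```

A large gap like the one above does not by itself prove unfairness, but it flags a skew in the training data worth investigating before the model is deployed.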
The term "bias" has been used in a statistical and machine learning context since at least the 1950s, but its application and discussion in the context of AI fairness and ethics gained prominence in the 2010s as AI systems became more widespread in decision-making roles.
Key contributors to the study and mitigation of bias in AI include researchers focused on AI ethics, such as Joy Buolamwini and Timnit Gebru, and organizations such as the Algorithmic Justice League, which work to highlight and reduce bias in AI systems.