Responsible AI

The application of AI in a manner that is transparent, unbiased, and respectful of user privacy and values.

Responsible AI refers to the development and deployment of artificial intelligence technologies in a manner that is ethical, transparent, accountable, and protective of user privacy. It involves strategies to ensure that AI systems do not propagate harmful biases, are explainable and understandable, and operate within accepted societal and ethical boundaries. Emphasis is placed on applying AI in ways that respect values such as fairness and that minimize negative consequences for individuals and society, including undue influence over user behavior, unjust discrimination, and breaches of privacy.
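To make the bias-related goal above concrete, the following minimal Python sketch (an illustration, not part of any standard or any specific company's practice; the data, group labels, and function name are hypothetical) computes a demographic parity gap, one simple way auditors quantify whether a model's positive predictions are distributed unevenly across groups.

```python
def demographic_parity_difference(predictions, groups):
    """Return the gap in positive-prediction rates across groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels (e.g., "A" or "B"), same length
    A result of 0.0 means the model predicts positives at equal rates.
    """
    rates = {}
    for label in set(groups):
        # Positive-prediction rate for each group's members.
        members = [p for p, g in zip(predictions, groups) if g == label]
        rates[label] = sum(members) / len(members)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Hypothetical example: the model approves group A at a 0.75 rate but
# group B at only 0.25, a 0.5 gap that a fairness review would flag.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A single metric like this cannot certify fairness on its own; in practice it is one of several checks (alongside explainability and privacy reviews) that a Responsible AI process might apply.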

The term "Responsible AI" gained traction in the 2010s as AI technology became increasingly embedded in daily life. Because AI systems carry potential for misuse and unintended harmful consequences, the idea of implementing AI responsibly emerged to address those risks. Although the term's first use is hard to pin down, Google's AI Principles, published in 2018, helped cement its importance in the industry.

Significant contributors to Responsible AI include groups such as Google's AI Ethics Board, established in 2014, and Microsoft's AI Ethics Committee. These groups and others like them have helped popularize the term and embed the principles of Responsible AI into their companies' development and deployment practices.