P(Doom)

The probability of an existential catastrophe, most often discussed in the context of AI safety and risk assessment.

P(doom) is a theoretical construct used primarily in discussions of global catastrophic risk, particularly risk associated with advanced artificial intelligence. It denotes a person's subjective probability that an existential catastrophe will occur: an event that would annihilate Earth-originating intelligent life or permanently and drastically curtail its potential. For researchers working on AI alignment and safety, the term frames the stakes of developing AI technologies that do not inadvertently cause harm on a global scale, and it motivates rigorous safety protocols and ethical scrutiny aimed at driving the probability of catastrophic outcomes down as AI systems become more capable.
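
Because P(doom) is a subjective credence rather than a measured quantity, one informal way commentators arrive at a figure is to decompose the question into a chain of conditional steps and multiply their estimates. The sketch below illustrates only that arithmetic; the factor names and numbers are hypothetical placeholders chosen for the example, not estimates drawn from the literature.

```python
def p_doom(factors: dict[str, float]) -> float:
    """Multiply a chain of conditional probabilities P(step | earlier steps)."""
    result = 1.0
    for name, p in factors.items():
        if not 0.0 <= p <= 1.0:
            raise ValueError(f"{name!r} must be a probability in [0, 1]")
        result *= p
    return result


# Hypothetical decomposition with made-up illustrative numbers:
factors = {
    "advanced AI is developed this century": 0.8,
    "it is misaligned, given it is developed": 0.4,
    "misalignment leads to existential catastrophe": 0.25,
}

print(f"P(doom) = {p_doom(factors):.2f}")  # -> P(doom) = 0.08
```

A multiplicative decomposition like this is only valid if each factor is conditioned on all the earlier steps; treating genuinely dependent steps as independent would bias the resulting estimate.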

The term "P(doom)" became more commonly used in academic and tech circles discussing AI risks in the early 21st century. Although the concept of existential risk has been around since the mid-20th century, discussions specifically quantifying these risks in the context of AI safety began gaining traction in the 2010s, particularly among researchers affiliated with institutions like the Future of Humanity Institute and the Machine Intelligence Research Institute.

While the specific term "P(doom)" cannot be attributed to a single individual, key figures in the discourse on existential risk and AI safety include Nick Bostrom and Eliezer Yudkowsky. Their work, along with that of other researchers in AI ethics and safety, has been instrumental in advancing understanding of, and public awareness about, the potential risks posed by misaligned or uncontrolled artificial intelligence.

Generality: 0.55