Nick Bostrom

(16 articles)
Singularity
1958

Singularity

Hypothetical future point at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.

Generality: 800

Intelligence Explosion
1965

Intelligence Explosion

Hypothetical scenario where an AI system rapidly improves its own capabilities and intelligence, leading to a superintelligent AI far surpassing human intelligence.

Generality: 575

Recursive Self-Improvement
1965

Recursive Self-Improvement

Process by which an AI system iteratively improves itself, enhancing its intelligence and capabilities without human intervention.

Generality: 790

Gorilla Program
1996

Gorilla Program

Concept in AI that illustrates the potential risk of superintelligent machines surpassing human control.

Generality: 670

Fast Takeoff
1998

Fast Takeoff

Rapid transition from human-level to superintelligent AI, occurring in a very short period of time.

Generality: 504

Superintelligence
1998

Superintelligence

A form of AI that surpasses the cognitive performance of humans in virtually all domains of interest, including creativity, general wisdom, and problem-solving.

Generality: 850

Control Problem
2000

Control Problem

Challenge of ensuring that highly advanced AI systems act in alignment with human values and intentions.

Generality: 845

AI Safety
2000

AI Safety

Field of research aimed at ensuring AI technologies are beneficial and do not pose harm to humanity.

Generality: 870

Paperclip Maximizer
2003

Paperclip Maximizer

Theoretical AI designed to maximize the production of paperclips, illustrating the potential dangers of an AI system pursuing a goal without proper constraints.

Generality: 325

WBE (Whole Brain Emulation)
2008

WBE
Whole Brain Emulation

Hypothetical process of scanning a biological brain in detail and replicating its state and processes in a computational system to achieve functional and experiential equivalence.

Generality: 540

Catastrophic Risk
2010

Catastrophic Risk

The potential for AI systems to cause large-scale harm or failure due to unforeseen vulnerabilities, operational errors, or misuse.

Generality: 775

Instrumental Convergence
2013

Instrumental Convergence

Suggests that diverse intelligent agents will likely pursue common sub-goals, such as self-preservation and resource acquisition, to achieve their primary objectives.

Generality: 635

AI Failure Modes
2016

AI Failure Modes

Diverse scenarios where AI systems do not perform as expected or generate unintended consequences.

Generality: 714

Alignment
2016

Alignment

Process of ensuring that an AI system's goals and behaviors are consistent with human values and ethics.

Generality: 790

AI Governance
2016

AI Governance

Set of policies, principles, and practices that guide the ethical development, deployment, and regulation of artificial intelligence technologies.

Generality: 860

P(Doom)
2022

P
Doom

Probability of an existential catastrophe, often discussed within the context of AI safety and risk assessment.

Generality: 550