Eliezer Yudkowsky
(15 articles)
Singularity
Hypothetical future point at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.
Generality: 800

Intelligence Explosion
Hypothetical scenario in which an AI system rapidly and repeatedly improves its own capabilities, each improvement accelerating the next, culminating in a superintelligence far beyond human-level intelligence.
Generality: 575
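
The underlying feedback loop can be sketched with a toy growth model; the constant c and exponent k below are illustrative assumptions, not quantities defined in the article. Let I(t) denote the system's intelligence and suppose each gain in intelligence speeds up further improvement:

    dI/dt = c · I^k

For k = 1 this gives ordinary exponential growth, I(t) = I_0 · e^(c·t); for k > 1 it gives hyperbolic growth that diverges in finite time t* = I_0^(1−k) / (c·(k−1)), the "explosion" in the name.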

Recursive Self-Improvement
Process by which an AI system iteratively improves itself, enhancing its intelligence and capabilities without human intervention.
Generality: 790
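
A minimal Python sketch of why such a process compounds; the capability scale, per-round gain, and human-level threshold are all invented for illustration:

# Toy model of recursive self-improvement: each round, the system's
# ability to improve itself scales with its current capability.
# All numbers are illustrative placeholders, not real measurements.
def simulate_rsi(capability=1.0, gain=0.1, human_level=100.0, max_rounds=1000):
    rounds = 0
    while capability < human_level and rounds < max_rounds:
        # The improvement applied this round is proportional to current
        # capability, so progress compounds instead of adding linearly.
        capability += gain * capability
        rounds += 1
    return rounds, capability

rounds, final = simulate_rsi()
print(f"Threshold crossed after {rounds} rounds (capability {final:.1f})")

Because each round's gain is proportional to current capability, growth is exponential rather than linear; this compounding is the intuition behind the Intelligence Explosion entry above.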

Gorilla Program
Analogy illustrating the risk of building machines more intelligent than ourselves: just as gorillas' fate now depends on human decisions rather than their own, humanity's fate could come to depend on the decisions of superintelligent machines it no longer controls.
Generality: 670

Fast Takeoff
Scenario in which AI moves from human-level to superintelligent capability over a very short period, leaving little time for humans to respond or intervene.
Generality: 504

Superintelligence
A form of AI that surpasses the cognitive performance of humans in virtually all domains of interest, including creativity, general wisdom, and problem-solving.
Generality: 850

Control Problem
Challenge of ensuring that highly advanced AI systems act in alignment with human values and intentions.
Generality: 845

AI Safety
Field of research aimed at ensuring AI technologies are beneficial and do not pose harm to humanity.
Generality: 870

Roko's Basilisk
Thought experiment proposing that a future all-powerful AI could punish those who did not help bring about its existence.
Generality: 155

Catastrophic Risk
The potential for AI systems to cause large-scale harm or failure due to unforeseen vulnerabilities, operational errors, or misuse.
Generality: 775

Instrumental Convergence
Thesis that intelligent agents with widely differing final goals will nonetheless pursue common sub-goals, such as self-preservation and resource acquisition, because those sub-goals are useful for almost any primary objective.
Generality: 635

AI Failure Modes
Diverse scenarios in which an AI system fails to perform as expected or produces unintended consequences.
Generality: 714

Alignment
Process of ensuring that an AI system's goals and behaviors are consistent with human values and ethics.
Generality: 790

AI Governance
Set of policies, principles, and practices that guide the ethical development, deployment, and regulation of artificial intelligence technologies.
Generality: 860

P(doom)
Probability of an existential catastrophe, typically one brought about by advanced AI; shorthand common in AI safety and risk-assessment discussions.
Generality: 550
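
One common way to reason about this number is to decompose it into conditional stages and multiply them. The stages and values in this Python sketch are hypothetical placeholders, not estimates from the source:

# Hypothetical decomposition of P(doom) into conditional stages.
# Every probability here is a made-up illustration, not a forecast.
p_agi = 0.5                    # P(transformative AI is developed)
p_misaligned_given_agi = 0.3   # P(its goals diverge from ours, given development)
p_doom_given_misaligned = 0.4  # P(existential catastrophe, given misalignment)

p_doom = p_agi * p_misaligned_given_agi * p_doom_given_misaligned
print(f"P(doom) = {p_doom:.2f}")  # prints 0.06 with these toy inputs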