Conditional Probability
Measures the likelihood of an event occurring, given that another event has already occurred.
Conditional probability is a fundamental concept in probability theory and statistics that quantifies the probability of an event A given the occurrence of another event B. Denoted as P(A|B), it is calculated by the formula P(A|B) = P(A ∩ B) / P(B), provided that P(B) > 0. This concept is crucial for understanding the dependence between events and for updating the probability of events as new information becomes available. It forms the basis for many statistical methods and algorithms, including Bayesian inference, which uses prior knowledge along with new evidence to make statistical decisions.
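As a minimal sketch of the defining formula, the Python snippet below computes P(A|B) for a concrete scenario, both exactly over a finite sample space and via simulation. The dice setup, function names, and numbers are illustrative assumptions, not part of the definition above.

```python
import random

# Illustrative scenario (assumption for this sketch): roll two fair dice.
# A = "the sum is 8", B = "the first die shows an even number".
# By the formula, P(A|B) = P(A and B) / P(B), provided P(B) > 0.

def exact_conditional():
    # Enumerate the 36 equally likely outcomes of two dice.
    outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]
    p_b = sum(1 for i, j in outcomes if i % 2 == 0) / len(outcomes)
    p_a_and_b = sum(1 for i, j in outcomes
                    if i % 2 == 0 and i + j == 8) / len(outcomes)
    return p_a_and_b / p_b  # P(A|B)

def simulated_conditional(trials=100_000, seed=0):
    # Monte Carlo estimate: restrict attention to trials where B occurred,
    # then measure how often A occurred among them.
    rng = random.Random(seed)
    b_count = 0
    a_and_b_count = 0
    for _ in range(trials):
        i, j = rng.randint(1, 6), rng.randint(1, 6)
        if i % 2 == 0:
            b_count += 1
            if i + j == 8:
                a_and_b_count += 1
    return a_and_b_count / b_count

print(f"exact     P(A|B) = {exact_conditional():.4f}")    # 3/18 = 0.1667
print(f"simulated P(A|B) ~ {simulated_conditional():.4f}")
```

Here P(B) = 18/36 and P(A ∩ B) = 3/36 (the outcomes (2,6), (4,4), (6,2)), so P(A|B) = 1/6; the simulated estimate illustrates how conditioning amounts to restricting the sample space to outcomes where B occurred.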
The formal concept of conditional probability was developed in the 18th century by early probability theorists such as Thomas Bayes and Pierre-Simon Laplace, and it was defined more rigorously with the advent of modern probability theory in the early 20th century. The English statistician Ronald A. Fisher, credited with introducing the notation P(A|B) in 1921, further contributed to its formal notation and theoretical foundations in the context of statistical inference.