Covariance
How much two random variables change together
In statistical terms, covariance is a measure of the extent to which two variables change in tandem. If greater values of one variable tend to accompany greater values of the other, the covariance is positive; if greater values of one tend to accompany lesser values of the other, it is negative. Covariance is crucial in fields such as finance (for portfolio optimization), signal processing, and machine learning (especially in algorithms like PCA—Principal Component Analysis), as it helps in understanding the relationship and dependence between variables.
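Formally, the covariance of random variables X and Y is

    cov(X, Y) = E[(X − E[X])(Y − E[Y])] = E[XY] − E[X]·E[Y],

and the unbiased sample estimate divides the summed cross-deviations by n − 1. A minimal sketch in Python (the helper name sample_cov is illustrative, not a standard API; numpy's np.cov is used only as a cross-check):

    import numpy as np

    def sample_cov(x, y):
        """Unbiased sample covariance: sum((x_i - x_bar)(y_i - y_bar)) / (n - 1)."""
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        n = len(x)
        return float(np.sum((x - x.mean()) * (y - y.mean())) / (n - 1))

    # Two series that tend to rise together -> positive covariance
    x = [1.0, 2.0, 3.0, 4.0, 5.0]
    y = [2.0, 4.0, 5.0, 4.0, 6.0]
    print(sample_cov(x, y))    # 2.0
    print(np.cov(x, y)[0, 1])  # same value; np.cov normalizes by n - 1 by default

Note that the magnitude of the covariance depends on the scales of X and Y; dividing by the product of the standard deviations yields the scale-free correlation coefficient.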
The concept of covariance grew out of the late nineteenth-century work on correlation and regression, but it became more formally defined and widely used in statistical methods and applications around the 1930s, particularly with the advancement of econometrics and the work of statisticians like Ronald Fisher.
Karl Pearson, a pioneering statistician, played a central role in the development of correlation and regression, both of which are closely related to covariance. Ronald Fisher later extended the field with methods such as the analysis of covariance.