What is Chebyshev’s Inequality?

Posted by Muhammad Taheir

Chebyshev’s Inequality

Chebyshev’s Inequality is a formula in probability theory that relates to the distribution of numbers in a set. The formula was originally developed by Chebyshev’s friend and colleague, Irénée-Jules Bienaymé. In layman’s terms, the formula helps determine how many values can lie within, and how many can lie outside, a given number of standard deviations. The standard deviation is a statistically determined number that tells how far values tend to be from the average of the set. For data that follow a roughly normal, bell-shaped distribution, about two-thirds of the values fall within one standard deviation of the mean; Chebyshev’s Inequality, by contrast, gives a guarantee that holds for any distribution.
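
As a quick illustration of what the standard deviation measures, the short Python sketch below (not from the original post; the sample numbers are made up purely for illustration) computes the mean and standard deviation of a small data set and counts how many values fall within one standard deviation of the mean.

```python
# Minimal sketch: mean, standard deviation, and how many values fall
# within one standard deviation of the mean. The data are hypothetical.
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]      # hypothetical sample
mean = statistics.mean(data)          # average of the set
std = statistics.pstdev(data)         # population standard deviation

within_one_sd = [x for x in data if abs(x - mean) <= std]
print(f"mean = {mean}, standard deviation = {std}")
print(f"{len(within_one_sd)} of {len(data)} values lie within one standard deviation")
```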

Chebyshev’s Inequality Definition

[Image: Pafnuty Chebyshev]

Chebyshev’s Inequality can prove, with very little information about the distribution, a bound on the probability that outliers exist beyond a certain interval. Given that X is a random variable, A is the mean of the set, K is the number of standard deviations, and Y is the value of the standard deviation, the formula reads as follows: Pr(|X − A| ≥ KY) ≤ 1/K². In words: the probability that the absolute difference between X and A is at least K times Y is no more than one divided by K squared. You can learn how to calculate Chebyshev’s Inequality here.
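
To see the bound in action, here is a small Python sketch (not from the original post; the exponential distribution is chosen only for illustration) that compares the empirical probability Pr(|X − A| ≥ KY) against the Chebyshev bound 1/K² for a few values of K.

```python
# Minimal sketch: compare the empirical tail probability Pr(|X - A| >= K*Y)
# with the Chebyshev bound 1/K^2, using a hypothetical exponential sample.
import random
import statistics

random.seed(0)
samples = [random.expovariate(1.0) for _ in range(100_000)]  # X values
A = statistics.mean(samples)      # mean of the set
Y = statistics.pstdev(samples)    # standard deviation

for K in (1.5, 2, 3):
    tail = sum(abs(x - A) >= K * Y for x in samples) / len(samples)
    print(f"K={K}: empirical Pr(|X-A| >= K*Y) = {tail:.4f}, bound 1/K^2 = {1/K**2:.4f}")
```

The empirical tail probabilities come out well below the bound, which is expected: Chebyshev’s Inequality is a worst-case guarantee over all possible distributions.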
Chebyshev’s Theorem Uses
The formula was used, together with calculus, to develop the weak version of the law of large numbers. This law states that as a sample grows in size, its average should move closer to the theoretical mean. A common example is rolling a six-sided die, where the expected average is 3.5. A sample of only 5 rolls may produce a drastically different average. If you roll the die 20 times, the average should begin approaching 3.5. As you add more and more rolls, the average should keep nearing 3.5 until it reaches, or becomes essentially equal to, that value.
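
The die example is easy to simulate. The sketch below (not from the original post) rolls a fair six-sided die and prints the running average at a few sample sizes, showing it drift toward the expected value of 3.5 as the weak law of large numbers predicts.

```python
# Minimal sketch: the running average of fair die rolls approaches 3.5
# as the number of rolls grows.
import random

random.seed(1)
rolls = []
for n in (5, 20, 1_000, 100_000):
    while len(rolls) < n:
        rolls.append(random.randint(1, 6))   # one fair six-sided die roll
    average = sum(rolls) / len(rolls)
    print(f"after {n:>6} rolls: average = {average:.3f}")
```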

Another application is finding the probable difference between the mean and median of a set of numbers. Using a one-sided version of Chebyshev’s Inequality, also known as Cantelli’s inequality, you can prove that the absolute value of the difference between the mean and the median is always less than or equal to one standard deviation. This is handy for checking whether a median you derived is statistically plausible.
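
As a quick sanity check of that bound, the sketch below (not from the original post; the deliberately skewed sample is made up) computes the mean, median, and standard deviation of a data set and verifies that |mean − median| ≤ standard deviation.

```python
# Minimal sketch: verify |mean - median| <= standard deviation on a
# hypothetical, deliberately skewed sample.
import statistics

data = [1, 1, 2, 2, 3, 3, 4, 50]     # hypothetical, skewed by one outlier
mean = statistics.mean(data)
median = statistics.median(data)
std = statistics.pstdev(data)

print(f"mean = {mean:.2f}, median = {median:.2f}, std = {std:.2f}")
print(f"|mean - median| = {abs(mean - median):.2f} <= std: {abs(mean - median) <= std}")
```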