Statistical Independence

During yesterday’s Finance 4335 class meeting, I introduced the concept of statistical independence. During tomorrow’s class meeting, much of our discussion will focus on the implications of statistical independence for probability distributions, such as the binomial and normal distributions, that we will rely upon throughout the semester.

Whenever risks are statistically independent of each other, they are also uncorrelated; i.e., random variations in one variable are not meaningfully related to random variations in another. For example, auto accident risks are largely uncorrelated random variables; just because I happen to get into a car accident, this does not make it any more likely that you will suffer a similar fate (unless, of course, we happen to run into each other!). Another example of statistical independence is a sequence of coin tosses: just because a coin toss comes up “heads,” this does not make it any more likely that subsequent tosses will also come up “heads.”

Computationally, the joint probability that we both get into car accidents, or that heads comes up on two consecutive tosses of a coin, is equal to the product of the individual event probabilities. Suppose your probability of getting into an auto accident during 2017 is 1%, whereas my probability is 2%. Then the likelihood that we both get into auto accidents during 2017 is .01 x .02 = .0002, or .02% (1/50th of 1 percent). Similarly, when tossing a “fair” coin, the probability of observing two “heads” in a row is .5 x .5 = .25, or 25%. The probability rule that emerges from these examples can be generalized as follows:
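If it helps to see that arithmetic verified numerically, here is a minimal Python sketch (my own illustration, not part of the original post) that checks the product rule by simulation, using the hypothetical 1% and 2% accident probabilities from the example:

```python
import random

random.seed(42)

P_YOU, P_ME = 0.01, 0.02   # hypothetical accident probabilities from the example
TRIALS = 1_000_000

# Simulate independent accident outcomes and count how often both occur.
both = sum(
    (random.random() < P_YOU) and (random.random() < P_ME)
    for _ in range(TRIALS)
)

print(f"Simulated joint probability: {both / TRIALS:.6f}")
print(f"Product rule prediction:     {P_YOU * P_ME:.6f}")   # 0.000200
```

With a large number of trials, the simulated frequency settles near the .0002 figure computed above, which is exactly what independence predicts.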

Suppose Xi and Xj are independent events with probabilities pi and pj, respectively. Then the joint probability that both Xi and Xj occur is equal to pi x pj.
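A minimal sketch of that rule in Python (again my own illustration, not from the original post): for any collection of independent events, the joint probability is simply the product of the individual probabilities.

```python
from math import prod

def joint_probability(probabilities):
    """Joint probability that a set of independent events all occur (product rule)."""
    return prod(probabilities)

# Two independent auto accidents with probabilities 1% and 2%:
print(joint_probability([0.01, 0.02]))  # 0.0002, i.e., .02%

# Two consecutive heads on a fair coin:
print(joint_probability([0.5, 0.5]))    # 0.25, i.e., 25%
```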
