During today’s Finance 4335 class meeting, I introduced the concept of *statistical independence*. On Thursday, much of our class discussion will focus on the implications of statistical independence for probability distributions such as the binomial and normal distributions which we will rely upon throughout the semester.

Whenever risks are statistically independent of each other, this implies that they are uncorrelated; i.e., random variations in one variable are not meaningfully related to random variations in another. For example, auto accident risks are largely uncorrelated random variables; just because I happen to get into a car accident, this does not make it any more likely that you will suffer a similar fate (that is, unless we happen to run into each other!). Another example of statistical independence is a sequence of coin tosses. Just because a coin toss comes up “heads,” this does not make it any more likely that subsequent coin tosses will also come up “heads.”

Computationally, the joint probability that we both get into car accidents, or that heads comes up on two consecutive tosses of a coin, is equal to the product of the two event probabilities. Suppose your probability of getting into an auto accident during the coming year is 1%, whereas my probability is 2%. Then the likelihood that we both get into auto accidents during the coming year is .01 x .02 = .0002, or .02% (1/50th of 1 percent). Similarly, when tossing a “fair” coin, the probability of observing two “heads” in a row is .5 x .5 = .25, or 25%. The probability rule which emerges from these examples can be generalized as follows:
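The two calculations above can be sketched in a few lines of Python (the probabilities are taken directly from the examples in the text):

```python
# Joint probability of independent events = product of the individual probabilities.
p_accident_you = 0.01   # your annual auto accident probability (1%)
p_accident_me = 0.02    # my annual auto accident probability (2%)

p_both_accidents = p_accident_you * p_accident_me
print(p_both_accidents)  # 0.0002, i.e., .02% (1/50th of 1 percent)

p_heads = 0.5            # probability of heads on a single fair coin toss
p_two_heads = p_heads * p_heads
print(p_two_heads)       # 0.25, i.e., 25%
```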

Suppose X_i and X_j are independent random variables with probabilities p_i and p_j, respectively. Then the *joint* probability that both X_i and X_j occur is equal to p_i x p_j.
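This multiplication rule is easy to verify with a short Monte Carlo simulation. The sketch below (a minimal illustration, not part of the course materials) simulates many pairs of independent fair coin tosses and checks that the empirical frequency of two heads is close to .5 x .5 = .25:

```python
import random

random.seed(42)          # fix the seed so the simulation is reproducible
n = 100_000              # number of simulated pairs of tosses

both_heads = 0
for _ in range(n):
    toss1 = random.random() < 0.5   # first fair coin toss
    toss2 = random.random() < 0.5   # second toss, drawn independently of the first
    if toss1 and toss2:
        both_heads += 1

print(both_heads / n)    # close to 0.25, as the product rule predicts
```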