Distributions of random variables


Normal distribution

Among all the distributions we see in practice, one is overwhelmingly the most common. The symmetric, unimodal, bell curve is ubiquitous throughout statistics. Indeed it is so common that people often know it as the normal curve or normal distribution,[1] shown in Figure [simpleNormal]. Variables such as SAT scores and heights of US adult males closely follow the normal distribution.

Many variables are nearly normal, but none are exactly normal. Thus the normal distribution, while not perfect for any single problem, is very useful for a variety of problems. We will use it in data exploration and to solve important problems in statistics.

Normal distribution model

The normal distribution model always describes a symmetric, unimodal, bell-shaped curve. However, these curves can look different depending on the details of the model. Specifically, the normal distribution model can be adjusted using two parameters: mean and standard deviation. As you can probably guess, changing the mean shifts the bell curve to the left or right, while changing the standard deviation stretches or constricts the curve. Figure [twoSampleNormals] shows the normal distribution with mean 0 and standard deviation 1 in the left panel and the normal distribution with mean 19 and standard deviation 4 in the right panel. Figure [twoSampleNormalsStacked] shows these distributions on the same axis.

Fichier:Ch distributions/figures/twoSampleNormals/twoSampleNormals
caption Both curves represent the normal distribution, however, they differ in their center and spread. The normal distribution with mean 0 and standard deviation 1 is called the standard normal distribution.
Fichier:Ch distributions/figures/twoSampleNormalsStacked/twoSampleNormalsStacked
caption The normal models shown in Figure [twoSampleNormals] but plotted together and on the same scale.

If a normal distribution has mean μ and standard deviation σ, we may write the distribution as N(μ, σ). The two distributions in Figure [twoSampleNormalsStacked] can be written as N(μ = 0, σ = 1) and N(μ = 19, σ = 4). Because the mean and standard deviation describe a normal distribution exactly, they are called the distribution’s parameters.

Write down the short-hand for a normal distribution with[2]

(a) mean 5 and standard deviation 3,

(b) mean -100 and standard deviation 10, and

(c) mean 2 and standard deviation 9.

Standardizing with Z-scores

The table below shows the mean and standard deviation for total scores on the SAT and ACT. The distributions of SAT and ACT scores are both nearly normal. Suppose Ann scored 1800 on her SAT and Tom scored 24 on his ACT. Who performed better?[actSAT] We use the standard deviation as a guide. Ann is 1 standard deviation above average on the SAT: 1500 + 300 = 1800. Tom is 0.6 standard deviations above the mean on the ACT: 21 + 0.6 × 5 = 24. In Figure [satActNormals], we can see that Ann tends to do better with respect to everyone else than Tom did, so her score was better.

        SAT     ACT
Mean    1500    21
SD      300     5


Fichier:Ch distributions/figures/satActNormals/satActNormals
caption Ann’s and Tom’s scores shown with the distributions of SAT and ACT scores.

Example [actSAT] used a standardization technique called a Z-score, a method most commonly employed for nearly normal observations but that may be used with any distribution. The Z-score of an observation is defined as the number of standard deviations it falls above or below the mean. If the observation is one standard deviation above the mean, its Z-score is 1. If it is 1.5 standard deviations below the mean, then its Z-score is -1.5. If x is an observation from a distribution N(μ, σ), we define the Z-score mathematically as Z = (x − μ)/σ. Using μ = 1500, σ = 300, and x = 1800, we find Ann’s Z-score: Z = (1800 − 1500)/300 = 1.

The Z-score of an observation is the number of standard deviations it falls above or below the mean. We compute the Z-score for an observation x that follows a distribution with mean μ and standard deviation σ using

Z = (x − μ) / σ
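
The Z-score is also easy to compute in software. Here is a minimal sketch in plain Python (the function name and variables are ours, not the text’s):

# Z-score: number of standard deviations an observation falls above or below the mean
def z_score(x, mu, sigma):
    return (x - mu) / sigma

print(z_score(1800, 1500, 300))  # Ann's SAT score: 1.0
print(z_score(24, 21, 5))        # Tom's ACT score: 0.6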

Use Tom’s ACT score, 24, along with the ACT mean and standard deviation to compute his Z-score.[3]

Observations above the mean always have positive Z-scores while those below the mean have negative Z-scores. If an observation is equal to the mean (e.g. SAT score of 1500), then the Z-score is 0.

Let X represent a random variable from N(μ = 3, σ = 2), and suppose we observe x = 5.19. (a) Find the Z-score of x. (b) Use the Z-score to determine how many standard deviations above or below the mean x falls.[4]

[headLZScore] Head lengths of brushtail possums follow a nearly normal distribution with mean 92.6 mm and standard deviation 3.6 mm. Compute the Z-scores for possums with head lengths of 95.4 mm and 85.8 mm.[5]

We can use Z-scores to roughly identify which observations are more unusual than others. One observation x1 is said to be more unusual than another observation x2 if the absolute value of its Z-score is larger than the absolute value of the other observation’s Z-score: |Z1| > |Z2|. This technique is especially insightful when a distribution is symmetric.

Which of the observations in Guided Practice [headLZScore] is more unusual?[6]

Normal probability table

Ann from Example [actSAT] earned a score of 1800 on her SAT with a corresponding Z = 1. She would like to know what percentile she falls in among all SAT test-takers. Ann’s percentile is the percentage of people who earned a lower SAT score than Ann. We shade the area representing those individuals in Figure [satBelow1800]. The total area under the normal curve is always equal to 1, and the proportion of people who scored below Ann on the SAT is equal to the area shaded in Figure [satBelow1800]: 0.8413. In other words, Ann is in the 84th percentile of SAT takers.

Fichier:Ch distributions/figures/satBelow1800/satBelow1800
caption The normal model for SAT scores, shading the area of those individuals who scored below Ann.

We can use the normal model to find percentiles. A normal probability table, which lists Z-scores and corresponding percentiles, can be used to identify a percentile based on the Z-score (and vice versa). Statistical software can also be used.

A normal probability table is given in the Appendix and abbreviated in Table [zTableShort]. We use this table to identify the percentile corresponding to any particular Z-score. For instance, the percentile of Z = 0.43 is shown in row 0.4 and column 0.03 in Table [zTableShort]: 0.6664, or the 66.64th percentile. Generally, we round Z to two decimals, identify the proper row in the normal probability table up through the first decimal, and then determine the column representing the second decimal value. The intersection of this row and column is the percentile of the observation.

Fichier:Ch distributions/figures/normalTails/normalTails
caption The area to the left of Z represents the percentile of the observation.
A section of the normal probability table. The percentile for a normal random variable with Z = 0.43 has been highlighted, and the percentile closest to 0.8000 has also been highlighted.

         Second decimal place of Z
  Z     0.00   0.01   0.02   0.03   0.04   0.05   0.06   0.07   0.08   0.09
 0.0   0.5000 0.5040 0.5080 0.5120 0.5160 0.5199 0.5239 0.5279 0.5319 0.5359
 0.1   0.5398 0.5438 0.5478 0.5517 0.5557 0.5596 0.5636 0.5675 0.5714 0.5753
 0.2   0.5793 0.5832 0.5871 0.5910 0.5948 0.5987 0.6026 0.6064 0.6103 0.6141
 0.3   0.6179 0.6217 0.6255 0.6293 0.6331 0.6368 0.6406 0.6443 0.6480 0.6517
 0.4   0.6554 0.6591 0.6628 0.6664 0.6700 0.6736 0.6772 0.6808 0.6844 0.6879
 0.5   0.6915 0.6950 0.6985 0.7019 0.7054 0.7088 0.7123 0.7157 0.7190 0.7224
 0.6   0.7257 0.7291 0.7324 0.7357 0.7389 0.7422 0.7454 0.7486 0.7517 0.7549
 0.7   0.7580 0.7611 0.7642 0.7673 0.7704 0.7734 0.7764 0.7794 0.7823 0.7852
 0.8   0.7881 0.7910 0.7939 0.7967 0.7995 0.8023 0.8051 0.8078 0.8106 0.8133
 0.9   0.8159 0.8186 0.8212 0.8238 0.8264 0.8289 0.8315 0.8340 0.8365 0.8389
 1.0   0.8413 0.8438 0.8461 0.8485 0.8508 0.8531 0.8554 0.8577 0.8599 0.8621
 1.1   0.8643 0.8665 0.8686 0.8708 0.8729 0.8749 0.8770 0.8790 0.8810 0.8830

We can also find the Z-score associated with a percentile. For example, to identify Z for the 80th percentile, we look for the value closest to 0.8000 in the middle portion of the table: 0.7995. We determine the Z-score for the 80th percentile by combining the row and column Z values: 0.84.
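
In software, the table lookup can be replaced by the normal cumulative distribution function and its inverse. A short sketch, assuming Python with SciPy is available (the text itself uses the printed table):

from scipy.stats import norm

print(norm.cdf(0.43))  # percentile for Z = 0.43: about 0.6664
print(norm.ppf(0.80))  # Z-score for the 80th percentile: about 0.84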

Determine the proportion of SAT test takers who scored better than Ann on the SAT.[7]

Normal probability examples

Cumulative SAT scores are approximated well by a normal model, N(μ = 1500, σ = 300).

Shannon is a randomly selected SAT taker, and nothing is known about Shannon’s SAT aptitude. What is the probability Shannon scores at least 1630 on her SATs?[satAbove1630Exam] First, always draw and label a picture of the normal distribution. (Drawings need not be exact to be useful.) We are interested in the chance she scores above 1630, so we shade this upper tail:

image

The picture shows the mean and the values at 2 standard deviations above and below the mean. The simplest way to find the shaded area under the curve makes use of the Z-score of the cutoff value. With μ = 1500, σ = 300, and the cutoff value x = 1630, the Z-score is computed as Z = (x − μ)/σ = (1630 − 1500)/300 = 130/300 = 0.43. We look up the percentile of Z = 0.43 in the normal probability table shown in Table [zTableShort] or in the Appendix, which yields 0.6664. However, the percentile describes those who had a Z-score lower than 0.43. To find the area above Z = 0.43, we compute one minus the area of the lower tail: 1 − 0.6664 = 0.3336.

image

The probability Shannon scores at least 1630 on the SAT is 0.3336.
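
The same upper-tail area can be computed directly from the normal model, without standardizing by hand. A sketch assuming SciPy:

from scipy.stats import norm

# P(X >= 1630) for X ~ N(1500, 300); sf() is the upper tail, i.e. 1 - cdf()
print(norm.sf(1630, loc=1500, scale=300))  # about 0.332; the table answer 0.3336 rounds Z to 0.43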

For any normal probability situation, always always always draw and label the normal curve and shade the area of interest first. The picture will provide an estimate of the probability.

After drawing a figure to represent the situation, identify the Z-score for the observation of interest.

If the probability of Shannon scoring at least 1630 is 0.3336, then what is the probability she scores less than 1630? Draw the normal curve representing this exercise, shading the lower region instead of the upper one.[8]

Edward earned a 1400 on his SAT. What is his percentile? [edwardSatBelow1400] First, a picture is needed. Edward’s percentile is the proportion of people who do not get as high as a 1400. These are the scores to the left of 1400.

image

Identifying the mean μ = 1500, the standard deviation σ = 300, and the cutoff for the tail area x = 1400 makes it easy to compute the Z-score: Z = (x − μ)/σ = (1400 − 1500)/300 = −0.33. Using the normal probability table, identify the row of −0.3 and column of 0.03, which corresponds to the probability 0.3707. Edward is at the 37th percentile.

Use the results of Example [edwardSatBelow1400] to compute the proportion of SAT takers who did better than Edward. Also draw a new picture.[9]

The normal probability table in most books gives the area to the left. If you would like the area to the right, first find the area to the left and then subtract this amount from one.

Stuart earned an SAT score of 2100. Draw a picture for each part. (a) What is his percentile? (b) What percent of SAT takers did better than Stuart?[10]

Based on a sample of 100 men,[11] the heights of male adults between the ages 20 and 62 in the US are nearly normal with mean 70.0” and standard deviation 3.3”.

Mike is 5’7” and Jim is 6’4”. (a) What is Mike’s height percentile? (b) What is Jim’s height percentile? Also draw one picture for each part.[12]

The last several problems have focused on finding the probability or percentile for a particular observation. What if you would like to know the observation corresponding to a particular percentile?

Erik’s height is at the 40th percentile. How tall is he?[normalExam40Perc] As always, first draw the picture.

image

In this case, the lower tail probability is known (0.40), which can be shaded on the diagram. We want to find the observation that corresponds to this value. As a first step in this direction, we determine the Z-score associated with the 40th percentile.

Because the percentile is below 50%, we know Z will be negative. Looking in the negative part of the normal probability table, we search for the probability inside the table closest to 0.4000. We find that 0.4000 falls in row −0.2 and between columns 0.05 and 0.06. Since it falls closer to the value in column 0.05, we take this one: Z = −0.25.

Knowing Z = −0.25 and the population parameters μ = 70 and σ = 3.3 inches, the Z-score formula can be set up to determine Erik’s unknown height, labeled x: −0.25 = (x − 70)/3.3. Solving for x yields the height 69.18 inches. That is, Erik is about 5’9” (this is notation for 5-feet, 9-inches).
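
Software can do this inverse lookup in one step with the normal quantile function. A sketch assuming SciPy:

from scipy.stats import norm

# Height at the 40th percentile for N(70, 3.3)
print(norm.ppf(0.40, loc=70, scale=3.3))  # about 69.16 inches; 69.18 above comes from rounding Z to -0.25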

What is the adult male height at the 82nd percentile? Again, we draw the figure first.

image

Next, we want to find the Z-score at the 82nd percentile, which will be a positive value. Looking in the Z-table, we find 0.82 falls in row 0.9 and the nearest column is 0.02, i.e. Z = 0.92. Finally, the height is found using the Z-score formula with the known mean μ = 70, standard deviation σ = 3.3, and Z-score Z = 0.92: 0.92 = (x − 70)/3.3. This yields 73.04 inches or about 6’1” as the height at the 82nd percentile.

(a) What is the 95th percentile for SAT scores? (b) What is the 97.5th percentile of the male heights? As always with normal probability problems, first draw a picture.[13]

[more74Less69] (a) What is the probability that a randomly selected male adult is at least 6’2” (74 inches)? (b) What is the probability that a male adult is shorter than 5’9” (69 inches)?[14]

What is the probability that a random adult male is between 5’9” and 6’2”? These heights correspond to 69 inches and 74 inches. First, draw the figure. The area of interest is no longer an upper or lower tail.

image

The total area under the curve is 1. If we find the area of the two tails that are not shaded (from Guided Practice [more74Less69], these areas are 0.3821 and 0.1131), then we can find the middle area: 1.0000 − 0.3821 − 0.1131 = 0.5048.

image

That is, the probability of being between 5’9” and 6’2” is 0.5048.
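
The middle area can also be computed as a difference of two cumulative probabilities. A sketch assuming SciPy:

from scipy.stats import norm

# P(69 < height < 74) for heights ~ N(70, 3.3)
print(norm.cdf(74, loc=70, scale=3.3) - norm.cdf(69, loc=70, scale=3.3))  # about 0.506; 0.5048 above rounds the Z-scores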

What percent of SAT takers get between 1500 and 2000?[15]

What percent of adult males are between 5’5” and 5’7”?[16]

68-95-99.7 rule

Here, we present a useful rule of thumb for the probability of falling within 1, 2, and 3 standard deviations of the mean in the normal distribution. This will be useful in a wide range of practical settings, especially when trying to make a quick estimate without a calculator or Z-table.

Fichier:Ch distributions/figures/6895997/6895997
caption Probabilities for falling within 1, 2, and 3 standard deviations of the mean in a normal distribution.

Use the Z-table to confirm that about 68%, 95%, and 99.7% of observations fall within 1, 2, and 3 standard deviations of the mean in the normal distribution, respectively. For instance, first find the area that falls between Z = −1 and Z = 1, which should have an area of about 0.68. Similarly there should be an area of about 0.95 between Z = −2 and Z = 2.[17]

It is possible for a normal random variable to fall 4, 5, or even more standard deviations from the mean. However, these occurrences are very rare if the data are nearly normal. The probability of being further than 4 standard deviations from the mean is about 1-in-15,000. For 5 and 6 standard deviations, it is about 1-in-2 million and 1-in-500 million, respectively.
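
Both the rule of thumb and these tail probabilities can be checked with the normal CDF. A short sketch assuming SciPy:

from scipy.stats import norm

for k in (1, 2, 3):
    print(k, norm.cdf(k) - norm.cdf(-k))  # about 0.68, 0.95, and 0.997
print(2 * norm.sf(4))  # two-sided area beyond 4 standard deviations: about 6e-05, roughly 1-in-15,000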

SAT scores closely follow the normal model with mean μ = 1500 and standard deviation σ = 300. (a) About what percent of test takers score 900 to 2100? (b) What percent score between 1500 and 2100?[18]

Evaluating the normal approximation

Many processes can be well approximated by the normal distribution. We have already seen two good examples: SAT scores and the heights of US adult males. While using a normal model can be extremely convenient and helpful, it is important to remember normality is always an approximation. Testing the appropriateness of the normal assumption is a key step in many data analyses.

Example [normalExam40Perc] suggests the distribution of heights of US males is well approximated by the normal model. We are interested in proceeding under the assumption that the data are normally distributed, but first we must check to see if this is reasonable.

There are two visual methods for checking the assumption of normality, which can be implemented and interpreted quickly. The first is a simple histogram with the best fitting normal curve overlaid on the plot, as shown in the left panel of Figure [fcidMHeights]. The sample mean and standard deviation are used as the parameters of the best fitting normal curve. The closer this curve fits the histogram, the more reasonable the normal model assumption. Another more common method is examining a normal probability plot,[19] shown in the right panel of Figure [fcidMHeights]. The closer the points are to a perfect straight line, the more confident we can be that the data follow the normal model.

Fichier:Ch distributions/figures/fcidMHeights/fcidMHeights
caption A sample of 100 male heights. The observations are rounded to the nearest whole inch, explaining why the points appear to jump in increments in the normal probability plot.

Three data sets of 40, 100, and 400 samples were simulated from a normal distribution, and the histograms and normal probability plots of the data sets are shown in Figure [normalExamples]. These will provide a benchmark for what to look for in plots of real data. [normalExamplesExample]

Fichier:Ch distributions/figures/normalExamples/normalExamples
caption Histograms and normal probability plots for three simulated normal data sets; n = 40 (left), n = 100 (middle), n = 400 (right).

The left panels show the histogram (top) and normal probability plot (bottom) for the simulated data set with 40 observations. The data set is too small to really see clear structure in the histogram. The normal probability plot also reflects this, where there are some deviations from the line. We should expect deviations of this amount for such a small data set.

The middle panels show diagnostic plots for the data set with 100 simulated observations. The histogram shows more normality and the normal probability plot shows a better fit. While there are a few observations that deviate noticeably from the line, they are not particularly extreme.

The data set with 400 observations has a histogram that greatly resembles the normal distribution, while the normal probability plot is nearly a perfect straight line. Again in the normal probability plot there is one observation (the largest) that deviates slightly from the line. If that observation had deviated 3 times further from the line, it would be of greater importance in a real data set. Apparent outliers can occur in normally distributed data but they are rare.

Notice the histograms look more normal as the sample size increases, and the normal probability plot becomes straighter and more stable.
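
A normal probability plot like the ones described above can be produced with standard tools. A sketch assuming Python with NumPy, SciPy, and matplotlib (the simulated heights here are ours, for illustration only):

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

x = np.random.normal(loc=70, scale=3.3, size=100)  # simulated male heights, in inches

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.hist(x, bins=15)                       # histogram of the sample
stats.probplot(x, dist="norm", plot=ax2)   # normal probability (quantile-quantile) plot
plt.show()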

Are NBA player heights normally distributed? Consider all 435 NBA players from the 2008-9 season presented in Figure [nbaNormal].[20] We first create a histogram and normal probability plot of the NBA player heights. The histogram in the left panel is slightly left skewed, which contrasts with the symmetric normal distribution. The points in the normal probability plot do not appear to closely follow a straight line but show what appears to be a “wave”. We can compare these characteristics to the sample of 400 normally distributed observations in Example [normalExamplesExample] and see that they represent much stronger deviations from the normal model. NBA player heights do not appear to come from a normal distribution.

Fichier:Ch distributions/figures/nbaNormal/nbaNormal
caption Histogram and normal probability plot for the NBA heights from the 2008-9 season.

Can we approximate poker winnings by a normal distribution? We consider the poker winnings of an individual over 50 days. A histogram and normal probability plot of these data are shown in Figure [pokerNormal]. The data are very strongly right skewed in the histogram, which corresponds to the very strong deviations on the upper right component of the normal probability plot. If we compare these results to the sample of 40 normal observations in Example [normalExamplesExample], it is apparent that these data show very strong deviations from the normal model.

Fichier:Ch distributions/figures/pokerNormal/pokerNormal
caption A histogram of poker data with the best fitting normal plot and a normal probability plot.

[normalQuantileExercise] Determine which data sets represented in Figure [normalQuantileExer] plausibly come from a nearly normal distribution. Are you confident in all of your conclusions? There are 100 (top left), 50 (top right), 500 (bottom left), and 15 points (bottom right) in the four plots.[21]

Fichier:Ch distributions/figures/normalQuantileExer/normalQuantileExer
caption Four normal probability plots for Guided Practice [normalQuantileExercise].

[normalQuantileExerciseAdditional] Figure [normalQuantileExerAdditional] shows normal probability plots for two distributions that are skewed. One distribution is skewed to the low end (left skewed) and the other to the high end (right skewed). Which is which?[22]

Fichier:Ch distributions/figures/normalQuantileExer/normalQuantileExerAdditional
caption Normal probability plots for Guided Practice [normalQuantileExerciseAdditional].

Geometric distribution (special topic)

How long should we expect to flip a coin until it turns up heads? Or how many times should we expect to roll a die until we get a 1? These questions can be answered using the geometric distribution. We first formalize each trial – such as a single coin flip or die toss – using the Bernoulli distribution, and then we combine these with our tools from probability (Chapter [probability]) to construct the geometric distribution.

Bernoulli distribution

Stanley Milgram began a series of experiments in 1963 to estimate what proportion of people would willingly obey an authority and give severe shocks to a stranger. Milgram found that about 65% of people would obey the authority and give such shocks. Over the years, additional research suggested this number is approximately consistent across communities and time.[23]

Each person in Milgram’s experiment can be thought of as a trial. We label a person a success if she refuses to administer the worst shock. A person is labeled a failure if she administers the worst shock. Because only 35% of individuals refused to administer the most severe shock, we denote the probability of a success with p = 0.35. The probability of a failure is sometimes denoted with q = 1 − p.

Thus, success or failure is recorded for each person in the study. When an individual trial only has two possible outcomes, it is called a Bernoulli random variable.

A Bernoulli random variable has exactly two possible outcomes. We typically label one of these outcomes a “success” and the other outcome a “failure”. We may also denote a success by 1 and a failure by 0.

We chose to label a person who refuses to administer the worst shock a “success” and all others as “failures”. However, we could just as easily have reversed these labels. The mathematical framework we will build does not depend on which outcome is labeled a success and which a failure, as long as we are consistent.

Bernoulli random variables are often denoted as 1 for a success and 0 for a failure. In addition to being convenient in entering data, it is also mathematically handy. Suppose we observe ten trials:

0 1 1 1 1 0 1 1 0 0

Then the sample proportion, p̂, is the sample mean of these observations:

p̂ = (# of successes) / (# of trials) = (0 + 1 + 1 + 1 + 1 + 0 + 1 + 1 + 0 + 0) / 10 = 0.6

This mathematical inquiry of Bernoulli random variables can be extended even further. Because 0 and 1 are numerical outcomes, we can define the mean and standard deviation of a Bernoulli random variable.[24]

If X is a random variable that takes value 1 with probability of success p and 0 with probability 1 − p, then X is a Bernoulli random variable with mean and standard deviation

μ = p        σ = sqrt(p(1 − p))
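
As a quick illustration of these definitions, here is a sketch in plain Python using the ten trials above:

data = [0, 1, 1, 1, 1, 0, 1, 1, 0, 0]   # ten Bernoulli trials (1 = success)
p_hat = sum(data) / len(data)           # sample proportion: 0.6

p = 0.35                                # Bernoulli success probability
mu = p                                  # mean
sigma = (p * (1 - p)) ** 0.5            # standard deviation, about 0.477
print(p_hat, mu, sigma)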

In general, it is useful to think about a Bernoulli random variable as a random process with only two outcomes: a success or failure. Then we build our mathematical framework using the numerical labels 1 and 0 for successes and failures, respectively.

Geometric distribution

Dr. Smith wants to repeat Milgram’s experiments but she only wants to sample people until she finds someone who will not inflict the worst shock.[25] If the probability a person will not give the most severe shock is still 0.35 and the subjects are independent, what are the chances that she will stop the study after the first person? The second person? The third? What about if it takes her n − 1 individuals who will administer the worst shock before finding her first success, i.e. the first success is on the nth person? (If the first success is the fifth person, then we say n = 5.) [waitForShocker] The probability of stopping after the first person is just the chance the first person will not administer the worst shock: 0.35. The probability it will be the second person is P(first person administers the shock, second person refuses) = (0.65)(0.35) = 0.23. Likewise, the probability it will be the third person is (0.65)(0.65)(0.35) = 0.15.

If the first success is on the nth person, then there are n − 1 failures and finally 1 success, which corresponds to the probability (0.65)^(n−1) (0.35). This is the same as (1 − 0.35)^(n−1) (0.35).

Example [waitForShocker] illustrates what is called the geometric distribution, which describes the waiting time until a success for independent and identically distributed (iid) Bernoulli random variables. In this case, the independence aspect just means the individuals in the example don’t affect each other, and identical means they each have the same probability of success.

The geometric distribution from Example [waitForShocker] is shown in Figure [geometricDist35]. In general, the probabilities for a geometric distribution decrease fast.

Fichier:Ch distributions/figures/geometricDist35/geometricDist35
caption The geometric distribution when the probability of success is p = 0.35.

While this text will not derive the formulas for the mean (expected) number of trials needed to find the first success or the standard deviation or variance of this distribution, we present general formulas for each.

If the probability of a success in one trial is p and the probability of a failure is 1 − p, then the probability of finding the first success in the nth trial is given by (1 − p)^(n−1) p.

The mean (i.e. expected value), variance, and standard deviation of this wait time are given by

μ = 1/p        σ^2 = (1 − p)/p^2        σ = sqrt((1 − p)/p^2)        (geomFormulas)

It is no accident that we use the symbol μ for both the mean and expected value. The mean and the expected value are one and the same.

The left side of Equation ([geomFormulas]) says that, on average, it takes 1/p trials to get a success. This mathematical result is consistent with what we would expect intuitively. If the probability of a success is high (e.g. 0.8), then we don’t usually wait very long for a success: 1/0.8 = 1.25 trials on average. If the probability of a success is low (e.g. 0.1), then we would expect to view many trials before we see a success: 1/0.1 = 10 trials.

The probability that an individual would refuse to administer the worst shock is said to be about 0.35. If we were to examine individuals until we found one that did not administer the shock, how many people should we expect to check? The first expression in Equation ([geomFormulas]) may be useful.[26]

What is the chance that Dr. Smith will find the first success within the first 4 people? [marglimFirstSuccessIn4] This is the chance it is the first (n = 1), second (n = 2), third (n = 3), or fourth (n = 4) person as the first success, which are four disjoint outcomes. Because the individuals in the sample are randomly sampled from a large population, they are independent. We compute the probability of each case and add the separate results: P(n = 1, 2, 3, or 4) = 0.35 + (0.65)(0.35) + (0.65)^2(0.35) + (0.65)^3(0.35) = 0.82. There is an 82% chance that she will end the study within 4 people.
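
These geometric probabilities are also available in software. A sketch assuming SciPy, whose geom distribution counts the trial on which the first success occurs:

from scipy.stats import geom

print(geom.pmf(2, 0.35))  # P(first success on the 2nd person) = (0.65)(0.35), about 0.23
print(geom.cdf(4, 0.35))  # P(first success within 4 people), about 0.82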

Determine a more clever way to solve Example [marglimFirstSuccessIn4]. Show that you get the same result.[27]

Suppose in one region it was found that the proportion of people who would administer the worst shock was “only” 55%. If people were randomly selected from this region, what is the expected number of people who must be checked before one was found that would be deemed a success? What is the standard deviation of this waiting time? [onlyShocking55PercOfTheTimeExample] A success is when someone will not inflict the worst shock, which has probability p = 1 − 0.55 = 0.45 for this region. The expected number of people to be checked is 1/p = 1/0.45 = 2.22 and the standard deviation is sqrt((1 − p)/p^2) = 1.65.
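
The same mean and standard deviation follow from SciPy’s geometric distribution; a brief sketch:

from scipy.stats import geom

print(geom.mean(0.45))  # 1/p, about 2.22 people
print(geom.std(0.45))   # sqrt((1 - p)/p^2), about 1.65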

Using the results from Example [onlyShocking55PercOfTheTimeExample], μ = 2.22 and σ = 1.65, would it be appropriate to use the normal model to find what proportion of experiments would end in 3 or fewer trials?[28]

The independence assumption is crucial to the geometric distribution’s accurate description of a scenario. Mathematically, we can see that to construct the probability of the first success on the nth trial, we had to use the Multiplication Rule for Independent Processes. It is no simple task to generalize the geometric model for dependent trials.

Binomial distribution (special topic)

Suppose we randomly selected four individuals to participate in the “shock” study. What is the chance exactly one of them will be a success? Let’s call the four people Allen (A), Brittany (B), Caroline (C), and Damian (D) for convenience. Also, suppose 35% of people are successes as in the previous version of this example.[oneRefuser] Let’s consider a scenario where one person refuses:

P(A = refuse, B = shock, C = shock, D = shock)
= P(A = refuse) P(B = shock) P(C = shock) P(D = shock)
= (0.35)(0.65)(0.65)(0.65) = (0.35)^1 (0.65)^3 = 0.096

But there are three other scenarios: Brittany, Caroline, or Damian could have been the one to refuse. In each of these cases, the probability is again (0.35)^1 (0.65)^3. These four scenarios exhaust all the possible ways that exactly one of these four people could refuse to administer the most severe shock, so the total probability is 4 × (0.35)^1 (0.65)^3 = 0.38.

Verify that the scenario where Brittany is the only one to refuse to give the most severe shock has probability (0.35)^1 (0.65)^3.[29]

The binomial distribution

The scenario outlined in Example [oneRefuser] is a special case of what is called the binomial distribution. The binomial distribution describes the probability of having exactly k successes in n independent Bernoulli trials with probability of a success p (in Example [oneRefuser], k = 1, n = 4, p = 0.35). We would like to determine the probabilities associated with the binomial distribution more generally, i.e. we want a formula where we can use n, k, and p to obtain the probability. To do this, we reexamine each part of the example.

There were four individuals who could have been the one to refuse, and each of these four scenarios had the same probability. Thus, we could identify the final probability as

[# of scenarios] × P(single scenario)        (genBinomialFormula)

The first component of this equation is the number of ways to arrange the k = 1 successes among the n = 4 trials. The second component is the probability of any of the four (equally probable) scenarios.

Consider P(single scenario) under the general case of k successes and n − k failures in the n trials. In any such scenario, we apply the Multiplication Rule for independent events:

P(single scenario) = p^k (1 − p)^(n−k)

This is our general formula for P(single scenario).

Secondly, we introduce a general formula for the number of ways to choose k successes in n trials, i.e. arrange k successes and n − k failures:

(n choose k) = n! / (k!(n − k)!)

The quantity (n choose k) is read n choose k.[30] The exclamation point notation (e.g. k!) denotes a factorial expression. [factorialDefinitionInTheBinomialSection]

0! = 1        (zeroFactorial)
1! = 1
2! = 2 × 1 = 2
3! = 3 × 2 × 1 = 6
4! = 4 × 3 × 2 × 1 = 24
...
n! = n × (n − 1) × ... × 3 × 2 × 1

Using the formula, we can compute the number of ways to choose k = 1 successes in n = 4 trials:

(4 choose 1) = 4! / (1!(4 − 1)!) = 4! / (1! 3!) = 24 / ((1)(6)) = 4

This result is exactly what we found by carefully thinking of each possible scenario in Example [oneRefuser].

Substituting n choose k for the number of scenarios and p^k (1 − p)^(n−k) for the single scenario probability in Equation ([genBinomialFormula]) yields the general binomial formula.

Suppose the probability of a single trial being a success is p. Then the probability of observing exactly k successes in n independent trials is given by

(n choose k) p^k (1 − p)^(n−k) = [n! / (k!(n − k)!)] p^k (1 − p)^(n−k)        (binomialFormula)

Additionally, the mean, variance, and standard deviation of the number of observed successes are

μ = np        σ^2 = np(1 − p)        σ = sqrt(np(1 − p))        (binomialStats)

(1) The trials are independent.
(2) The number of trials, n, is fixed.
(3) Each trial outcome can be classified as a success or failure.
(4) The probability of a success, p, is the same for each trial.

What is the probability that 3 of 8 randomly selected students will refuse to administer the worst shock, i.e. 5 of 8 will administer it? We would like to apply the binomial model, so we check our conditions. The number of trials is fixed (n = 8) (condition 2) and each trial outcome can be classified as a success or failure (condition 3). Because the sample is random, the trials are independent (condition 1) and the probability of a success is the same for each trial (condition 4).

In the outcome of interest, there are k = 3 successes in n = 8 trials, and the probability of a success is p = 0.35. So the probability that 3 of 8 will refuse is given by

(8 choose 3) (0.35)^3 (0.65)^5 = [8! / (3! 5!)] (0.35)^3 (0.65)^5

Dealing with the factorial part: 8! / (3! 5!) = (8 × 7 × 6) / (3 × 2 × 1) = 56. Using (0.35)^3 (0.65)^5 ≈ 0.005, the final probability is about 56 × 0.005 = 0.28.

The first step in using the binomial model is to check that the model is appropriate. The second step is to identify , , and . The final step is to apply the formulas and interpret the results.

In general, it is useful to do some cancelation in the factorials immediately. Alternatively, many computer programs and calculators have built in functions to compute n choose k, factorials, and even entire binomial probabilities.
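
For instance, in Python the 3-of-8 calculation above can be reproduced as follows (a sketch assuming SciPy; math.comb is in the standard library):

import math
from scipy.stats import binom

print(math.comb(8, 3))            # 56 ways to arrange 3 successes among 8 trials
print(binom.pmf(3, n=8, p=0.35))  # P(exactly 3 of 8 refuse), about 0.28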

If you ran a study and randomly sampled 40 students, how many would you expect to refuse to administer the worst shock? What is the standard deviation of the number of people who would refuse? Equation ([binomialStats]) may be useful.[31]

The probability that a random smoker will develop a severe lung condition in his or her lifetime is about 0.3. If you have 4 friends who smoke, are the conditions for the binomial model satisfied?[32]

[noMoreThanOneFriendWSevereLungCondition]Suppose these four friends do not know each other and we can treat them as if they were a random sample from the population. Is the binomial model appropriate? What is the probability that (a) none of them will develop a severe lung condition? (b) One will develop a severe lung condition? (c) That no more than one will develop a severe lung condition?[33]

What is the probability that at least 2 of your 4 smoking friends will develop a severe lung condition in their lifetimes?[34]

Suppose you have 7 friends who are smokers and they can be treated as a random sample of smokers. (a) How many would you expect to develop a severe lung condition, i.e. what is the mean? (b) What is the probability that at most 2 of your 7 friends will develop a severe lung condition?[35]
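
Cumulative questions like part (b) map onto the binomial CDF; a sketch assuming SciPy:

from scipy.stats import binom

print(binom.cdf(2, n=7, p=0.3))  # P(at most 2 of 7 develop the condition), about 0.65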

Next we consider the first term in the binomial probability, n choose k, under some special scenarios.

Why is it true that (n choose 0) = 1 and (n choose n) = 1 for any number n?[36]

How many ways can you arrange one success and n − 1 failures in n trials? How many ways can you arrange n − 1 successes and one failure in n trials?[37]

Normal approximation to the binomial distribution

The binomial formula is cumbersome when the sample size (n) is large, particularly when we consider a range of observations. In some cases we may use the normal distribution as an easier and faster way to estimate binomial probabilities.

Approximately 20% of the US population smokes cigarettes. A local government believed their community had a lower smoker rate and commissioned a survey of 400 randomly selected individuals. The survey found that only 59 of the 400 participants smoke cigarettes. If the true proportion of smokers in the community was really 20%, what is the probability of observing 59 or fewer smokers in a sample of 400 people?[exactBinomialForN400P20SmokerExample] We leave the usual verification that the four conditions for the binomial model are valid as an exercise.

The question posed is equivalent to asking, what is the probability of observing k = 0, 1, ..., 58, or 59 smokers in a sample of n = 400 when p = 0.20? We can compute these 60 different probabilities and add them together to find the answer: P(k = 0 or k = 1 or ... or k = 59) = P(k = 0) + P(k = 1) + ... + P(k = 59) = 0.0041. If the true proportion of smokers in the community is p = 0.20, then the probability of observing 59 or fewer smokers in a sample of n = 400 is 0.0041.
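
The 60-term sum is exactly what the binomial CDF computes; a sketch assuming SciPy:

from scipy.stats import binom

print(binom.cdf(59, n=400, p=0.2))  # P(59 or fewer smokers); the text reports 0.0041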

The computations in Example [exactBinomialForN400P20SmokerExample] are tedious and long. In general, we should avoid such work if an alternative method exists that is faster, easier, and still accurate. Recall that calculating probabilities of a range of values is much easier in the normal model. We might wonder, is it reasonable to use the normal model in place of the binomial distribution? Surprisingly, yes, if certain conditions are met.

Here we consider the binomial model when the probability of a success is p = 0.10. Figure [fourBinomialModelsShowingApproxToNormal] shows four hollow histograms for simulated samples from the binomial distribution using four different sample sizes: n = 10, 30, 100, 300. What happens to the shape of the distributions as the sample size increases? What distribution does the last hollow histogram resemble?[38]

Fichier:Ch distributions/figures/fourBinomialModelsShowingApproxToNormal/fourBinomialModelsShowingApproxToNormal
caption Hollow histograms of samples from the binomial model when p = 0.10. The sample sizes for the four plots are n = 10, 30, 100, and 300, respectively.

The binomial distribution with probability of success p is nearly normal when the sample size n is sufficiently large that np and n(1 − p) are both at least 10. The approximate normal distribution has parameters corresponding to the mean and standard deviation of the binomial distribution: μ = np and σ = sqrt(np(1 − p)).

The normal approximation may be used when computing the range of many possible successes. For instance, we may apply the normal distribution to the setting of Example [exactBinomialForN400P20SmokerExample].

How can we use the normal approximation to estimate the probability of observing 59 or fewer smokers in a sample of 400, if the true proportion of smokers is p = 0.20? [approxBinomialForN400P20SmokerExample] Showing that the binomial model is reasonable was a suggested exercise in Example [exactBinomialForN400P20SmokerExample]. We also verify that both np and n(1 − p) are at least 10: np = 400 × 0.20 = 80 and n(1 − p) = 400 × 0.80 = 320. With these conditions checked, we may use the normal approximation in place of the binomial distribution using the mean and standard deviation from the binomial model: μ = np = 80 and σ = sqrt(np(1 − p)) = 8. We want to find the probability of observing fewer than 59 smokers using this model.

Use the normal model to estimate the probability of observing fewer than 59 smokers. Your answer should be approximately equal to the solution of Example [exactBinomialForN400P20SmokerExample]: 0.0041.[39]
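
For comparison, the normal-approximation answer of this guided practice can be computed directly; a sketch assuming SciPy:

from scipy.stats import norm

print(norm.cdf(59, loc=80, scale=8))  # about 0.0043, close to the exact binomial 0.0041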

The normal approximation breaks down on small intervals

The normal approximation may fail on small intervals
The normal approximation to the binomial distribution tends to perform poorly when estimating the probability of a small range of counts, even when the conditions are met.

Suppose we wanted to compute the probability of observing 69, 70, or 71 smokers in 400 when p = 0.20. With such a large sample, we might be tempted to apply the normal approximation and use the range 69 to 71. However, we would find that the binomial solution and the normal approximation notably differ. We can identify the cause of this discrepancy using Figure [normApproxToBinomFail], which shows the areas representing the binomial probability (outlined) and normal approximation (shaded). Notice that the width of the area under the normal distribution is 0.5 units too slim on both sides of the interval.

Fichier:Ch distributions/figures/normApproxToBinomFail/normApproxToBinomFail
caption A normal curve with the area between 69 and 71 shaded. The outlined area represents the exact binomial probability.

The normal approximation to the binomial distribution for intervals of values is usually improved if cutoff values are modified slightly. The cutoff values for the lower end of a shaded region should be reduced by 0.5, and the cutoff value for the upper end should be increased by 0.5.

The tip to add extra area when applying the normal approximation is most often useful when examining a range of observations. While it is possible to apply it when computing a tail area, the benefit of the modification usually disappears since the total interval is typically quite wide.
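
The effect of this 0.5 adjustment (often called a continuity correction) is easy to see numerically; a sketch assuming SciPy:

from scipy.stats import binom, norm

exact = binom.cdf(71, 400, 0.2) - binom.cdf(68, 400, 0.2)   # P(69 <= X <= 71), exact binomial
plain = norm.cdf(71, 80, 8) - norm.cdf(69, 80, 8)           # normal approximation, no adjustment
adjusted = norm.cdf(71.5, 80, 8) - norm.cdf(68.5, 80, 8)    # cutoffs widened by 0.5 on each side
print(exact, plain, adjusted)  # the adjusted value is much closer to the exact one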

More discrete distributions (special topic)

Negative binomial distribution

The geometric distribution describes the probability of observing the first success on the nth trial. The negative binomial distribution is more general: it describes the probability of observing the kth success on the nth trial.

Each day a high school football coach tells his star kicker, Brian, that he can go home after he successfully kicks four 35 yard field goals. Suppose we say each kick has a probability p of being successful. If p is small – e.g. close to 0.1 – would we expect Brian to need many attempts before he successfully kicks his fourth field goal? We are waiting for the fourth success (k = 4). If the probability of a success (p) is small, then the number of attempts (n) will probably be large. This means that Brian is more likely to need many attempts before he gets k = 4 successes. To put this another way, the probability of n being small is low.

To identify a negative binomial case, we check 4 conditions. The first three are common to the binomial distribution.[40]

(1) The trials are independent.
(2) Each trial outcome can be classified as a success or failure.
(3) The probability of a success (p) is the same for each trial.
(4) The last trial must be a success.

Suppose Brian is very diligent in his attempts and he makes each 35 yard field goal with probability p = 0.8. Take a guess at how many attempts he would need before making his fourth kick.[41]

In yesterday’s practice, it took Brian only 6 tries to get his fourth field goal. Write out each of the possible sequences of kicks. [eachSeqOfSixTriesToGetFourSuccesses] Because it took Brian six tries to get the fourth success, we know the last kick must have been a success. That leaves three successful kicks and two unsuccessful kicks (we label these as failures) that make up the first five attempts. There are ten possible sequences of these first five kicks, which are shown in Table [successFailureOrdersForBriansFieldGoals]. If Brian achieved his fourth success (k = 4) on his sixth attempt (n = 6), then his order of successes and failures must be one of these ten possible sequences.

            Kick attempt
         1   2   3   4   5   6
  1      F   F   S   S   S   S
  2      F   S   F   S   S   S
  3      F   S   S   F   S   S
  4      F   S   S   S   F   S
  5      S   F   F   S   S   S
  6      S   F   S   F   S   S
  7      S   F   S   S   F   S
  8      S   S   F   F   S   S
  9      S   S   F   S   F   S
 10      S   S   S   F   F   S


[probOfEachSeqOfSixTriesToGetFourSuccesses] Each sequence in Table [successFailureOrdersForBriansFieldGoals] has exactly two failures and four successes with the last attempt always being a success. If the probability of a success is p = 0.8, find the probability of the first sequence.[42]

If the probability Brian kicks a 35 yard field goal is p = 0.8, what is the probability it takes Brian exactly six tries to get his fourth successful kick? We can write this as

P(it takes Brian six tries to make four field goals)
= P(Brian makes three of his first five field goals, and he makes the sixth one)
= P(1st sequence OR 2nd sequence OR ... OR 10th sequence)

where the sequences are from Table [successFailureOrdersForBriansFieldGoals]. We can break down this last probability into the sum of ten disjoint possibilities:

P(1st sequence OR 2nd sequence OR ... OR 10th sequence)
= P(1st sequence) + P(2nd sequence) + ... + P(10th sequence)

The probability of the first sequence was identified in Guided Practice [probOfEachSeqOfSixTriesToGetFourSuccesses] as 0.0164, and each of the other sequences has the same probability. Since each of the ten sequences has the same probability, the total probability is ten times that of any individual sequence: 10 × 0.0164 = 0.164.

The way to compute this negative binomial probability is similar to how the binomial problems were solved in Section [binomialModel]. The probability is broken into two pieces: [# of possible sequences] × P(single sequence). Each part is examined separately, then we multiply to get the final result.

We first identify the probability of a single sequence. One particular case is to first observe all the failures (n − k of them) followed by the k successes:

P(single sequence) = P(n − k failures and then k successes) = (1 − p)^(n−k) p^k

We must also identify the number of sequences for the general case. Above, ten sequences were identified where the fourth success came on the sixth attempt. These sequences were identified by fixing the last observation as a success and looking for all the ways to arrange the other observations. In other words, how many ways could we arrange k − 1 successes in n − 1 trials? This can be found using the n choose k coefficient but for n − 1 and k − 1 instead:

(n − 1 choose k − 1) = (n − 1)! / [(k − 1)! ((n − 1) − (k − 1))!] = (n − 1)! / [(k − 1)! (n − k)!]

This is the number of different ways we can order k − 1 successes and n − k failures in n − 1 trials. If the factorial notation (the exclamation point) is unfamiliar, see the factorial expressions introduced in Section [binomialModel].

The negative binomial distribution describes the probability of observing the kth success on the nth trial:

P(the kth success on the nth trial) = (n − 1 choose k − 1) p^k (1 − p)^(n−k)        (negativeBinomialEquation)

where p is the probability an individual trial is a success. All trials are assumed to be independent.

Show using Equation ([negativeBinomialEquation]) that the probability Brian kicks his fourth successful field goal on the sixth attempt is 0.164. The probability of a single success is p = 0.8, the number of successes is k = 4, and the number of necessary attempts under this scenario is n = 6.

The negative binomial distribution requires that each kick attempt by Brian is independent. Do you think it is reasonable to suggest that Brian’s kick attempts are independent?[43]

Assume Brian’s kick attempts are independent. What is the probability that Brian will kick his fourth field goal within 5 attempts?[44]
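
Equation ([negativeBinomialEquation]) is simple to evaluate directly; a sketch in plain Python:

from math import comb

def neg_binom(n, k, p):
    # P(the kth success occurs on the nth trial)
    return comb(n - 1, k - 1) * p**k * (1 - p)**(n - k)

print(neg_binom(6, 4, 0.8))                         # fourth success on the sixth attempt: about 0.164
print(neg_binom(4, 4, 0.8) + neg_binom(5, 4, 0.8))  # fourth success within 5 attempts: about 0.74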

In the binomial case, we typically have a fixed number of trials and instead consider the number of successes. In the negative binomial case, we examine how many trials it takes to observe a fixed number of successes and require that the last observation be a success.

On 70% of days, a hospital admits at least one heart attack patient. On 30% of the days, no heart attack patients are admitted. Identify each case below as a binomial or negative binomial case, and compute the probability.[45]

(a) What is the probability the hospital will admit a heart attack patient on exactly three days this week?

(b) What is the probability the second day with a heart attack patient will be the fourth day of the week?

(c) What is the probability the fifth day of next month will be the first day with a heart attack patient?

Poisson distribution

There are about 8 million individuals in New York City. How many individuals might we expect to be hospitalized for acute myocardial infarction (AMI), i.e. a heart attack, each day? According to historical records, the average number is about 4.4 individuals. However, we would also like to know the approximate distribution of counts. What would a histogram of the number of AMI occurrences each day look like if we recorded the daily counts over an entire year? [amiIncidencesEachDayOver1YearInNYCExample] A histogram of the number of occurrences of AMI on 365 days for NYC is shown in Figure [amiIncidencesOver100Days].[46] The sample mean (4.38) is similar to the historical average of 4.4. The sample standard deviation is about 2, and the histogram indicates that about 70% of the data fall between 2.4 and 6.4. The distribution’s shape is unimodal and skewed to the right.

Fichier:Ch distributions/figures/amiIncidencesOver100Days/amiIncidencesOver100Days
caption A histogram of the number of occurrences of AMI on 365 separate days in NYC.

The Poisson distribution is often useful for estimating the number of events in a large population over a unit of time. For instance, consider each of the following events:

  • having a heart attack,
  • getting married, and
  • getting struck by lightning.

The Poisson distribution helps us describe the number of such events that will occur in a short unit of time for a fixed population if the individuals within the population are independent.

The histogram in Figure [amiIncidencesOver100Days] approximates a Poisson distribution with rate equal to 4.4. The rate for a Poisson distribution is the average number of occurrences in a mostly-fixed population per unit of time. In Example [amiIncidencesEachDayOver1YearInNYCExample], the time unit is a day, the population is all New York City residents, and the historical rate is 4.4. The parameter in the Poisson distribution is the rate – or how many events we expect to observe – and it is typically denoted by λ (the Greek letter lambda) or μ. Using the rate, we can describe the probability of observing exactly k events in a single unit of time.

Suppose we are watching for events and the number of observed events follows a Poisson distribution with rate λ. Then

P(observe k events) = λ^k e^(−λ) / k!

where k may take a value 0, 1, 2, and so on, and k! represents k-factorial, as described in Section [binomialModel]. The letter e ≈ 2.718 is the base of the natural logarithm. The mean and standard deviation of this distribution are λ and sqrt(λ), respectively.
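
The Poisson probabilities for the AMI example can be tabulated quickly; a sketch assuming SciPy:

from scipy.stats import poisson

rate = 4.4
for k in range(8):
    print(k, poisson.pmf(k, rate))            # P(exactly k AMI cases in a day)
print(poisson.mean(rate), poisson.std(rate))  # 4.4 and sqrt(4.4), about 2.1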

We will leave a rigorous set of conditions for the Poisson distribution to a later course. However, we offer a few simple guidelines that can be used for an initial evaluation of whether the Poisson model would be appropriate.

A random variable may follow a Poisson distribution if we are looking for the number of events, the population that generates such events is large, and the events occur independently of each other.

Even when events are not really independent – for instance, Saturdays and Sundays are especially popular for weddings – a Poisson model may sometimes still be reasonable if we allow it to have a different rate for different times. In the wedding example, the rate would be modeled as higher on weekends than on weekdays. The idea of modeling rates for a Poisson distribution against a second variable such as the day of the week forms the foundation of some more advanced methods that fall in the realm of generalized linear models. In Chapters [linRegrForTwoVar] and [multipleAndLogisticRegression], we will discuss a foundation of linear models.

  1. It is also introduced as the Gaussian distribution after Carl Friedrich Gauss, the first person to formalize its mathematical expression.
  2. (a) N(μ = 5, σ = 3). (b) N(μ = −100, σ = 10). (c) N(μ = 2, σ = 9).
  3. Z = (24 − 21)/5 = 0.6. Tom’s score is 0.6 standard deviations above the ACT mean.
  4. (a) Its Z-score is given by Z = (x − μ)/σ = (5.19 − 3)/2 = 1.095. (b) The observation x is 1.095 standard deviations above the mean. We know it must be above the mean since Z is positive.
  5. For x = 95.4 mm: Z = (95.4 − 92.6)/3.6 = 0.78. For x = 85.8 mm: Z = (85.8 − 92.6)/3.6 = −1.89.
  6. Because the absolute value of the Z-score for the second observation is larger than that of the first, the second observation has a more unusual head length.
  7. If 84% had lower scores than Ann, the proportion of people who had better scores must be 16%. (Generally ties are ignored when the normal model, or any other continuous distribution, is used.)
  8. We found the probability in Example [satAbove1630Exam]: 0.6664. A picture for this exercise is represented by the shaded area below “0.6664” in Example [satAbove1630Exam].
  9. If Edward did better than 37% of SAT takers, then about 63% must have done better than him.
    image
  10. Numerical answers: (a) 0.9772. (b) 0.0228.
  11. This sample was taken from the USDA Food Commodity Intake Database.
  12. First put the heights into inches: 67 and 76 inches. Figures are shown below. (a) Z = (67 − 70)/3.3 = −0.91 → 0.1814. (b) Z = (76 − 70)/3.3 = 1.82 → 0.9656.
    image
  13. Remember: draw a picture first, then find the Z-score. (We leave the pictures to you.) The Z-score can be found by using the percentiles and the normal probability table. (a) We look for 0.95 in the probability portion (middle part) of the normal probability table, which leads us to row 1.6 and (about) column 0.05, i.e. Z = 1.65. Knowing Z = 1.65, μ = 1500, and σ = 300, we set up the Z-score formula: 1.65 = (x − 1500)/300. We solve for x: x = 1995. (b) Similarly, we find Z = 1.96, again set up the Z-score formula for the heights, and calculate x = 76.5 inches.
  14. Numerical answers: (a) 0.1131. (b) 0.3821.
  15. This is an abbreviated solution. (Be sure to draw a figure!) First find the percent who get below 1500 and the percent that get above 2000: Z = 0.00 → 0.5000 (area below), Z = 1.67 → 0.0475 (area above). Final answer: 1.0000 − 0.5000 − 0.0475 = 0.4525.
  16. 5’5” is 65 inches. 5’7” is 67 inches. Numerical solution: 1.0000 − 0.0649 − 0.8183 = 0.1168, i.e. 11.68%.
  17. First draw the pictures. To find the area between Z = −1 and Z = 1, use the normal probability table to determine the areas below Z = −1 and above Z = 1. Next verify the area between Z = −1 and Z = 1 is about 0.68. Repeat this for Z = −2 to Z = 2 and also for Z = −3 to Z = 3.
  18. (a) 900 and 2100 represent two standard deviations above and below the mean, which means about 95% of test takers will score between 900 and 2100. (b) Since the normal model is symmetric, then half of the test takers from part (a) (95% / 2 = 47.5% of all test takers) will score 900 to 1500 while 47.5% score between 1500 and 2100.
  19. Also commonly called a quantile-quantile plot.
  20. These data were collected from .
  21. Answers may vary a little. The top-left plot shows some deviations in the smallest values in the data set; specifically, the left tail of the data set has some outliers we should be wary of. The top-right and bottom-left plots do not show any obvious or extreme deviations from the lines for their respective sample sizes, so a normal model would be reasonable for these data sets. The bottom-right plot has a consistent curvature that suggests it is not from the normal distribution. If we examine just the vertical coordinates of these observations, we see that there is a lot of data between -20 and 0, and then about five observations scattered between 0 and 70. This describes a distribution that has a strong right skew.
  22. Examine where the points fall along the vertical axis. In the first plot, most points are near the low end with fewer observations scattered along the high end; this describes a distribution that is skewed to the high end. The second plot shows the opposite features, and this distribution is skewed to the low end.
  23. Find further information on Milgram’s experiment at .
  24. If p is the true probability of a success, then the mean of a Bernoulli random variable X is given by μ = E[X] = P(X = 0) × 0 + P(X = 1) × 1 = (1 − p) × 0 + p × 1 = p. Similarly, the variance of X can be computed: σ^2 = P(X = 0)(0 − p)^2 + P(X = 1)(1 − p)^2 = (1 − p)p^2 + p(1 − p)^2 = p(1 − p). The standard deviation is σ = sqrt(p(1 − p)).
  25. This is hypothetical since, in reality, this sort of study probably would not be permitted any longer under current ethical standards.
  26. We would expect to see about 1/0.35 = 2.86 individuals to find the first success.
  27. First find the probability of the complement: P(no success in first 4 trials) = 0.65^4 = 0.18. Next, compute one minus this probability: 1 − 0.18 = 0.82.
  28. No. The geometric distribution is always right skewed and can never be well-approximated by the normal model.
  29. P(A = shock, B = refuse, C = shock, D = shock) = (0.65)(0.35)(0.65)(0.65) = (0.35)^1 (0.65)^3 = 0.096.
  30. Other notation for n choose k includes nCk, C(n, k), and C_k^n.
  31. We are asked to determine the expected number (the mean) and the standard deviation, both of which can be directly computed from the formulas in Equation ([binomialStats]): μ = np = 40 × 0.35 = 14 and σ = sqrt(np(1 − p)) = 3.02. Because very roughly 95% of observations fall within 2 standard deviations of the mean (see Section [variability]), we would probably observe at least 8 but less than 20 individuals in our sample who would refuse to administer the shock.
  32. One possible answer: if the friends know each other, then the independence assumption is probably not satisfied. For example, acquaintances may have similar smoking habits.
  33. To check if the binomial model is appropriate, we must verify the conditions. (i) Since we are supposing we can treat the friends as a random sample, they are independent. (ii) We have a fixed number of trials (n = 4). (iii) Each outcome is a success or failure. (iv) The probability of a success is the same for each trial since the individuals are like a random sample (p = 0.3 if we say a “success” is someone getting a lung condition, a morbid choice). Compute parts (a) and (b) from the binomial formula in Equation ([binomialFormula]): P(0) = (4 choose 0)(0.3)^0(0.7)^4 = 1 × 1 × 0.2401 = 0.2401, P(1) = (4 choose 1)(0.3)^1(0.7)^3 = 0.4116. Note: 0! = 1, as shown in Equation ([zeroFactorial]). Part (c) can be computed as the sum of parts (a) and (b): 0.2401 + 0.4116 = 0.6517. That is, there is about a 65% chance that no more than one of your four smoking friends will develop a severe lung condition.
  34. The complement (no more than one will develop a severe lung condition) was computed in Guided Practice [noMoreThanOneFriendWSevereLungCondition] as 0.6517, so we compute one minus this value: 0.3483.
  35. (a) μ = np = 7 × 0.3 = 2.1. (b) 0, 1, or 2 develop a severe lung condition: P(k = 0) + P(k = 1) + P(k = 2) ≈ 0.647.
  36. Frame these expressions into words. How many different ways are there to arrange 0 successes and n failures in n trials? (1 way.) How many different ways are there to arrange n successes and 0 failures in n trials? (1 way.)
  37. One success and n − 1 failures: there are exactly n unique places we can put the success, so there are n ways to arrange one success and n − 1 failures. A similar argument is used for the second question. Mathematically, we show these results by verifying the following two equations: (n choose 1) = n and (n choose n − 1) = n.
  38. The distribution is transformed from a blocky and skewed distribution into one that rather resembles the normal distribution in the last hollow histogram.
  39. Compute the Z-score first: Z = (59 − 80)/8 = −2.63. The corresponding left tail area is 0.0043.
  40. See a similar guide for the binomial distribution in Section [binomialModel].
  41. One possible answer: since he is likely to make each field goal attempt, it will take him at least 4 attempts but probably not more than 6 or 7.
  42. The first sequence: (0.2)(0.2)(0.8)(0.8)(0.8)(0.8) = 0.0164.
  43. Answers may vary. We cannot conclusively say they are or are not independent. However, many statistical reviews of athletic performance suggest such attempts are very nearly independent.
  44. If his fourth field goal (k = 4) is within five attempts, it either took him four or five tries (n = 4 or n = 5). We have p = 0.8 from earlier. Use Equation ([negativeBinomialEquation]) to compute the probability of n = 4 tries and n = 5 tries, then add those probabilities together: P(n = 4 or n = 5) = P(n = 4) + P(n = 5) = (0.8)^4 + (4 choose 3)(0.8)^4(0.2) = 0.41 + 0.33 = 0.74.
  45. In each part, p = 0.7. (a) The number of days is fixed, so this is binomial. The parameters are n = 7 and k = 3: 0.097. (b) The last “success” (admitting a heart attack patient) is fixed to the last day, so we should apply the negative binomial distribution. The parameters are k = 2, n = 4: 0.132. (c) This problem is negative binomial with k = 1 and n = 5: 0.006. Note that the negative binomial case when k = 1 is the same as using the geometric distribution.
  46. These data are simulated. In practice, we should check for an association between successive days.