**Purpose of Statistics Package Exercises:** The Probability & Statistics course focuses on the processes you use to convert data into useful information. This involves:

- Collecting data,
- Summarizing data, and
- Interpreting data.

In addition to applying these processes, you can learn how to use statistical software packages to help manage, summarize, and interpret data. The statistics package exercises included throughout the course give you the opportunity to explore a dataset and answer questions based on the output. In each exercise, you can choose to view instructions for completing the activity in R, StatCrunch, TI Calculator, Minitab, or Excel, depending on which statistics package you prefer to use.

The statistics package exercises are an extension of activities already embedded in the course and require you to use a statistics package to generate output and answer a different set of questions.

**To Download R:** To download R, a free software environment for statistical computing and graphics, go to https://www.r-project.org/ and follow the instructions provided.

**Using R:** Throughout the statistics package exercises, you will be given commands to execute in R. You can use the following steps to avoid having to type all of these commands in by hand:

1. Highlight the command with your mouse.
2. On the browser menu, click "Edit," then "Copy."
3. Click on the R command window, then at the top of the R window, click "Edit," then "Paste."
4. You may have to press Enter to execute the command.

**R Version:** The R instructions are current through version 3.2.5, released on April 14, 2016. Instructions in these statistics package exercises may not work with newer releases of R.

For help with installing R for Mac OS X or Windows, click here.

The purpose of this activity is to give you guided practice in checking whether the conditions that allow us to use the two-sample t-test are met.

**Background:** A researcher wanted to study whether or not men and women differ in the amount of time they watch TV during a week. In each of the following cases, you'll have to decide whether we can use the two-sample t-test to test this claim or not.

**1. A random sample of 400 adults was chosen** (191 women and 209 men). At the end of the week, each of the 400 subjects reported the total amount of time (in minutes) that he or she watched TV during that week.

**R Instructions**

**If you feel** that you need to look at the two samples using histograms, you can open R with the data set preloaded by right-clicking here and choosing "Save Target As" to download the file to your computer. Then find the downloaded file and double-click it to open it in R.

**The data have been loaded into the data frame tv2.** The two variables in the data frame are time.men and time.women.

**Create two histograms** to view the men's and women's data by modifying the following commands to add appropriate labels/titles:

```r
hist(tv2$time.men)
hist(tv2$time.women)
```
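The tv2 data frame is only available from the downloadable exercise file, so the sketch below simulates stand-in samples (the variable names and sample sizes follow the exercise; the simulated minutes are hypothetical) just to show how the labeling arguments to hist() work:

```r
# Sketch only: tv2 normally comes preloaded from the exercise file.
# Here we simulate hypothetical stand-in data with the stated sample
# sizes (209 men, 191 women) so the example runs on its own.
set.seed(1)
tv2 <- list(time.men   = rnorm(209, mean = 800, sd = 150),
            time.women = rnorm(191, mean = 820, sd = 150))

# Add a title and axis label to each histogram, as the exercise asks:
hist(tv2$time.men,
     main = "Weekly TV Time, Men",
     xlab = "Minutes per week")
hist(tv2$time.women,
     main = "Weekly TV Time, Women",
     xlab = "Minutes per week")
```

With the real data frame loaded, you would skip the simulation lines and run only the two hist() calls.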

**Explanation :**(i) Since the 400 subjects were chosen at random, we can assume that the two samples are independent. (ii) Since the sample sizes (191 and 209) are large, we can proceed with the two-sample t-test regardless of whether the populations are normal or not (and, thus, there is no need to look at the data using a histogram). In conclusion, we can reliably use the two-sample t-test in this case.

**2. A random sample of 50 married couples was chosen**, which was split into a sample of 50 men and a sample of 50 women. At the end of the week, each of the 100 subjects reported the total amount of time (in minutes) that he or she watched TV during that week.

**R Instructions**

**If you feel** that you need to look at the two samples using histograms, you can open R with the data set preloaded by right-clicking here and choosing "Save Target As" to download the file to your computer. Then find the downloaded file and double-click it to open it in R.

**The data have been loaded into the data frame tv4.** The two variables in the data frame are time.men and time.women.

**Create two histograms** to view the men's and women's data by modifying the following commands to add appropriate labels/titles:

```r
hist(tv4$time.men)
hist(tv4$time.women)
```

**Explanation :**(i) This is a case where the two samples are not independent. Since each subject in one sample is linked (by marriage) to a subject in the other sample, these samples are dependent. The two-sample t-test is therefore not appropriate in this case.

The purpose of this activity is to give you guided practice in carrying out the two-sample t-test, and to show you how to use software to aid in the process.

**Background:** A study was conducted at a large state university in order to compare the sleeping habits of undergraduate students to those of graduate students. Random samples of 75 undergraduate students and 50 graduate students were chosen, and each of the subjects was asked to report the number of hours he or she sleeps in a typical day. The thought was that since undergraduate students are generally younger and party more during their years in school, they sleep less, on average, than graduate students. Do the data support this hypothesis? The following figure summarizes the problem.

**Note that we defined:**

- μ1: the mean number of hours undergraduate students sleep in a typical day
- μ2: the mean number of hours graduate students sleep in a typical day

**Comment:** Before we move on to carry out the test, it is important to realize that in the two-sample problem, the data can be provided in three possible ways:

**(i) Sample data in one column**, and another column that indicates which sample the observation belongs to. Recall that this is the way the data were given in our leading example (looks vs. personality score and gender). Note that, essentially, one column contains the explanatory variable, and one contains the response.

**(ii) Sample data in different columns:** data from each of the two samples appear in a column dedicated to that sample. As you'll see, this is the way the data are provided in this example.

**(iii) Summarized data:** we are not given the actual data, but just the data summaries: sample sizes, sample means, and sample standard deviations of both samples. Recall that in our second example, the data were given in this format.

**R Instructions**

**To carry out the test**, open R with the data set preloaded by right-clicking here and choosing "Save Target As" to download the file to your computer. Then find the downloaded file and double-click it to open it in R.

**The data have been loaded into the data frame sleep.** The two variables in the data frame are undergraduate and graduate.

**To carry out the t-test**, enter the command:

```r
t.test(sleep$undergraduate, sleep$graduate, alternative = "less")
```

**Note:** Using R, when we used t.test() for a one-sample t-test in a previous activity, we specified a one-sample data set, a hypothetical mean, and an alternative hypothesis.

**To perform a two-sample t-test**, we use the same command, t.test(), but specify two sample data sets and an alternative hypothesis.

**If the data set were structured** so that the sample data were in one column (called sleep) alongside another column indicating which sample each observation belongs to (called student.type), then the command would be (where sleep.data stands for whatever name that data frame has):

```r
t.test(sleep ~ student.type, data = sleep.data, alternative = "less")
```
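The two call patterns above can be tried out on simulated stand-in data (the values below are hypothetical, so the resulting t and p-value will not match the exercise output; sleep.wide and sleep.long are illustrative names). One caution worth seeing in action: with the formula interface, R orders the groups alphabetically, so you should set the factor levels explicitly to keep the direction of the one-sided alternative correct.

```r
# Hypothetical stand-in data with the stated sample sizes (75 and 50).
set.seed(2)
undergraduate <- rnorm(75, mean = 6.8, sd = 1.2)  # hours of sleep
graduate      <- rnorm(50, mean = 7.1, sd = 1.1)

# (ii) Samples in different columns: pass the two samples directly.
sleep.wide <- list(undergraduate = undergraduate, graduate = graduate)
res <- t.test(sleep.wide$undergraduate, sleep.wide$graduate,
              alternative = "less")
res$statistic  # the t test statistic
res$p.value    # the one-sided p-value

# (i) Stacked layout: one response column plus a grouping column.
sleep.long <- data.frame(
  hours        = c(undergraduate, graduate),
  student.type = rep(c("undergraduate", "graduate"), times = c(75, 50))
)
# Set levels so "less" tests mean(undergraduate) < mean(graduate);
# otherwise R orders the groups alphabetically (graduate first).
sleep.long$student.type <- factor(sleep.long$student.type,
                                  levels = c("undergraduate", "graduate"))
t.test(hours ~ student.type, data = sleep.long, alternative = "less")
```

Both calls run the same Welch two-sample t-test; only the shape of the input differs.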

**Explanation:** The test statistic is t = -1.2304 and the p-value is 0.1106.

**Explanation :**The p-value is not small (in particular, it is larger than 0.05), indicating that it is still reasonably likely (probability 0.111) to get data like those observed, or even more extreme data, under the null hypothesis (i.e., assuming that undergraduate and graduate students have the same mean sleeping hours). Therefore, the data do not provide evidence to reject Ho, and we cannot conclude that undergraduate students sleep less, on average, than graduate students.

The purpose of this activity is to give you guided practice in carrying out the paired t-test and to teach you how to obtain the paired t-test output using statistical software. Here is some background for the historically important data that we are going to work with in this activity.

**Background: Gosset's Seed Plot Data**

**William S. Gosset** was employed by the Guinness brewing company of Dublin. Sample sizes available for experimentation in brewing were necessarily small, and new techniques for handling the resulting data were needed. Gosset consulted Karl Pearson (1857–1936) of University College London, who told him that the current state of knowledge was unsatisfactory. Gosset undertook a course of study under Pearson, and the outcome of his study was perhaps the most famous paper in statistical literature, "The Probable Error of a Mean" (1908), which introduced the t distribution.

**Since Gosset was contractually bound by Guinness**, he published under the pseudonym "Student"; hence the t distribution is often referred to as Student's t distribution.

**As an example** to illustrate his analysis, Gosset reported in his paper the results of seeding 11 different plots of land with two different types of seed: regular and kiln-dried. There is reason to believe that drying seeds before planting will increase plant yield. Since different plots of soil may be naturally more fertile, this confounding variable was eliminated by using the matched pairs design and planting both types of seed in all 11 plots.

**The resulting data** (corn yield in pounds per acre) are as follows:

**We are going** to use these data to test the hypothesis that kiln-dried seed yields more corn than regular seed. Here is a figure that summarizes this problem.

**Because of the nature** of the experimental design (matched pairs), we are testing the difference in yield.

**Note that the differences** were calculated: regular − kiln-dried.

**R Instructions**

**To open R** with the data set preloaded, right-click here and choose "Save Target As" to download the file to your computer. Then find the downloaded file and double-click it to open it in R.

**The data have been loaded into the data frame seed.** Enter the command

```r
seed
```

**to see the data.** The variables in the data frame are regular.seed and kiln.dried.seed.

**To carry out the paired t-test**, use the following command:

```r
t.test(seed$regular.seed, seed$kiln.dried.seed, alternative = "less", paired = TRUE)
```
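For reference, the snippet below reproduces the test using the yield figures commonly reprinted from Gosset's 1908 paper; verify them against the seed data frame in the exercise file before relying on them:

```r
# Gosset's corn-yield data (pounds per acre) as commonly reprinted from
# Student (1908); check these values against the exercise's seed file.
seed <- data.frame(
  regular.seed    = c(1903, 1935, 1910, 2496, 2108, 1961,
                      2060, 1444, 1612, 1316, 1511),
  kiln.dried.seed = c(2009, 1915, 2011, 2463, 2180, 1925,
                      2122, 1482, 1542, 1443, 1535)
)

# Paired t-test of H0: mu_d = 0 vs. Ha: mu_d < 0,
# where d = regular - kiln-dried.
res <- t.test(seed$regular.seed, seed$kiln.dried.seed,
              alternative = "less", paired = TRUE)
round(unname(res$statistic), 2)  # -1.69
round(res$p.value, 3)            # about .06
```

The order of the two arguments matters: it fixes the sign of the differences (regular − kiln-dried), and hence the direction that alternative = "less" tests.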

**The mean of the differences**is provided in the output in addition to the other pertinent information. Notice that the order of the variables indicates the order of the difference calculation (position 1 - position 2).

**Explanation:** The test statistic is t = -1.69 and the p-value is .061, indicating that there is a 6.1% chance of obtaining data like those observed (or even more extreme in favor of the alternative hypothesis) had there really been no difference between regular and kiln-dried seeds (as the null hypothesis claims). Even though the p-value is quite small, it is not small enough if we use a significance level (cut-off probability) of .05. This means that even though the data show some evidence against the null hypothesis, it isn't quite strong enough to reject it. We therefore conclude that the data do not provide enough evidence that kiln-dried seeds yield more corn than regular seeds.

**Comment:** While it is true that at the .05 significance level our p-value is not small enough to reject Ho, it is "almost small enough." In other words, this is a "borderline case" where personal interpretation and/or judgment is in order. You can stick to the .05 cut-off as we did above in our conclusion, but you might decide that .061 is small enough for you, and that the evidence the data provide is strong enough to believe that kiln-dried seeds do indeed yield more corn. This is the beauty of statistics: there is no "black or white," and there is a lot of room for personal interpretation.

The purpose of this activity is to give you guided practice in carrying out the ANOVA F-test and to teach you how to obtain the ANOVA F-test's output using statistical software.

**Background: Critical Flicker Frequency (CFF) and Eye Color**

There is various flickering light in our environment; for instance, light from computer screens and fluorescent bulbs. If the frequency of the flicker is below a certain threshold, the flicker can be detected by the eye. Different people have slightly different flicker "threshold" frequencies (known as the critical flicker frequency, or CFF). Knowing the critical threshold frequency below which flicker is detected can be important for product manufacturing as well as for tests of ocular disease. Do people with different eye colors have different threshold flicker sensitivities? A 1973 study ("The Effect of Iris Color on Critical Flicker Frequency," Journal of General Psychology [1973], 91–95) obtained the following data from a random sample of 19 subjects.

**Do these data suggest that people** with different eye color have different threshold sensitivity to flickering light? In other words, do the data suggest that threshold sensitivity to flickering light is related to eye color?

**Comment:** We recommend that before starting, you create for yourself a figure that summarizes this problem, similar to the figures that we presented for the examples used in this part.

**R Instructions**

**To open R with the data set preloaded**, right-click here and choose "Save Target As" to download the file to your computer. Then find the downloaded file and double-click it to open it in R.

**The data have been loaded into the data frame flicker.** Enter the command

```r
flicker
```

**to see the data.** The two variables in the data frame are color and cff.

**Now use R to create side-by-side** boxplots of CFF by eye color, and supplement them with the descriptive statistics of CFF by eye color. Use the output to check whether the conditions that allow us to safely use the ANOVA F-test are met.

**To do this in R**, enter the commands:

```r
boxplot(flicker$cff ~ flicker$color, xlab = "Eye Color", ylab = "CFF")
tapply(flicker$cff, flicker$color, mean)
tapply(flicker$cff, flicker$color, sd)
```

**Explanation:** Let's check the conditions: (i) We are told that the sample was chosen at random, so the three eye-color samples are independent. (ii) The sample sizes are quite small, but the boxplots do not display any extreme violation of the normality assumption in the form of extreme skewness or outliers. (iii) We can assume that the equal-population-standard-deviation condition is met, since the rule of thumb is satisfied: the ratio of the largest to the smallest sample standard deviation, 1.843 / 1.365 ≈ 1.35, is less than 2. In summary, we can safely proceed with the ANOVA F-test.

**R Instructions**

**For the next question**, we need to carry out the ANOVA F-test using R. To do this, we use the aov() command. Similar to the lm() command, the aov() command produces more output than we need, so we will save the output to a variable name and then use other commands to extract the information of interest. We choose here a generic name, cff.aov, but any name would work as long as it is used consistently throughout the code:

```r
cff.aov = aov(cff ~ color, data = flicker)
```

**Now we can extract the ANOVA table** from cff.aov using either summary() or anova(), which both return the same result:

```r
summary(cff.aov)
anova(cff.aov)
```
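As a self-contained sketch, the snippet below rebuilds the analysis from the CFF values commonly reprinted from the 1973 study (verify them against the flicker data frame in the exercise file); it reproduces both the equal-standard-deviation rule of thumb and the F-test:

```r
# CFF (cycles per second) by eye color, as commonly reprinted from the
# 1973 study; check these against the exercise's flicker file.
flicker <- data.frame(
  color = rep(c("Brown", "Green", "Blue"), times = c(8, 5, 6)),
  cff   = c(26.8, 27.9, 23.7, 25.0, 26.3, 24.8, 25.7, 24.5,  # Brown
            26.4, 24.2, 28.0, 26.9, 29.1,                     # Green
            25.7, 27.2, 29.9, 28.5, 29.4, 28.3)               # Blue
)

# Rule of thumb: largest group sd / smallest group sd should be below 2.
sds <- tapply(flicker$cff, flicker$color, sd)
max(sds) / min(sds)  # about 1.35, so the condition is met

# ANOVA F-test of H0: the three population mean CFFs are all equal.
cff.aov <- aov(cff ~ color, data = flicker)
summary(cff.aov)  # F is about 4.8 with a p-value of about .023
```

The F statistic and p-value printed by summary() match the values discussed in this activity's explanation.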

**Note:** For more advanced analysis of the assumptions for ANOVA, we can use functions such as:

```r
plot(cff.aov, 1)    # residuals vs. fitted values plot
plot(cff.aov, 2)    # normal Q-Q plot of the residuals
residuals(cff.aov)  # extract the residuals themselves
```

**Explanation :**The test statistic F is 4.8 (which is quite large), and the p-value is .023, indicating that it is unlikely (probability of .023) to get data like those observed assuming that CFF is not related to eye color (as the null hypothesis claims). Since the p-value is small (in particular, smaller than .05), we have enough evidence in the data to reject Ho and conclude that the mean CFFs in the three eye-color populations are not all the same. In other words, we can conclude that CFF is related to eye color.