• Sample Surveys : A sample survey is a particular type of observational study in which individuals report variables' values themselves, frequently by giving their opinions. Researchers have several options to choose from when deciding how to survey the individuals involved: in person, or via telephone, Internet, or mail.


  • The following issues in the design of sample surveys will be discussed:


    1. open vs. closed questions


    2. unbalanced response options


    3. leading questions


    4. planting ideas with questions


    5. complicated questions


    6. sensitive questions


  • Suppose you want to determine the musical preferences of all students at your university, based on a sample of students. In the Sampling section, we discussed various ways to obtain the sample, such as taking a simple random sample from all students at the university, then contacting the chosen subjects via email to request their responses and following up with a second email to those who did not respond the first time.


  • This method would help ensure a sample that is fairly representative of the entire population of students at the university, and it avoids the bias that might result from a flawed design such as a convenience sample or a volunteer sample.


  • However, even if we managed to select a representative sample for a survey, we are not yet home free: we must still compose the survey question itself so that the information we gather from the sampled students correctly represents what is true about their musical preferences. Let us consider some possibilities:


  • Question: "What is your favorite kind of music?" This is what we call an open question, which allows for almost unlimited responses. It may be difficult to make sense of all the possible categories and subcategories of music that survey respondents could come up with. Some may be more general than what you had in mind ("I like modern music the best") and others too specific ("I like Japanese alternative electronic rock by Cornelius"). Responses are much easier to handle if they come from a closed question:


  • Question: Which of these types of music do you prefer: classical, rock, pop, or hip-hop? What will happen if a respondent is asked the question as worded above, and he or she actually prefers jazz or folk music or gospel? He or she may pick a second-favorite from the options presented, or try to pencil in the real preference, or may just not respond at all. Whatever the outcome, it is likely that overall, the responses to the question posed in this way will not give us very accurate information about general music preferences. If a closed question is used, then great care should be taken to include all the reasonable options that are possible, including "not sure." Also, in case an option was overlooked, "other:___________" should be included for the sake of thoroughness.


  • Question: How do you feel about classical music? Circle one of these: I love it, I like it very much, I like it, I don't like it, I hate it. Notice that the options provided are rather "top-heavy," with three favorable options vs. two unfavorable. If someone feels somewhat neutral, they may opt for the middle choice, "I like it," and a summary of the survey's results would distort the respondents' true opinions.


  • Question: "Do you agree that classical music is the best type of music, because it has survived for centuries and is not only enjoyable, but also intellectually rewarding? (Answer yes or no.)" This sort of wording puts ideas in people's heads, urging them to report a particular opinion. One way to test for bias in a survey question is to ask yourself, "Just from reading the question, would a respondent have a good idea of what response the surveyor is hoping to elicit?" If the answer is yes, then the question should have been worded more neutrally.


  • Question: In the year 2002, there was much controversy over the fact that the Augusta National Golf Club, which hosts the Masters Golf Tournament each year, does not accept women as members. Defenders of the club created a survey that included the following statements. Respondents were supposed to indicate whether they agreed or disagreed with each statement:


    1. "The First Amendment of the U.S. Constitution applies to everyone regardless of gender, race, religion, age, profession, or point of view."


    2. "The First Amendment protects the right of individuals to create a private organization consisting of a specific group of people based on age, gender, race, ethnicity, or interest."


    3. "The First Amendment protects the right of organizations like the Boy Scouts, the Girl Scouts, and the National Association for the Advancement of Colored People to exist."


    4. "Individuals have a right to join a private group, club, or organization that consists of people who share the same interests and personal backgrounds as they do if they so desire."


    5. "Private organizations that are not funded by the government should be allowed to decide who becomes a member and who does not become a member on their own, without being forced to take input from other outside people or organizations."


  • Notice how the first and second statements steer people to favor the opinion that specialized groups may form private clubs. The third statement reminds people of organizations that are formed by groups on the basis of gender and race, setting the stage for them to agree with the fourth statement, which supports people's rights to join any private club. This in turn leads into the fifth statement, which focuses on a private organization's right to decide on its membership. As a group, the questions relentlessly steer respondents toward ultimately agreeing with the club's right to exclude women.


  • Question: "Do you agree or disagree with this statement: 'I don't go out of my way to listen to modern music unless there are elements of jazz, or else lyrics that are clear and make sense.'" Put yourself in the place of people who enjoy jazz and straightforward lyrics, but don't have an issue with music being "too modern," per se. The logic of the question (or lack thereof) may escape the respondents, and they would be too confused to supply an answer that correctly conveys their opinion. Clearly, simple questions are much better than complicated ones; rather than try to gauge opinions on several issues at once, complex survey questions like this should be broken down into shorter, more concise ones.


  • Question 1: "Have you eaten rutabagas in the past year?" If respondents answer no, then we have good reason to believe that they did not eat rutabagas in the past year.


  • Question 2: "Have you used illegal drugs in the past year?" If respondents answer no, then it is still a possibility that they did use illegal drugs, but didn't want to admit it.


  • Effective techniques for collecting accurate data on sensitive questions are a main area of inquiry in statistics. One simple method is randomized response, which allows individuals in the sample to answer anonymously, while the researcher still gains information about the population. This technique is best illustrated by an example.


  • Example : For the question, "Have you used illegal drugs in the past year?" respondents are told to flip a fair coin (in private) before answering, and then answer based on the result of the coin flip: if the coin lands "Heads," they should answer "Yes" (regardless of the truth); if it lands "Tails," they should answer truthfully. Thus, roughly half of the respondents are "truth-tellers," and the other half give the uncomfortable answer "Yes," without the interviewer knowing who is in which group. A respondent who flips "Tails" and answers truthfully knows that he or she cannot be distinguished from someone who got "Heads" in the coin toss. Hopefully, this is enough to encourage respondents to answer truthfully. As we will learn later in the course, the surveyor can then use probability methods to estimate the proportion of respondents who used illegal drugs, while being unable to identify which particular respondents did.
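  • The arithmetic behind this estimate can be sketched in a short simulation. The setup below is hypothetical (an assumed true proportion of 20% and a made-up sample size, not data from any real survey); it illustrates how solving P("Yes") = 0.5 + 0.5p for p recovers the sensitive proportion without identifying any individual:

```python
import random

random.seed(0)  # make the sketch reproducible

TRUE_P = 0.20   # assumed true proportion of drug users (hypothetical)
N = 10_000      # number of respondents (hypothetical)

def respond(uses_drugs: bool) -> str:
    """One randomized response: Heads -> forced 'Yes', Tails -> the truth."""
    if random.random() < 0.5:                 # coin flip came up Heads
        return "Yes"
    return "Yes" if uses_drugs else "No"      # Tails: answer truthfully

# Each respondent is a drug user with probability TRUE_P:
answers = [respond(random.random() < TRUE_P) for _ in range(N)]
yes_rate = answers.count("Yes") / N

# P(Yes) = 0.5 * 1 + 0.5 * p, so p = 2 * P(Yes) - 1:
estimated_p = 2 * yes_rate - 1
print(f"Observed 'Yes' rate: {yes_rate:.3f}")
print(f"Estimated proportion of drug users: {estimated_p:.3f}")
```

Note that the estimate comes entirely from the overall "Yes" rate; no individual answer reveals anything, since any "Yes" may simply mean the coin came up Heads.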


  • Besides using the randomized response method, surveyors may encourage honest answers from respondents in various other ways. Tactful wording of questions can be very helpful. Giving people a feeling of anonymity by having them complete questionnaires via computer, rather than paper and pencil, is another commonly used technique.


  • Summary : A sample survey is a type of observational study in which respondents report variables' values themselves (often by giving an opinion).


    1. Open questions are less restrictive, but responses are more difficult to summarize.


    2. Closed questions may be biased by the options provided.


    3. Closed questions should permit options such as "other:______" and/or "not sure" if those options may apply.


    4. Questions should be worded neutrally.


    5. Earlier questions should not deliberately influence responses to later questions.


    6. Questions shouldn't be confusing or complicated.


  • Survey method and questions should be carefully designed to elicit honest responses if there are sensitive issues involved.






  • In an experiment, it is the researchers who assign values of the explanatory variable to the participants. The key to ensuring that individuals differ only with respect to explanatory values—which is also the key to establishing causation—lies in the way this assignment is carried out.


  • In our discussion of the distinction between observational studies and experiments, we described the following experiment: collect a representative sample of 1,000 individuals from the population of smokers who are just now trying to quit, divide the sample into 4 groups of 250, and instruct each group to use a different method to quit. One year later, contact the same 1,000 individuals and determine whose attempts to quit with the assigned method succeeded.


  • This was an experiment, because the researchers themselves determined the values of the explanatory variable of interest for the individuals studied, rather than letting them choose.


  • We will begin by using the context of this smoking cessation example to illustrate the specialized vocabulary of experiments. First of all, the explanatory variable, or factor, in this case is the method used to quit. The different imposed values of the explanatory variable, or treatments (common abbreviation: ttt), consist of the four possible quitting methods.


  • The groups receiving different treatments are called treatment groups. The group that tries to quit without drugs or therapy could be called the control group—those individuals on whom no specific treatment was imposed. Ideally, the subjects (human participants in an experiment) in each treatment group differ from those in the other treatment groups only with respect to the treatment (quitting method).


  • As mentioned in our discussion of why lurking variables prevent us from establishing causation in observational studies, eliminating all other differences among treatment groups will be the key to asserting causation via an experiment. How can this be accomplished?


  • Randomized Controlled Experiments : Your intuition may already tell you, correctly, that random assignment to treatments is the best way to prevent treatment groups of individuals from differing from each other in ways other than the treatment assigned.


  • Either computer software or a table of random digits can be used to accomplish the random assignment. The resulting design is called a randomized controlled experiment, because researchers control values of the explanatory variable with a randomization procedure. Under random assignment, the groups should not differ significantly with respect to any potential lurking variable. Then, if we see a relationship between the explanatory and response variables, we have evidence that it is a causal one.
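  • Carrying out random assignment with software is straightforward. The sketch below applies it to the smoking-cessation example; the subject IDs and the four treatment names are hypothetical placeholders, since the source does not name the quitting methods:

```python
import random

random.seed(42)  # reproducible assignment

# Hypothetical subject IDs for the 1,000 smokers in the sample:
subjects = [f"subject_{i:04d}" for i in range(1, 1001)]

# Hypothetical names for the four quitting methods (treatments):
treatments = ["drug therapy", "nicotine patch", "counseling", "no treatment"]

random.shuffle(subjects)                      # randomize the subject order
group_size = len(subjects) // len(treatments)  # 1000 // 4 = 250

# Give each treatment a consecutive block of the shuffled list:
assignment = {
    treatment: subjects[i * group_size:(i + 1) * group_size]
    for i, treatment in enumerate(treatments)
}

for treatment, group in assignment.items():
    print(f"{treatment}: {len(group)} subjects")
```

Because the list is shuffled before it is split, which subject lands in which treatment group is determined purely by chance, which is exactly what prevents the groups from differing systematically on lurking variables.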




  • A visual representation of the experimental study: a large circle represents the entire population. Through random selection we generate the sample, represented as a smaller circle. The circle representing the sample is divided evenly into 4 pieces, each representing one value of the explanatory variable (quitting method). These pieces are the treatment groups, randomly assigned by the researchers.


  • Comment : Note that in a randomized controlled experiment, a randomization procedure may be used in two phases. First, a sample of subjects is collected. Ideally it would be a random sample, so that it is fairly representative of the entire population.


  • (Comment: often researchers have no choice but to recruit volunteers. Using volunteers may help to offset one of the drawbacks to experimentation which will be discussed later, namely the problem of noncompliance.) Second, we assign individuals randomly to the treatment groups to ensure that the only difference between them will be due to the treatment and we can get evidence of causation. At this stage, randomization is vital.


  • Inclusion of a Control Group : A common misconception is that an experiment must include a control group of individuals receiving no treatment. There may be situations where a complete lack of treatment is not an option, or where including a control group is ethically questionable, or where researchers explore the effects of a treatment without making a comparison. Here are a few examples:


  • Example : If doctors want to conduct an experiment to determine whether Prograf or Cyclosporin is more effective as an immunosuppressant, they could randomly assign transplant patients to take one or the other of the drugs. It would, of course, be unethical to include a control group of patients not receiving any immunosuppressants.


  • Example : Recently, experiments have been conducted in which the treatment is a highly invasive brain surgery. The only way to have a legitimate control group in this case is to randomly assign half of the subjects to undergo the entire surgery except for the actual treatment component (inserting stem cells into the brain). This, of course, is also ethically problematic (but, believe it or not, is being done).


  • Example : There may even be an experiment designed with only a single treatment. For example, makers of a new hair product may ask a sample of individuals to treat their hair with that product over a period of several weeks, then assess how manageable their hair has become. Such a design is clearly flawed because of the absence of a comparison group, but it is still an experiment because use of the product has been imposed by its manufacturers, rather than chosen naturally by the individuals. A flawed experiment is nevertheless an experiment.


  • Comment: The word control is used in at least three different senses. In the context of observational studies, we control for a confounding variable by separating it out. Referring to an experiment as a controlled experiment stresses that the values of the experiment's explanatory variables (factors) have been assigned by researchers, as opposed to having occurred naturally. In the context of experiments, the control group consists of subjects who do not receive a treatment, but who are otherwise handled identically to those who do receive the treatment.



Scenario: New Software Versions


  • A local internet service provider (ISP) created two new versions of its software, with alternative ways of implementing a new feature. To find the product that would lead to the highest satisfaction among customers, the ISP conducted an experiment comparing users' preferences for the two new versions versus the existing software.


  • The ISP ideally wants to find out which of the three software products causes the highest user satisfaction. It has identified three major potential lurking variables that might affect user satisfaction—gender, age, and hours per week of computer use.


  • The dataset contains the values of the three possible lurking variables:


  • age: in years


  • gender: female or male


  • comp: hours per week of computer use


  • The ISP randomly selects 450 subjects and assigns them to treatment groups, one for each of the three versions of the ISP's software.
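  • If the random assignment has done its job, the three treatment groups should look similar with respect to age, gender, and hours of computer use. The sketch below simulates hypothetical data for the 450 subjects (the variable ranges are invented for illustration) and checks that balance:

```python
import random
import statistics

random.seed(7)  # reproducible sketch

# Hypothetical data for the 450 subjects, with the three potential
# lurking variables identified by the ISP:
subjects = [
    {
        "age": random.randint(18, 70),            # years (assumed range)
        "gender": random.choice(["female", "male"]),
        "comp": random.randint(1, 60),            # hours/week (assumed range)
    }
    for _ in range(450)
]

random.shuffle(subjects)                # random assignment...
groups = [subjects[i::3] for i in range(3)]  # ...into 3 groups of 150

# Summarize each lurking variable within each treatment group:
for k, group in enumerate(groups, start=1):
    mean_age = statistics.mean(s["age"] for s in group)
    mean_comp = statistics.mean(s["comp"] for s in group)
    pct_female = sum(s["gender"] == "female" for s in group) / len(group)
    print(f"Group {k}: mean age {mean_age:.1f}, "
          f"mean computer hours {mean_comp:.1f}, female {pct_female:.0%}")
```

The group summaries should come out close to one another, which is the point of randomization: any remaining difference in satisfaction between the three software versions can then be attributed to the software itself rather than to age, gender, or computer use.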




  1. Explanation :
    Indeed, in this scenario we'll never be able to know whether the high level of satisfaction with the current product is because users really like it, or because of the age of those who choose to use it, because older people will tend to choose it, and they usually have a high level of satisfaction of any product.



  1. Explanation :
    Indeed, in this scenario, we'll never be able to know whether the high level of satisfaction with the new software is because users really like it, or because of the gender of those who choose to use it, because males will tend to use this software due to its very technical name, and males also usually have high levels of satisfaction.



  1. Explanation :
    Indeed, in this scenario, we'll never know whether the low level of satisfaction with this new software is because users really did not like it, or because those who choose it spend a lot of time on the computer, because people who tend to spend a lot of time on the computer will tend to choose this software due to its name, and those people also tend to have low levels of satisfaction.