Asking the same questions at different points in time allows us to report on changes in the overall views of the general public or a subset of the public, such as registered voters, men or African Americans. When measuring change over time, it is important to use the same question wording and to be sensitive to where the question is asked in the questionnaire, to maintain a context similar to when the question was asked previously (see question wording and question order for further information).
All of our survey reports include a topline questionnaire that provides the exact question wording and sequencing, along with results from the current poll and previous polls in which the question was asked. One of the most significant decisions that can affect how people answer questions is whether the question is posed as an open-ended question, where respondents provide a response in their own words, or a closed-ended question, where they are asked to choose from a list of answer choices.
For example, in a poll conducted after the presidential election, people responded very differently to open- and closed-ended versions of the same question. In the closed-ended version, respondents were provided five options and could volunteer an option not on the list. Issues were chosen at least slightly more often when explicitly offered in the closed-ended version than when they had to be volunteered in the open-ended version.
Researchers will sometimes conduct a pilot study using open-ended questions to discover which answers are most common. They will then develop closed-ended questions that include the most common responses as answer choices.
In this way, the questions may better reflect what the public is thinking or how they view a particular issue. When asking closed-ended questions, the choice of options provided, how each option is described, the number of response options offered and the order in which options are read can all influence how people respond. One example of the impact of how categories are defined can be found in a January Pew Research poll. Psychological research indicates that people have a hard time keeping more than a handful of choices in mind at one time.
When the question asks about an objective fact, such as the religious affiliation of the respondent, more categories can be used, as with the question "What is your present religion, if any?" Most respondents have no trouble with this question because they can simply wait until they hear their religious tradition read before responding. In addition to the number and choice of response options offered, the order of answer categories can influence how people respond to closed-ended questions.
Randomization of response items does not eliminate order effects, but it does ensure that this type of bias is spread randomly across respondents. Ordinal scales, by contrast, should generally be presented in order so respondents can easily place their responses along the continuum, although the order can be reversed for some respondents. The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way.
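As a rough illustration of the randomization and scale-reversal described above, the following sketch (the question options and function names are invented for the example) shuffles nominal answer choices per respondent, while keeping an ordinal scale in order and reversing it for roughly half of respondents:

```python
import random

def present_options(options, respondent_seed):
    """Return answer choices in a randomized order for one respondent,
    so any order effect is spread evenly across the sample."""
    rng = random.Random(respondent_seed)
    shuffled = options[:]
    rng.shuffle(shuffled)
    return shuffled

def present_scale(scale, respondent_seed):
    """Ordinal scales keep their continuum, but roughly half of
    respondents see the scale reversed."""
    rng = random.Random(respondent_seed)
    return scale[:] if rng.random() < 0.5 else list(reversed(scale))

options = ["Economy", "Health care", "Education", "Immigration"]
scale = ["Strongly agree", "Agree", "Disagree", "Strongly disagree"]

print(present_options(options, respondent_seed=1))
print(present_scale(scale, respondent_seed=2))
```

Seeding by respondent keeps each person's presentation stable within an interview while still spreading order effects across the whole sample.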
Even small wording differences can substantially affect the answers people provide. An example of a wording difference that had a significant impact on responses comes from a January Pew Research Center survey.
There has been a substantial amount of research to gauge the impact of different ways of asking questions and how to minimize differences in the way respondents interpret what is being asked.
The issues related to question wording are more numerous than can be treated adequately in this short space. Here are a few of the important things to consider in crafting survey questions:
First, it is important to ask questions that are clear and specific and that each respondent will be able to answer. If a question is open-ended, it should be evident to respondents that they can answer in their own words and what type of response they should provide (an issue or problem, a month, a number of days, etc.). Closed-ended questions should include all reasonable responses (i.e., the list of answer choices should be exhaustive). It is also important to ask only one question at a time.
When a question touches on two topics at once, it is more effective to ask two separate questions, for example one about domestic policy and another about foreign policy. In general, questions that use simple and concrete language are more easily understood by respondents. It is especially important to consider the education level of the survey population when thinking about how easy it will be for respondents to interpret and answer a question.
Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. One question format that warrants caution is the agree-disagree statement, in which respondents are asked whether they agree or disagree with a particular statement. Research has shown that, compared with the better educated and better informed, less educated and less informed respondents have a greater tendency to agree with such statements.
A better practice is to offer respondents a choice between alternative statements. A Pew Research Center experiment with one of its routinely asked values questions illustrates the difference that question format can make.
Not only does the forced choice format yield a very different result overall from the agree-disagree format, but the pattern of answers among better- and lesser-educated respondents also tends to be very different. Questions about sensitive or socially desirable behaviors are especially prone to misreporting: research has shown that respondents understate alcohol and drug use, tax evasion and racial bias, and may overstate church attendance, charitable contributions and the likelihood that they will vote in an election.
In cross-sectional studies, a sample (or samples) is drawn from the relevant population and studied once. A successive independent samples design draws multiple random samples from a population at one or more times. Because each sample contains different individuals, such studies cannot necessarily identify the causes of change over time. For successive independent samples designs to be effective, the samples must be drawn from the same population and must be equally representative of it.
If the samples are not comparable, the changes between samples may be due to demographic differences rather than to time. In addition, the questions must be asked in the same way so that responses can be compared directly. Longitudinal studies take measures of the same random sample at multiple time points. Longitudinal studies are the easiest way to assess the effect of a naturally occurring event, such as divorce, that cannot be tested experimentally.
However, longitudinal studies are both expensive and difficult to do. Some participants inevitably drop out between waves, and because this attrition of participants is not random, samples can become less representative with successive assessments. To account for this, a researcher can compare the respondents who left the survey to those who did not, to see if they are statistically different populations.
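One simple way to check whether attrition is random, as suggested above, is to compare the baseline answers of panelists who stayed with those of panelists who dropped out. A minimal sketch, using invented data and a Welch-style t statistic:

```python
from statistics import mean, stdev
from math import sqrt

def attrition_check(baseline_scores, stayed):
    """Compare wave-1 scores of panelists who stayed vs. those who
    dropped out; a large t statistic suggests attrition is not random."""
    stayers = [s for s, kept in zip(baseline_scores, stayed) if kept]
    leavers = [s for s, kept in zip(baseline_scores, stayed) if not kept]
    diff = mean(stayers) - mean(leavers)
    se = sqrt(stdev(stayers) ** 2 / len(stayers) +
              stdev(leavers) ** 2 / len(leavers))
    return diff / se  # Welch-style t statistic

# Invented data: panelists with lower baseline scores drop out more often.
scores = [3, 4, 5, 5, 6, 6, 7, 7, 8, 9]
stayed = [False, False, False, True, True, True, True, True, True, True]
t = attrition_check(scores, stayed)
print(round(t, 2))
```

A large t here would flag that the remaining panel differs systematically from the original sample, so later waves should be interpreted (or weighted) accordingly.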
Respondents may also try to remain self-consistent across waves, giving the same answers even when their underlying views have changed. Questionnaires are the most commonly used tool in survey research.
However, the results of a particular survey are worthless if the questionnaire is written inadequately. One category of variables often measured in survey research is demographic variables, which are used to describe the characteristics of the people surveyed in the sample.
Reliable measures of self-report are defined by their consistency. It is important to note that there is evidence to suggest that self-report measures tend to be less accurate and reliable than alternative methods of assessing data (e.g., direct observation). Six steps can be employed to construct a questionnaire that will produce reliable and valid results. The way that a question is phrased can have a large impact on how a research participant will answer the question.
A respondent's answer to an open-ended question can be coded into a response scale afterwards, or analysed using more qualitative methods. Survey researchers should carefully construct the order of questions in a questionnaire. Several techniques have been recommended for reducing nonresponse in telephone and face-to-face surveys; brevity is also often cited as increasing response rates. A literature review found mixed evidence to support this claim for both written and verbal surveys, concluding that other factors may often be more important.
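Coding open-ended answers after the fact, as mentioned above, is often done with a codebook that maps each category to indicative keywords. A simplified keyword-matching sketch (the codebook and answers are invented for illustration):

```python
def code_response(text, codebook):
    """Assign an open-ended answer to the first category whose
    keywords appear in the text; otherwise code it 'other'."""
    lowered = text.lower()
    for category, keywords in codebook.items():
        if any(k in lowered for k in keywords):
            return category
    return "other"

codebook = {
    "economy": ["jobs", "economy", "inflation", "wages"],
    "health": ["health", "hospital", "insurance"],
}
answers = ["Rising inflation worries me", "Hospital wait times", "Potholes"]
codes = [code_response(a, codebook) for a in answers]
print(codes)  # → ['economy', 'health', 'other']
```

In practice such automatic coding is usually only a first pass; human coders review a sample of the assignments to check the codebook's validity.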
Survey methodologists have devoted much effort to determining the extent to which interviewee responses are affected by the physical characteristics of the interviewer. The main interviewer traits demonstrated to influence survey responses are race, gender, and relative body weight (BMI). Hence, the race of the interviewer has been shown to affect responses to measures regarding racial attitudes, interviewer sex to affect responses to questions involving gender issues, and interviewer BMI to affect answers to eating- and dieting-related questions.
The explanation typically provided for interviewer effects is social desirability bias: survey participants may attempt to project a positive self-image that conforms to the norms they attribute to the interviewer. Interviewer effects are one example of survey response effects.
An example of a double-barreled question is, "Please rate how strongly you agree or disagree with the following statement," where the statement itself bundles two distinct propositions. Surveys can be administered in three ways:

- Through the mail. Disadvantage: low response rate.
- By telephone. Advantages: higher response rates; responses can be gathered more quickly. Disadvantage: more expensive than mail surveys.
- Face-to-face. Advantages: highest response rates; better suited to collecting complex information. Disadvantage: very expensive.
Four sampling techniques are described here: simple random sampling, cluster sampling, stratified sampling, and nonrandom sampling. Simple random sampling is the most basic form of sampling: every member of the population has an equal chance of being selected, much as in a lottery. Random digit dialing is one practical application of this idea: telephone numbers are generated by a computer at random and called to identify individuals to participate in the survey. Cluster sampling is generally used when it is geographically impossible to undertake a simple random sample, and it requires that adjustments be made in statistical analyses. For example, in a face-to-face interview it is difficult and expensive to survey households across the nation.
Stratified sampling is used when a researcher wants to ensure that there are enough respondents with certain characteristics in the sample. The researcher first identifies the people in the population who have the desired characteristics, then randomly selects a sample from each group; this, too, requires that adjustments be made in statistical analyses. For example, a researcher may want to compare survey responses of African-Americans and Caucasians.
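The simple random and stratified designs described above can be sketched in a few lines of Python; the sampling frames here are invented for illustration:

```python
import random

rng = random.Random(42)  # seeded only so the illustration is reproducible

# Simple random sampling: every unit in the (hypothetical) frame
# has an equal chance of selection, like drawing lottery numbers.
frame = list(range(1, 1001))
srs = rng.sample(frame, k=50)  # draw without replacement

def stratified_sample(units, label, per_stratum, rng):
    """Draw a fixed number of units from each stratum so that
    small subgroups are guaranteed enough cases for comparison."""
    strata = {}
    for u in units:
        strata.setdefault(label(u), []).append(u)
    return {name: rng.sample(group, per_stratum)
            for name, group in strata.items()}

# Hypothetical frame of (id, group) pairs with one small subgroup "B".
people = [(i, "B" if i % 10 == 0 else "A") for i in range(1, 201)]
strat = stratified_sample(people, label=lambda p: p[1], per_stratum=10, rng=rng)

print(len(srs), {g: len(s) for g, s in strat.items()})
```

Note that because the strata are sampled at different rates relative to their population shares, estimates from the stratified draw must be reweighted, which is the "adjustment in statistical analyses" the text refers to.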
Common nonrandom sampling techniques include convenience sampling and snowball sampling. Because nonrandom samples cannot be generalized to the population of interest, it is problematic to make inferences about the population; in survey research, random, cluster, or stratified samples are preferable. Survey results are also subject to two kinds of error. Systematic error is the more serious of the two: it occurs when the survey responses are systematically different from the target population's responses. For example, if a researcher only surveyed individuals who answered their phone between 9 and 5, Monday through Friday, the results would be biased toward individuals who are unemployed. Sources of such bias include nonobservational error, in which individuals in the target population are systematically excluded from the sample (as in the example above), and observational error, in which respondents systematically answer survey questions incorrectly.
For example, surveys that ask respondents how much they weigh will probably underestimate the population's weight, because respondents are likely to underreport it. Random error, in contrast, is an expected part of survey research, and statistical techniques are designed to account for this sort of measurement error; it occurs because of natural and uncontrollable variations in the survey process.
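The distinction between random and systematic error can be illustrated with a small simulation (all figures are invented): unbiased noise averages out over a large sample, while systematic underreporting does not:

```python
import random
from statistics import mean

random.seed(7)  # reproducible illustration
true_weights = [random.gauss(170, 25) for _ in range(5000)]

# Random error: noisy but unbiased reports average out near the truth.
noisy = [w + random.gauss(0, 5) for w in true_weights]

# Systematic error: respondents underreport weight by ~5% on average,
# so no amount of averaging removes the bias.
underreported = [w * 0.95 for w in true_weights]

print(round(mean(true_weights), 1),
      round(mean(noisy), 1),
      round(mean(underreported), 1))
```

The noisy mean lands close to the true mean, while the underreported mean stays shifted regardless of sample size, which is why systematic error cannot be fixed by collecting more data.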
Questionnaires can include several types of questions, including open and closed questions. Open questions differ from other types of questions used in questionnaires in that they may produce unexpected results, which can make the research more original and valuable.
Survey research is a commonly used method of collecting information about a population of interest. There are many different types of surveys, several ways to administer them, and many methods of sampling.
A questionnaire is a research instrument consisting of a series of questions for the purpose of gathering information from respondents. Questionnaires can be thought of as a kind of written interview; they can be carried out face to face, by telephone, computer or post. It is important to distinguish the survey instrument itself from the survey research that it is designed to support. Among the strengths of surveys is that they are capable of obtaining information from large samples of the population.
In this article, we will take a look at a sample questionnaire about "Customer Satisfaction on QRZ Family Restaurant", and briefly discuss each section from the introduction to the end of the survey. A field of applied statistics of human research surveys, survey methodology studies the sampling of individual units from a population and associated techniques of survey data collection, such as questionnaire construction and methods for improving the number and accuracy of responses to surveys. Survey methodology includes instruments or procedures that ask one or more questions that may or may not be answered.