Week 6 – Assignment: Evaluate a Questionnaire
Instructions
Evaluate the survey provided to measure the attitude of hospital employees regarding patient safety. The survey can be found as part of Case 15.1 in the course textbook (Zikmund et al., 2013).
More information about the survey, its constructs (see Survey Items and Composite Measures), and survey guidance is available on the AHRQ (Agency for Healthcare Research and Quality) website. This survey is the Hospital Survey on Patient Safety Culture.
For your evaluation, be sure to consider:
- the information that is being sought,
- the content and wording of individual questions,
- the response formats for the questions,
- the level of measurement,
- and question sequence.
Length: Your paper should be 5-7 pages, not including the title and reference pages.
References: Include a minimum of five (5) scholarly sources.
References
Ten top tips for designing a questionnaire [Video file]. (2017). Retrieved from SAGE Research Methods.
Thwaites Bee, D., & Murdoch-Eaton, D. (2016). Questionnaire design: The good, the bad and the pitfalls. Archives of Disease in Childhood: Education and Practice Edition, 101(4), 210–212.
Zikmund, W., Babin, B. J., Carr, J., & Griffin, M. (2013). Business research methods (9th ed.). Mason, OH: Cengage Learning.
SAGE Research Methods Video
Ten Top Tips for Designing a Questionnaire (2016). London: SAGE Publications Ltd. Online ISBN: 9781473997592. DOI: https://dx.doi.org/10.4135/9781473997592
[MUSIC PLAYING] [Ten Top Tips for Designing a Questionnaire] A questionnaire is a set of questions used in survey research to collect information from people about their opinions, attitudes, beliefs, and behavior. Questionnaires let you collect data in a standardized way, which can be quantified and analyzed statistically. But how do you ensure your survey is measuring what you want to measure?
[Tip #1: Stay focused on the aims of your research] When you're planning your survey, ask yourself: what are the aims of my research? List out the things that you're trying to find out, and then break each topic down and think about how to construct a survey question which will measure the underlying concept.
[Tip #2: See what's already out there] Start by doing some research to see if similar questions have been asked in other established surveys, like the General Social Survey. Using tried and tested questions can help ensure the reliability and validity of your measures. Don't feel you need to reinvent the wheel, but do exercise caution. The internet is full of poorly designed questionnaires, so make sure you're drawing from a reliable source.
[Tip #3: Think about your mode] More and more surveys are administered online, but it's also possible to do surveys over the phone, face-to-face, or with old-fashioned pencil and paper. Your questionnaire should be tailored to the mode you choose. For online or mail surveys, remember to explain the purpose of the survey to respondents in a covering email or an introduction to the survey. If you're using more than one interviewer to administer your survey face-to-face or by phone, think carefully about how you'll train your interviewers to ensure consistency.
[Tip #4: Keep it short] Respondents are more likely to complete a questionnaire if you keep it short, especially if it's online.
[Tip #5: Think carefully about your questions] Design the survey questions with your specific audience in mind, wording them carefully and clearly. It is important that the respondent knows exactly what you are asking them and that questions are not open to multiple interpretations. Don't ask leading questions, and avoid long and complex questions which could confuse respondents.
[Tip #6: Question order matters] Make initial questions easy for respondents to answer. If you're planning to ask personal or sensitive questions, think about asking these later. And remember to use branching or skip logic to avoid asking people unnecessary questions.
[Tip #7: Make recall easy] If you're asking your respondents how often or how recently they have done something, be sure to be specific about the time frame. But think carefully about how able the respondent will be to answer the question. Can you remember what you did a year ago?
[Tip #8: Don't forget demographic questions] For some research questions, it's important to find out things about who your respondents are: for example, where they live or how old they are, as well as contact information in some cases. Don't forget to ask these questions if you hope to analyze your data in relation to demographics.
[Tip #9: Think about analysis] Try and think of the bigger picture and how you're going to analyze the data you collect. If you find you have lots of open-ended questions because you're finding it hard to think of the answers people might give, it's possible that a survey is not the best method to use. You might need to conduct some qualitative research first in order to develop a set of survey questions and answer options.
[Tip #10: Test, test, test] After testing the survey yourself, make sure you test it with your colleagues, or even better, a small sample of the respondents you hope to recruit. This will help highlight any issues or confusing questions, which you can then revise before sending it out more widely.
[Good luck with your research!] [MUSIC PLAYING]
Writing Survey Questions
Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions. Creating good measures involves both writing good questions and organizing them to form the questionnaire.
Questionnaire design is a multistage process that requires attention to many details at once. Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how people respond to later questions. Researchers are also often interested in measuring change over time and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.
Surveyors may conduct pilot tests or focus groups in the early stages of questionnaire development in order to better understand how people think about an issue or comprehend a question. Pretesting a survey is an essential step in the questionnaire design process to evaluate how people respond to the overall questionnaire and specific questions, especially when questions are being introduced for the first time.
For many years, surveyors approached questionnaire design as an art, but substantial research over the past forty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire. Here, we discuss the pitfalls and best practices of designing questionnaires.
Question development
There are several steps involved in developing a survey questionnaire. The first is identifying what topics will be covered in the survey. For Pew Research Center surveys, this involves thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media. We also track opinion on a variety of issues over time so we often ensure that we update these trends on a regular basis to better understand whether people’s opinions are changing.
At Pew Research Center, questionnaire development is a collaborative and iterative process where staff meet to discuss drafts of the questionnaire several times over the course of its development. We frequently test new survey questions ahead of time through qualitative research methods such as focus groups, cognitive interviews, pretesting (often using an online, opt-in sample), or a combination of these approaches. Researchers use insights from this testing to refine questions before they are asked in a production survey, such as on the American Trends Panel (ATP).
Measuring change over time
Many surveyors want to track changes over time in people’s attitudes, opinions and behaviors. To measure change, questions are asked at two or more points in time. A cross-sectional design surveys different people in the same population at multiple points in time. A panel, such as the ATP, surveys the same people over time. However, it is common for the set of people in survey panels to change over time as new panelists are added and some prior panelists drop out. Many of the questions in Pew Research Center surveys have been asked in prior polls. Asking the same questions at different points in time allows us to report on changes in the overall views of the general public (or a subset of the public, such as registered voters, men or Black Americans), or what we call “trending the data”.
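To make "trending the data" concrete, here is a minimal sketch in Python. The wave labels and responses are invented for illustration and are not Pew data.

```python
from collections import Counter

# Hypothetical answers to the same question, asked in three survey waves.
# A cross-sectional design samples different people each wave; a panel
# (like the ATP) asks the same people each time.
waves = {
    "2021": ["approve", "disapprove", "approve", "approve", "disapprove"],
    "2022": ["approve", "disapprove", "disapprove", "approve", "disapprove"],
    "2023": ["disapprove", "disapprove", "approve", "disapprove", "disapprove"],
}

# "Trending the data": report the share choosing each answer, wave by wave.
for wave, answers in sorted(waves.items()):
    counts = Counter(answers)
    shares = {ans: f"{100 * n / len(answers):.0f}%" for ans, n in counts.items()}
    print(wave, shares)
```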
When measuring change over time, it is important to use the same question wording and to be sensitive to where the question is asked in the questionnaire to maintain a similar context as when the question was asked previously (see question wording and question order for further information). All of our survey reports include a topline questionnaire that provides the exact question wording and sequencing, along with results from the current survey and previous surveys in which we asked the question.
The Center’s transition from conducting U.S. surveys by live telephone interviewing to an online panel (around 2014 to 2020) complicated some opinion trends, but not others. Opinion trends that ask about sensitive topics (e.g., personal finances or attending religious services) or that elicited volunteered answers (e.g., “neither” or “don’t know”) over the phone tended to show larger differences than other trends when shifting from phone polls to the online ATP. The Center adopted several strategies for coping with changes to data trends that may be related to this change in methodology. If there is evidence suggesting that a change in a trend stems from switching from phone to online measurement, Center reports flag that possibility for readers to try to head off confusion or erroneous conclusions.
Open- and closed-ended questions
One of the most significant decisions that can affect how people answer questions is whether the question is posed as an open-ended question, where respondents provide a response in their own words, or a closed-ended question, where they are asked to choose from a list of answer choices.
For example, in a poll conducted after the 2008 presidential election, people responded very differently to two versions of the question: “What one issue mattered most to you in deciding how you voted for president?” One was closed-ended and the other open-ended. In the closed-ended version, respondents were provided five options and could volunteer an option not on the list.
When explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy. Moreover, among those asked the closed-ended version, fewer than one-in-ten (8%) provided a response other than the five they were read. By contrast, fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question. All of the other issues were chosen at least slightly more often when explicitly offered in the closed-ended version than in the open-ended version. (Also see “High Marks for the Campaign, a High Bar for Obama” for more information.)
Researchers will sometimes conduct a pilot study using open-ended questions to discover which answers are most common. They will then develop closed-ended questions based on that pilot study, including the most common responses as answer choices. In this way, the questions may better reflect what the public is thinking, how they view a particular issue, or bring to light issues the researchers may not have been aware of.
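As a sketch of that pilot-to-closed-question workflow (the responses and the cutoff below are hypothetical), one could tally the open-ended pilot answers and promote the most frequent ones to fixed answer choices:

```python
from collections import Counter

# Hypothetical open-ended pilot responses to "What issue matters most to you?"
pilot_answers = [
    "economy", "health care", "economy", "education",
    "economy", "immigration", "health care", "economy",
]

# Keep the most common answers as the closed-ended choices...
TOP_K = 3
choices = [answer for answer, _ in Counter(pilot_answers).most_common(TOP_K)]

# ...and add a catch-all so the option list stays exhaustive.
choices.append("Other (please specify)")
print(choices)  # ['economy', 'health care', 'education', 'Other (please specify)']
```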
When asking closed-ended questions, the choice of options provided, how each option is described, the number of response options offered, and the order in which options are read can all influence how people respond. One example of the impact of how categories are defined can be found in a Pew Research Center poll conducted in January 2002. When half of the sample was asked whether it was “more important for President Bush to focus on domestic policy or foreign policy,” 52% chose domestic policy while only 34% said foreign policy. When the category “foreign policy” was narrowed to a specific aspect – “the war on terrorism” – far more people chose it; only 33% chose domestic policy while 52% chose the war on terrorism.
In most circumstances, the number of answer choices should be kept to a relatively small number – just four or perhaps five at most – especially in telephone surveys. Psychological research indicates that people have a hard time keeping more than this number of choices in mind at one time. When the question is asking about an objective fact and/or demographics, such as the religious affiliation of the respondent, more categories can be used. In fact, they are encouraged to ensure inclusivity. For example, Pew Research Center’s standard religion questions include more than 12 different categories, beginning with the most common affiliations (Protestant and Catholic). Most respondents have no trouble with this question because they can expect to see their religious group within that list in a self-administered survey.
In addition to the number and choice of response options offered, the order of answer categories can influence how people respond to closed-ended questions. Research suggests that in telephone surveys respondents more frequently choose items heard later in a list (a “recency effect”), and in self-administered surveys, they tend to choose items at the top of the list (a “primacy” effect).
Because of concerns about the effects of category order on responses to closed-ended questions, many sets of response options in Pew Research Center’s surveys are programmed to be randomized so that the options are not asked in the same order for each respondent. Rotating or randomizing means that questions or items in a list are not asked in the same order to each respondent. Answers to questions are sometimes affected by questions that precede them. By presenting questions in a different order to each respondent, we ensure that each question gets asked in the same context as every other question the same number of times (e.g., first, last or any position in between). This does not eliminate the potential impact of previous questions on the current question, but it does ensure that this bias is spread randomly across all of the questions or items in the list. For instance, in the example discussed above about what issue mattered most in people’s vote, the order of the five issues in the closed-ended version of the question was randomized so that no one issue appeared early or late in the list for all respondents.
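A per-respondent randomization can be sketched as follows. The question's option labels here are placeholders, and real survey platforms handle this in their questionnaire-programming layer:

```python
import random

OPTIONS = ["the economy", "health care", "terrorism",
           "the war in Iraq", "moral values"]  # placeholder labels

def options_for_respondent(respondent_id: int) -> list[str]:
    """Return the answer choices in a random order that is reproducible
    for a given respondent (the shuffle is seeded by their ID)."""
    rng = random.Random(respondent_id)
    shuffled = OPTIONS.copy()
    rng.shuffle(shuffled)
    return shuffled

# Each respondent sees the same five options, but in a different order.
for rid in (101, 102):
    print(rid, options_for_respondent(rid))
```

Over many respondents, each option lands in each position roughly equally often, which is what spreads any order effect evenly across the list.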
Questions with ordinal response categories – those with an underlying order (e.g., excellent, good, only fair, poor OR very favorable, mostly favorable, mostly unfavorable, very unfavorable) – are generally not randomized because the order of the categories conveys important information to help respondents answer the question. Generally, these types of scales should be presented in order so respondents can easily place their responses along the continuum, but the order can be reversed for some respondents. For example, in one of Pew Research Center’s questions about abortion, half of the sample is asked whether abortion should be “legal in all cases, legal in most cases, illegal in most cases, illegal in all cases,” while the other half of the sample is asked the same question with the response categories read in reverse order, starting with “illegal in all cases.” Again, reversing the order does not eliminate the recency effect but distributes it randomly across the population.
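Reversing an ordinal scale for a random half of the sample is even simpler. A minimal sketch, using the abortion item's categories from the paragraph above:

```python
import random

SCALE = ["legal in all cases", "legal in most cases",
         "illegal in most cases", "illegal in all cases"]

def scale_for_respondent(rng: random.Random) -> list[str]:
    # Half the sample sees the scale as written, half sees it reversed;
    # either way the categories stay in their natural order.
    return SCALE if rng.random() < 0.5 else list(reversed(SCALE))

print(scale_for_respondent(random.Random(7)))
```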
Question wording
The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way. Even small wording differences can substantially affect the answers people provide.
An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” 68% said they favored military action while 25% said they opposed military action. However, when asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule even if it meant that U.S. forces might suffer thousands of casualties,” responses were dramatically different; only 43% said they favored military action, while 48% said they opposed it. The introduction of U.S. casualties altered the context of the question and influenced whether people favored or opposed military action in Iraq.
There has been a substantial amount of research to gauge the impact of different ways of asking questions and how to minimize differences in the way respondents interpret what is being asked. The issues related to question wording are more numerous than can be treated adequately in this short space, but below are a few of the important things to consider:
First, it is important to ask questions that are clear and specific and that each respondent will be able to answer. If a question is open-ended, it should be evident to respondents that they can answer in their own words and what type of response they should provide (an issue or problem, a month, number of days, etc.). Closed-ended questions should include all reasonable responses (i.e., the list of options is exhaustive) and the response categories should not overlap (i.e., response options should be mutually exclusive). Further, it is important to discern when it is best to use forced-choice closed-ended questions (often denoted with a radio button in online surveys) versus “select-all-that-apply” lists (or check-all boxes). A 2019 Center study found that forced-choice questions tend to yield more accurate responses, especially for sensitive questions. Based on that research, the Center generally avoids using select-all-that-apply questions.
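For numeric categories, exhaustiveness and mutual exclusivity can be checked mechanically. A small sketch with hypothetical age brackets:

```python
# A consistency check for closed-ended numeric categories: the brackets
# below are hypothetical; the check flags overlaps and gaps.
AGE_BRACKETS = [(18, 29), (30, 49), (50, 64), (65, 120)]

def check_categories(brackets: list[tuple[int, int]]) -> None:
    ordered = sorted(brackets)
    for (lo1, hi1), (lo2, hi2) in zip(ordered, ordered[1:]):
        if lo2 <= hi1:
            raise ValueError(f"Options overlap: {(lo1, hi1)} and {(lo2, hi2)}")
        if lo2 != hi1 + 1:
            raise ValueError(f"Gap between {(lo1, hi1)} and {(lo2, hi2)}")

check_categories(AGE_BRACKETS)  # passes: mutually exclusive, no gaps
print("Categories are mutually exclusive and contiguous.")
```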
It is also important to ask only one question at a time. Questions that ask respondents to evaluate more than one concept (known as double-barreled questions) – such as “How much confidence do you have in President Obama to handle domestic and foreign policy?” – are difficult for respondents to answer and often lead to responses that are difficult to interpret. In this example, it would be more effective to ask two separate questions, one about domestic policy and another about foreign policy.
In general, questions that use simple and concrete language are more easily understood by respondents. It is especially important to consider the education level of the survey population when thinking about how easy it will be for respondents to interpret and answer a question. Double negatives (e.g., do you favor or oppose not allowing gays and lesbians to legally marry) or unfamiliar abbreviations or jargon (e.g., ANWR instead of Arctic National Wildlife Refuge) can result in respondent confusion and should be avoided.
Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. For example, in a 2005 Pew Research Center survey, 51% of respondents said they favored “making it legal for doctors to give terminally ill patients the means to end their lives,” but only 44% said they favored “making it legal for doctors to assist terminally ill patients in committing suicide.” Although both versions of the question are asking about the same thing, the reaction of respondents was different. In another example, respondents have reacted differently to questions using the word “welfare” as opposed to the more generic “assistance to the poor.” Several experiments have shown that there is much greater public support for expanding “assistance to the poor” than for expanding “welfare.”
We often write two versions of a question and ask half of the survey sample one version of the question and the other half the second version. Thus, we say we have two forms of the questionnaire. Respondents are assigned randomly to receive either form, so we can assume that the two groups of respondents are essentially identical. On questions where two versions are used, significant differences in the answers between the two forms tell us that the difference is a result of the way we worded the two versions.
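The logic of such a split-form experiment can be sketched with a two-proportion z-test. The counts below are invented to echo the Iraq wording example above; they are not the actual survey counts:

```python
import math

# Hypothetical split-form experiment: each random half of the sample saw
# a different wording of the same question.
favor_a, n_a = 340, 500   # form A: 68% favor
favor_b, n_b = 215, 500   # form B: 43% favor

p_a, p_b = favor_a / n_a, favor_b / n_b
p_pool = (favor_a + favor_b) / (n_a + n_b)         # pooled proportion
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / se                               # two-proportion z-test

print(f"form A: {p_a:.0%}, form B: {p_b:.0%}, z = {z:.1f}")
# |z| > 1.96 suggests the gap is unlikely to be sampling noise.
```

Because respondents are assigned to the forms at random, a large |z| points to the wording, not the samples, as the source of the difference.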
One of the most common formats used in survey questions is the “agree-disagree” format. In this type of question, respondents are asked whether they agree or disagree with a particular statement. Research has shown that, compared with the better educated and better informed, less educated and less informed respondents have a greater tendency to agree with such statements. This is sometimes called an “acquiescence bias” (since some kinds of respondents are more likely to acquiesce to the assertion than are others). This behavior is even more pronounced when there’s an interviewer present, rather than when the survey is self-administered. A better practice is to offer respondents a choice between alternative statements. A Pew Research Center experiment with one of its routinely asked values questions illustrates the difference that question format can make. Not only does the forced choice format yield a very different result overall from the agree-disagree format, but the pattern of answers between respondents with more or less formal education also tends to be very different.
One other challenge in developing questionnaires is what is called “social desirability bias.” People have a natural tendency to want to be accepted and liked, and this may lead people to provide inaccurate answers to questions that deal with sensitive subjects. Research has shown that respondents understate alcohol and drug use, tax evasion and racial bias. They also may overstate church attendance, charitable contributions and the likelihood that they will vote in an election. Researchers attempt to account for this potential bias in crafting questions about these topics. For instance, when Pew Research Center surveys ask about past voting behavior, it is important to note that circumstances may have prevented the respondent from voting: “In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote?” The choice of response options can also make it easier for people to be honest. For example, a question about church attendance might include three of six response options that indicate infrequent attendance. Research has also shown that social desirability bias can be greater when an interviewer is present (e.g., telephone and face-to-face surveys) than when respondents complete the survey themselves (e.g., paper and web surveys).
Lastly, because slight modifications in question wording can affect responses, identical question wording should be used when the intention is to compare results to those from earlier surveys. Similarly, because question wording and responses can vary based on the mode used to survey respondents, researchers should carefully evaluate the likely effects on trend measurements if a different survey mode will be used to assess change in opinion over time.
Question order
Once the survey questions are developed, particular attention should be paid to how they are ordered in the questionnaire. Surveyors must be attentive to how questions early in a questionnaire may have unintended effects on how respondents answer subsequent questions. Researchers have demonstrated that the order in which questions are asked can influence how people respond; earlier questions can unintentionally provide context for the questions that follow.