How to Design a Questionnaire for Research: A Latimer Appleby Useful Guide
There are, as you can imagine, entire books devoted to the subject of questionnaire design. This guide, based on our experience at Latimer Appleby, provides advice on some of the key issues involved in creating effective quantitative research surveys.
Start with the brief
Everything flows from the brief: get the brief right and the rest will follow.
Go from topics to questions
Work from the general to the specific; in the case of a questionnaire go from general topics to specific question construction. Get the topics right to start with, and the questions will follow naturally.
Follow the logic
When it comes to building the questions themselves, our recommendation is that you follow a logical flow, moving from one topic to the next and hence from one question to the next. If you think of the survey or questionnaire as a conversation between two parties, it will be easier to maintain a natural and logical flow. You wouldn’t jump randomly from topic to topic in a conversation, so our rule is: don’t do it in survey design.
Choose your words carefully
Always use clear, straightforward and unambiguous language. Remember your audience; put yourself in their place and make it as easy as possible for them to understand the questions you are asking and the type of response you are seeking. In much of the survey work we undertake at Latimer Appleby, we have trained interviewers on hand to administer the questionnaire. However, this is not always the case, especially with self-completion surveys, e.g. online surveys. If the questions are confusing and difficult to understand, the respondent will quickly give up and either abandon the task or, even worse, just put down anything.
Pre-coding the responses
It is important to plan the questionnaire design, which means thinking through the possible answer options. If the subject matter is new to you, you may need to conduct supplementary research, for example qualitative research in the form of depth interviews or focus groups with your target audience. This will help you develop a series of potential responses to each question and reduce the need for open-ended questions. The open-ended question has its place, but it is sometimes seen as the refuge of the lazy researcher: it shifts effort from the researcher to the respondent, and the resulting answers are harder to quantify in post-survey analysis.
Use of the ‘other’ option
Even where you have researched all the options, there may well be something you have overlooked. For example, your initial research may show that, when asked about their favourite colour, 95% of respondents answer red, blue, green or yellow. In questions such as this, we would normally add the option of ‘other’, which would then be completed as an open-ended question (or open ender, to use the jargon). In this example, the final 5% may throw up some interesting colour ideas that you had not considered.
Single versus multiple responses
Whilst you may think that many of the questions you ask will generate a single response, you may find a greater tendency towards multiple responses than you imagined. Returning to the colour question, although you may want to determine the proportion choosing green as their favourite, a sizeable proportion of your audience may in fact have more than one favourite colour. Forcing them to give a single answer, whilst easier for you, may lead respondents either to give an inaccurate reply or to skip the question completely.
Avoid leading your audience
One of the risks of questionnaire design is that of producing leading questions: ones that prompt the respondent to answer in a particular way. For this reason you should carefully review each question before the survey goes live to ensure that your wording is not introducing response bias.
Where relevant, Latimer Appleby often try to include scaled responses. This means replacing simple ‘yes’ (e.g. agree) or ‘no’ (e.g. disagree) response options with, for example, a 5-point scale from ‘agree strongly’ through to ‘disagree strongly’. An odd-numbered scale, such as 5 or 7 points, also allows for a neutral midpoint, although for some questions you may wish to avoid allowing respondents to be fence-sitters. Ultimately, we are looking to determine just how strongly an opinion is held rather than forcing a straight positive or negative response.
Placing sensitive questions
There is some debate about the placement of sensitive questions and, in our opinion, no right or wrong answer. Some argue that sensitive questions are best left to the end of the survey, partly to allow the respondent to complete the bulk of the survey and build up a rapport with it first. Others take the view that sensitive questions should be delivered ‘head on’ at the start of the survey. Latimer Appleby have taken both approaches; however, the general trend today seems to be to place sensitive questions at the beginning of the questionnaire rather than at the end.
Classification questions
Again, there is no right or wrong answer as to where to place classification questions. The basic rule, however, is that if routing through the questionnaire depends on the response to a key classification question, such as gender or life stage, then that question should be placed early in the survey. Otherwise the convention seems to be to place these questions at the end.
An iterative process
Remember that much of good survey design is arrived at iteratively, through trial and error. Where possible, it is good practice to test or pilot your survey, ideally with members of your sample universe. Of course, the research process does not end with survey construction. The real value is in the interpretation and, for that reason, it is vital that those conducting the analysis and reporting have been part of the questionnaire design process too.