It’s important to consider ‘relevant alienation factors’ when deciding how much time a survey should be in field, argues Matt Balogh.
Thanks to dashboards, it seems that almost every stage of a market research project can be accelerated – from programming through to reporting – without detriment to the outcome.
But the exception proves the rule: the more time respondents are given to participate, the more representative the sample and the more accurate the results. It should be noted that this does not apply to passive data collection, only to self-reported surveys. It should also be noted that, for a significant amount of commercial research, the difference is so minimal that it doesn’t matter. But the more sensitive the research – that is, the more it deals with subsets of the community that differ in how likely they are to respond to a survey – the more time matters. It matters even more when those same factors are also part of the topic of the research.
For example, in the case of a post-hospital-stay patient satisfaction survey, the reason the patient was in hospital has a significant impact. Clearly, if a patient were admitted for a very serious ailment, this would influence both their likelihood of being able to respond to the survey and, critically, the nature of their responses. This might seem like an extreme example, but it applies to almost all social research to some extent. For example, culturally and linguistically diverse communities may be less likely to respond to mainstream surveys, and are also likely to be of special interest in social research. The same goes for people with disabilities, people in rural areas . . . and the list goes on. Let’s call these ‘relevant alienation factors’.
Other considerations also come into play, particularly survey length and the effort required to answer the questions. There are also many factors that influence accuracy but are unrelated to the time a survey is in field, such as the source of the sample.
When time matters
Let’s consider two examples from extreme ends of the spectrum:
Firstly, consider a short survey about grocery product packaging – perhaps the design of labels on jars of coffee – sent to a non-probability sample frame such as an online panel. The burden is low, the survey is likely to be in field for only a few days and there are no apparent ‘relevant alienation factors’. The survey may be sent to a large panel and achieve a low response rate from that group – but the response rate doesn’t really matter, bearing in mind that the sample frame is an opt-in group anyway, and there is no reason to think that non-respondents are harbouring significantly different views. Technically, the accuracy is low, but perfectly sufficient for the need. The survey could probably be replicated amongst the total population – with more time and a rigorous limited-sample call design to increase response rates – with similar results.
The example above could be applied to a large proportion of commercial research. Some of the elements would vary in a linear fashion:
- More questions would increase the burden
- More time and/or reminder emails or text messages might bring up the response rate
- The accuracy would fluctuate moderately and so on . . .
But none of these would be significant enough to matter.
Secondly, let’s revisit the patient satisfaction research mentioned above. All of the factors mentioned above come into play more acutely. A patient who was in hospital for a simple and successful procedure may be recovering well – with nothing but a bandage and a few stitches to remind them of their ordeal. They are available to respond to surveys, even long ones. The response rate at this end of the spectrum is likely to be good.
However, an elderly patient who has undergone a more serious procedure, such as a hip replacement, would be significantly less disposed to participate, in several ways. The nature of their condition makes them more difficult to reach because they are less mobile; their age and condition make them less willing to complete longer surveys; and their media devices (whether communication is via email, text message or phone call) are more likely to be intercepted by family or carers, who may suggest that the respondent is not fit to participate at the time, or perhaps not at all.
By comparing the two hypothetical cases in this example, we can see a strong trade-off between speed and accuracy – a survey conducted over only a few days would reflect the experience of the least important (minor procedure) patients and systematically exclude the most important respondents: the elderly, those who had more significant procedures, people with communication challenges and so on. At the extreme end, patients with the worst outcome – death – would be very unlikely to respond, although there are some exceptions to that*.
In fact, the difference between the results of a survey in field for only a few days with a ‘good’ response rate and the same survey conducted over a much longer period with a very high response rate would be quite significant.
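The bias at work in the two hypothetical cases can be sketched numerically. The figures below – the group shares, satisfaction scores and cumulative response rates – are invented purely for illustration, not drawn from any real study, but they show the mechanism: when the group with the worse experience responds more slowly, a short field period overstates average satisfaction.

```python
# Hypothetical figures illustrating non-response bias from a short field period.
# Two patient groups: the 'serious procedure' group is less satisfied AND
# slower to respond, so early results over-represent the satisfied group.
groups = {
    "minor procedure":   {"share": 0.7, "satisfaction": 8.0,
                          "responded_by_day": {3: 0.60, 21: 0.80}},
    "serious procedure": {"share": 0.3, "satisfaction": 5.0,
                          "responded_by_day": {3: 0.10, 21: 0.70}},
}

def survey_mean(days_in_field):
    """Mean satisfaction among those who have responded by a given day."""
    responded = {name: g["share"] * g["responded_by_day"][days_in_field]
                 for name, g in groups.items()}
    total = sum(responded.values())
    return sum(responded[name] * groups[name]["satisfaction"]
               for name in groups) / total

# True population mean: 0.7 * 8.0 + 0.3 * 5.0 = 7.1
true_mean = sum(g["share"] * g["satisfaction"] for g in groups.values())

print(f"True population mean: {true_mean:.2f}")   # 7.10
print(f"3 days in field:      {survey_mean(3):.2f}")   # 7.80 - overstated
print(f"21 days in field:     {survey_mean(21):.2f}")  # 7.18 - closer to truth
```

With these assumed numbers, closing the survey after three days inflates the satisfaction score by about 0.7 of a point; leaving it in field for three weeks brings the estimate to within a tenth of a point of the true figure. The exact sizes depend entirely on the invented inputs, but the direction of the bias follows whenever response speed correlates with the measure itself.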
The point is that, as long as respondents need to participate actively, there is always a trade-off between time and accuracy. It is not a binary yes/no influence. For most commercial research, this trade-off is minor and the research can be conducted extremely quickly with no significant loss of accuracy, given the questions being asked. But the more the subject of the research affects how easily respondents can participate, the more the time made available to respond affects the accuracy.
Author: Matt Balogh, Second Set of Eyes, AMSRS Fellow
*Join the discussion (and discover how a dead person can be a legitimate respondent) on LinkedIn
This article also appears in the February-April 2019 edition of AMSRS publication, Research News – Speed vs Accuracy