Change in questions and in methods can improve insights and clients’ budget effectiveness

Change in tracking studies can improve insights and clients’ budget effectiveness, as a recent experience of taking over a long-running tracking survey about repair service response speeds showed.

A new client had us continue a long-running telephone satisfaction tracking survey.

We added a separate mobile sample and an online sample, to see whether these additional survey methods would add audience coverage or could serve as less expensive replacements, while generating results consistent with past findings.

The mobile survey questionnaire replicated the existing landline survey, as did the online survey, though the online questionnaire added “Can’t say” options to questions where needed. (Had we written the original landline telephone survey, we would have included “Can’t say” options in that as well.)

This year, we repeated the previous years’ practice of having the telephone interviewers re-ask the question unchanged if a landline or mobile participant initially said they couldn’t answer it.

When the three surveys were complete, we compared the answers from each method.

For all but one question, the answer proportions were the same across the three methods. This matching suggested the less expensive, more inclusive online approach could be used in future, with significant budget savings for the client.

But there was one sticking point – the question asking whether the speed with which the client responded to emergency service requests was appropriate.

With each survey method, 4% of respondents said the standard response time was not appropriate (not fast enough). Of those who answered in the landline and mobile surveys, 94% said the standard response time was appropriate.

Only 58% of the online sample said the standard response time was appropriate, while 36% said they “Can’t say” whether the standard time was appropriate.
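
For readers who want to check whether a gap of that size could simply be sampling noise rather than a real mode effect, a minimal chi-square sketch is below. The per-mode sample size (400 completes) is an assumption for illustration only, as the actual sample sizes are not reported above; only the percentages come from the study.

    # Minimal sketch, assuming n = 400 completes per mode (hypothetical;
    # not reported in the article). Only the 94% and 58% figures are from
    # the study itself.
    from scipy.stats import chi2_contingency

    n = 400  # assumed completes per survey mode

    # Rows: telephone (landline/mobile combined) vs online.
    # Columns: said the response time was "appropriate" vs any other answer.
    phone = [round(n * 0.94), round(n * 0.06)]
    online = [round(n * 0.58), round(n * 0.42)]

    chi2, p_value, dof, _expected = chi2_contingency([phone, online])
    print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p_value:.3g}")
    # A p-value far below 0.05 indicates the 94% vs 58% gap is much larger
    # than sampling variation alone would produce at these assumed sample sizes.

At realistic tracking-survey sample sizes, a gap of 36 percentage points is unlikely to be sampling error alone, which is consistent with reading it as a questionnaire effect rather than chance.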

The difference between the online and the landline/mobile telephone survey findings suggests that repeating the question unchanged, when there is no “Can’t say” option, forces an answer from participants who may not actually hold a view. Whether forcing an expressly positive or expressly negative answer is appropriate when a participant says they “Can’t say” is a question we should consider.

The practice in telephone surveys of repeating the question when a “Can’t say” answer is first given may therefore lead to unwarranted assumptions of service satisfaction.

We suggest the more accurate measure (which, in this case, would include a “Can’t say” option) should be used, because knowing the proportion of customers who really had a view on the appropriateness of the repair speed standards would enable more effective communication of what service speeds are appropriate. Without this more precise measure, such information is less likely to be included in the client’s communications campaigns, and those campaigns could be less effective than they should be.

These findings show the impact of a minor change in this tracking survey (adding a “Can’t say” answer option), and the benefit of trialling such changes before altering the tracking survey questionnaire or survey method.

The findings also show, now that the “Can’t say” issue has been identified, that marked budget savings are available should the client choose to switch to the less expensive online method for their future tracking surveys.

We assume that we all now include “Can’t say” options in our telephone, mobile and online surveys, so this specific finding may be of more historical interest. But our view is that we, as professional researchers, should review our techniques and our questionnaires regularly for improvements, and then test whether those improvements will benefit our clients’ understanding and their budget efficiency.

Of course, one study, just as one swallow, may not herald a new spring, and these results may be an outlier.

We’d be interested to see whether these findings are replicated in other studies and look forward to our AMSRS colleagues’ advice.

 

Philip Derham, Director, Derham Marketing Research Pty. Ltd.


1 Comment

  1. You might be interested in some work by Krosnick and colleagues that concluded that “Don’t know” options should NOT be used in any survey. His studies pushed respondents who gave non-committal answers to say which of the pre-codes they would choose if they had to. He found that the distribution of responses obtained this way did not differ from the distribution from those who gave an opinion the first time, and that correlations with other measures were the same.
    I published a paper in the AJMSRS (when it was still running) in 2011 comparing parallel surveys online and by CATI, with some items online having no DK option but allowing respondents to not answer. If they did not answer, a new version of the item popped up that included a DK option. However, because online panel members have learned that they have to give an answer or be asked the question again, hardly anyone online replied DK to these items, well below the (still low) rate for those interviewed by CATI. However, on items where a DK option was read out in CATI or shown on screen online, DK was much higher online. One way to deal with this might be to say up front, in both CATI and online, that respondents can say they don’t know how to answer but will then be asked to explain in their own words what stops them from being able to give an opinion. This would equalise the effort of giving DK responses across modes. I haven’t had the chance to try this yet.
