On a Scale of 1 to 10, the Right Answer is 10

The last time I went on a short cruise the service was very good, and you couldn’t help but notice how hard the staff was working. But – just in case you didn’t – they spent a lot of time teaching us how to answer the company’s customer satisfaction “survey.”

They explained that the results affect their compensation and future, and they shared the question topics and rating scale in detail (a one-to-ten scale, with ten being best). Although the questionnaire said ten was for the best service and one for the worst, the way it really works, they explained, is that ten is best and nine is "worst"; in other words, nothing but a ten counts.

This situation isn’t limited to the travel sector, of course. If you’ve bought a car lately, you may have been coached through a customer satisfaction “survey” by the salesperson or invited to fill out a “survey” if you were happy with the service. One such “survey” I saw recently said, “If for any reason you feel you cannot answer ‘Very Satisfied’ and ‘Definitely Recommend,’ then please call our Manager… so that we may resolve your concerns immediately.”

Another said, “How are we doing?… An 8 is great, but 9 or 10 means you’ll visit us again!!”

It’s not surprising that staff take matters into their own hands when they believe these scores are critical to their jobs and compensation. Many of these “surveys” may be used within the sponsoring companies to drive service improvements; if so, the open-ended follow-up questions are of more interest to them than the numerical scores. These companies may run a separate research program to get a true measure of customer satisfaction.

The practice of coaching may or may not be a problem for the sponsoring companies. But those of us in the market research industry should worry about the effect of this kind of “survey” execution on our ability to measure opinions accurately. After all, what message does it send to research participants in general? We tend to be impatient with people who don’t perform to our expectations in online surveys, but when someone is coached on how to answer a survey, do we breed public cynicism about the whole process? Why should people answer our surveys honestly and thoughtfully when they’ve seen for themselves how much coaching goes on, and therefore how unreliable survey results must be?

The people I was travelling with weren’t bothered by this; they were pretty cynical about research in general. “Don’t people come to you with a research project and tell you what answers they want you to deliver?” I was asked. Well no, they don’t; and if they did, we wouldn’t oblige. But perhaps it’s not surprising that this could be a common assumption, given some of the “surveys” people are exposed to.

Should there be some industry standard to address this? Should surveys where coaching is part of the process be labelled as such — “Employees had the opportunity to discuss this survey with respondents before they took it,” for example? Or do we simply distance ourselves, viewing these “surveys” as a completely different type of research from the “real” research we do, and rejecting the argument that they call every type of research into question in the public’s mind?

Yes, the service was great on my trip – but this traveler was a non-responder when asked to rate it on a scale of one to ten.

By Jackie Lorch, Vice President, Global Knowledge Management
Originally sourced from SSI Blog

About AMSRS
The Australian Market & Social Research Society Limited (AMSRS) is a not-for-profit professional membership body of over 2,000 market and social research professionals who are dedicated to increasing the standard and understanding of market and social research in Australia. The Society assists members to develop their careers by heightening professional standards and ethics in the fields of market and social research.

1 Comment

  1. I agree with this completely – there is a growing trend toward coaching respondents on how to respond, and I think it is a dangerous approach for companies to take. It naturally makes customers cynical, about the company itself and about research in general. It can also backfire spectacularly: I strongly resent being told by a customer service rep what score I should give them, so unless their assistance has been truly exceptional, it usually results in my giving a lower score than I would have otherwise, because I see that as bad customer service. It is also self-defeating for companies to set such high benchmarks for surveys when they know the staff can influence the respondent. Such surveys are of use for PR only, and even then, once people become aware of how these excellent results are achieved, the results become hollow PR. Companies that take this approach are doing themselves a disservice.
