Research News Live

ALCHEMY! Turning data into insight

Every day, 2.5 quintillion bytes of data are created, and the pace is accelerating with the growth of the Internet of Things (IoT). Bernard Marr, writing in Forbes in May 2018, quoted some mind-blowing statistics: 90 percent of the data in the world was generated in the last two years alone.

As an industry that both creates and mines data, what does this mean for market research? Let’s take a step back.

In medieval times people sought a universal elixir. They failed, just as modern insighters fail if we proselytise that any one approach is ‘the future’. We search for truth through data – but history tells us there is no universal solution.

Rather, we need to look beyond simply the data to harness the skills of researchers, tech experts, insight entrepreneurs and intrapreneurs to unlock new truths and create a stronger ecosystem – where each collaborates to create a whole that is greater than the sum of its parts.

Marketers are awash with data. Which data sources should we use? Traditional metrics seem like old, blunt tools. But what should we put in their place? How do we deal with the increasingly systemic limitations of survey work and integrate new tech methods whilst recognising both their synergies and their shortcomings?

Then there are thorny ethical issues. How do we navigate the right path, choosing the one we should take for the good of all, rather than one we can take for personal gain? We need to respect privacy and personal data at the same time as using it for the benefit of humankind. We have the Cambridge Analytica story to warn us.

Who will be the “modern insighter”? Do they have a living, breathing heart, or artificial intelligence processing trillions of data points to get further, faster, than any human being? Will clients be consulting their friendly insight director at a research agency more or less frequently than Watson (or its equivalent)?

Will the robots know us better than our family? There was the incredible story about how Target exposed a teenage girl’s pregnancy through its data algorithm. Her father complained to the store manager about the mailing she had received full of maternity ads. When the manager called the father a few days later to apologise again, the father admitted that he had just found out his 16-year-old daughter was due in August.

And what about Alexa? Will she spill our secrets? What will the machines and data teach us? We have lots of lessons to learn from outside our immediate industry.

The Learning Clinic
When a company called The Learning Clinic replaced the clipboards at the end of hospital beds, used to record patients’ observations, with a digital solution, the benefits were much greater than they imagined.

The data from millions of patient observations meant that they could predict who might be about to become ill with norovirus. It enabled them to know which wards needed more staff (where patients needed more regular observations or were being admitted or discharged) and which could safely reduce their staff.

Here are some of the impressive stats that using this data facilitated*:
• Croydon University Hospital reduced cardiac arrests by 70%
• Portsmouth Hospitals NHS Trust reduced mortality by 15% and norovirus outbreaks by 95%
• Northampton General Hospital reduced patient observation errors by 300%
• University Hospitals Coventry and Warwickshire NHS Trust reduced length of stay by 10% and ICU admission by 50%
• Dorset County Hospital reduced emergency admissions’ length of stay by 20%

What can we learn from Animal Intelligence?
Clever Hans was an Orlov Trotter horse that was claimed to have performed arithmetic and other intellectual tasks. When his trainer asked him complex questions and he tapped his hoof to give the answer, he succeeded time and again. So amazing was Clever Hans that there was a formal investigation in 1907. Psychologist Oskar Pfungst demonstrated that the horse was not actually performing these mental tasks but was watching the reactions of his trainer. Clever Hans only got the answer right when the person asking the question knew the answer. He would tap slowly and the questioner’s anticipation would rise until Clever Hans tapped the final time, at which point the questioner would relax and Hans would stop.

Imagine how algorithms will be able to read body language. And the truths these will reveal.

The need for human-centred data sources
Just because we have a massive amount of data doesn’t mean that it is all useful.

Data sources that look through the eyes of the customer, to see how they encounter brands in the real world – which is very different from being forced to view an ad on a computer for research analysis – are crucial to helping marketers optimise their marketing and media budgets. Better still, if these data sources are longitudinal, we can pick out which brand encounters lead to other encounters. MESH Experience set out to create one such source, Real-time Experience Tracking (RET). Using this data source recently, we were investigating online purchases for a client and found that, on many occasions, people were buying brands they had never mentioned they were considering. In one instance the online purchase was prompted by seeing the brand in store six days earlier.

Using one data source to illuminate other data sources
We have always combined this Real-time Experience Tracking (RET) data with other sources, such as media spend, to look at cost per experience and ascertain media effectiveness. However, only recently are we seeing, firstly, how this data source can illuminate many others in an ecosystem and, secondly, how Artificial Intelligence (AI) can unlock insights never seen previously.

When you look at a data source it becomes the focus, which is both a good and a bad thing. For years, because we focused on measuring the effectiveness of TV advertising, we assumed that this was the key marketing lever for success – and, often, that our brand metrics, like consideration, were driven by advertising. We now know that other paid, owned and earned brand encounters can have a more profound impact. A personal interaction with branch staff or trying a new drink in an on-premise environment can be the key to changing behaviour and attitudes. Sometimes it can be tempting to look at social media data, like Twitter, and assume this is how everyone is experiencing the brand.

But perhaps social media only accounts for 5% of people’s experiences with your brand, while encountering the brand in real life accounts for 25%. Figure 1 illustrates how people were encountering Heineken and its competitors. TV was important, but so were In Store, On Premise and At Home.

By taking the Real-time Experience Tracking data, which looks through the customers’ eyes, we can put all our other marketing data streams into context. We can see that TV experiences account for about a quarter of all experiences, with paid TV slightly less than this (earned TV accounts for some too – say 5%). Within paid TV, there is the manufacturer’s ad (e.g. Heineken) and the retailer’s ad featuring the brand (e.g. Coles – say 5% of experiences). What’s more, we might know (and we have seen this across studies) that seeing the brand featured in a retailer ad can have a significant impact on brand consideration. Of that quarter of TV experiences, about 15% of all experiences relate to the manufacturer’s TV ad. This means that we can look at the detailed data, such as GRPs and ad recall, in this context.
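As a minimal worked sketch of that contextualisation – using only the illustrative shares quoted above (the “say 5%” figures), not actual study results – the TV share can be decomposed like this:

```python
# Illustrative only: these shares are the indicative figures quoted in the
# text above, not real client data.
experience_shares = {
    "tv_total": 0.25,              # all TV experiences (~a quarter)
    "tv_earned": 0.05,             # earned TV, e.g. seeing the brand in a programme
    "tv_paid_retailer": 0.05,      # retailer ad featuring the brand (e.g. Coles)
    "tv_paid_manufacturer": 0.15,  # the manufacturer's own ad (e.g. Heineken)
}

# The earned and paid components add back up to total TV experiences, so
# detailed metrics such as GRPs and ad recall are read against a ~15% share.
components = (experience_shares["tv_earned"]
              + experience_shares["tv_paid_retailer"]
              + experience_shares["tv_paid_manufacturer"])
assert abs(components - experience_shares["tv_total"]) < 1e-9
```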

Using Artificial Intelligence (AI) to unlock new insight within data sources
We have thousands of comments in any MESH study, each relating to an experience. Whilst we have always used these for qualitative analysis, we can now use AI to uncover new insight. Working with Signoi, we discovered that the topics created by AI from the comments have very different levels of engagement (see Figure 2). We can analyse all touchpoints, from paid advertising to conversations with friends, or pick out only Brand A’s TV advertising to see the topics that resonate most strongly. In the example below, we know that we should use messaging around topic 4.
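Signoi’s own pipeline is proprietary, so the following is only a sketch of the general idea – deriving topics from experience comments with a generic open-source approach and comparing the participant-rated engagement of each topic. The file and column names ("ret_comments.csv", "comment", "engagement") are hypothetical:

```python
# Illustrative sketch: topic modelling of experience comments plus an
# engagement comparison per topic. Not Signoi's actual method.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = pd.read_csv("ret_comments.csv")        # hypothetical export of RET comments

vectoriser = CountVectorizer(stop_words="english", min_df=5)
X = vectoriser.fit_transform(comments["comment"])

lda = LatentDirichletAllocation(n_components=8, random_state=0)
topic_weights = lda.fit_transform(X)              # one row per comment, one column per topic
comments["topic"] = topic_weights.argmax(axis=1)  # assign each comment its dominant topic

# Compare participant-rated engagement by AI-derived topic
print(comments.groupby("topic")["engagement"].mean().sort_values(ascending=False))
```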

Because we already have participants’ own evaluations of the experiences, we can quickly see which topics are most positive and persuasive. These evaluations also help to validate the AI’s coding: experiences coded as negative by our participants come out as Sad, Fear, Disgust and so on through Signoi.

We can also look at the AI topics by touchpoint to see if some of the most engaging topics are being communicated through certain touchpoints. For example, does topic 4 mainly consist of TV advertising comments or conversation comments or retail display comments? And by applying logistic regression we can see which topics are changing people’s perceptions, because we know a participant’s brand consideration before and after they reported their experiences. AI can completely refresh an existing data set.
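That regression step could be sketched as follows, assuming a hypothetical participant-level table with one column per AI topic (how often that participant’s comments fell into it) plus pre- and post-wave consideration scores:

```python
# Illustrative sketch: logistic regression relating topic exposure to an
# uplift in brand consideration. The data layout is a hypothetical
# simplification of a RET participant-level dataset.
import pandas as pd
import statsmodels.api as sm

panel = pd.read_csv("ret_panel.csv")              # hypothetical participant-level export
topic_cols = [c for c in panel.columns if c.startswith("topic_")]

# Outcome: did consideration rise between the pre and post waves?
panel["uplift"] = (panel["consideration_post"] > panel["consideration_pre"]).astype(int)

model = sm.Logit(panel["uplift"], sm.add_constant(panel[topic_cols])).fit()
print(model.summary())   # positive, significant coefficients flag topics
                         # associated with a consideration uplift
```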

A new data source and its place in the ecosystem
The second example relates to a new data source. Many of our clients are spending $billions on retail display. They know how much they spend and what sales they generate, but they don’t have interim measures, such as how many people come into contact with the display and how engaged they are with it. Working with Emotion Research Lab, we have created a solution to address this. We put cameras into the display. These do not record video of people, which avoids privacy issues as no personal data is captured. Instead, the camera and algorithm count the number of faces, the time those faces spend looking at the display and the emotion they express. We can now pick up the interim data of how many people were reached and with what reaction.
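Emotion Research Lab’s production system is proprietary, but the face-counting part of the idea can be sketched with OpenCV’s bundled detector – keeping only counts per frame and never storing an image, in line with the no-personal-data approach described above. Emotion classification is left out of this sketch:

```python
# Illustrative sketch: count faces per frame with OpenCV's bundled Haar
# cascade. No frames are stored, mirroring the "no personal data" approach.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)      # hypothetical camera mounted in the display
face_counts = []

for _ in range(1800):          # e.g. one minute of footage at 30 fps
    ok, frame = cap.read()
    if not ok:
        break
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    face_counts.append(len(faces))   # keep only the count, never the image

cap.release()
if face_counts:
    print("average faces per frame:", sum(face_counts) / len(face_counts))
```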

We are also combining the camera ethnography data with other data sources, such as sales. For one client we saw a very strong correlation between sales and the numbers of people viewing the display.
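A minimal sketch of that combination, assuming hypothetical daily exports of viewer counts and sales:

```python
# Illustrative only: file and column names are hypothetical.
import pandas as pd

views = pd.read_csv("display_views_daily.csv", parse_dates=["date"])  # date, viewers
sales = pd.read_csv("sales_daily.csv", parse_dates=["date"])          # date, units_sold

merged = views.merge(sales, on="date")
print(merged["viewers"].corr(merged["units_sold"]))  # Pearson correlation
```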

However, the data doesn’t answer everything! So, we are combining observation and exit interviews with the camera ethnography. Someone may have stopped at the display and purchased the product, but how has this changed their perception of the brand? Survey data via exit interviews can help to answer this.

We are right at the beginning of this exciting journey with algorithms, and they are being continually improved. We are making assumptions that may not be correct and will certainly be refined over time. There has been much research to suggest that positive emotions lead to positive brand outcomes, so we look for advertising to create a positive/happy impression. The algorithm we use picks up emotion, and we report on the number of people who exhibit a happy emotion.

However, over time, I am sure that we will uncover the optimum emotional signature – the sequence and combination of emotions that lead to a sale or the change in perception of a brand. Remember Clever Hans! He could detect that just before he had tapped his hoof to the correct mathematical answer, the questioner would become tense. When Hans tapped his last tap to signify the correct answer, the questioner relaxed. We may find that the ideal emotional signature is surprise before happiness, for example.

In summary
Let’s remember the following when it comes to turning data into insight:
• Speed – not only are things changing exponentially, but if we harness data through tech solutions we should be able to generate insight incredibly fast.
• Use new data sources – go beyond the traditional survey or focus group and look for data that really adds value.
• Learn from other data sources and industries – these can inspire us to create more valuable data sources to answer marketers’ business questions.
• Blend and fuse – our skill as alchemists will help us to get closer to the truth.

In many ways artificial intelligence is surpassing human beings – particularly when it comes to speed. A doctor may only be able to see thousands of patients with a particular condition, whereas a computer can analyse trillions of pieces of data. Our technical partners, Emotion Research Lab, work in the medical and mobility fields. And the research industry has much to learn from the innovations happening in other fields. Our data sources are changing.

We have been blending research methodologies for some time now. However, the data blending and fusion that we will see in the next five years will make what we have done in the last five years seem like child’s play.

Author: Fiona Blades, President and Chief Experience Officer, MESH Experience

*http://tlc.client-projects.net/results/

This article also appears in the May-July 2019 edition of AMSRS publication, Research News – State of play. Check out the rest of the articles in this edition.

