
Why sample will become an increasingly valuable asset, with Karine Pepin

This episode of Now That's Significant, a market research podcast, sees a familiar face return to the show: Karine Pepin, Senior Vice President at 2CV. Amidst the data quality issues the industry is grappling with, Karine suggests that completely rethinking how we view sample is a great first step toward improvement.


So, Karine, what’s the most significant thing you are going to tell us today?

Researchers need to adopt a new approach to survey research by treating sample as a reliable asset rather than a commodity. Changing this mindset is a long road, and it will have profound implications for the entire ecosystem.

For those who have been living under a rock in recent times, what can you tell us about the problem and how we’re currently dealing with it?

Data quality has been the topic of much discussion in the market research industry for the past few years. An enormous amount of time and resources is devoted to improving data quality throughout the project lifecycle. And while it's hard to put a price tag on these combined efforts across the ecosystem, the total would doubtless be staggering.

Why isn’t it enough to deal directly with these issues the way we currently are as an industry?

Despite the quality measures we generally have in place (fraud detection software, in-survey checks, post-survey clean-up), it is a constant game of cat and mouse.

As researchers become more sophisticated in our fraud detection, fraudsters become more adept at circumventing those measures. Data cleaning (manual or automated) is a band-aid fix rather than a true solution to this data quality crisis, a crisis from which even AI cannot rescue us.

We often hear that there is no "silver bullet" solution to the data quality challenge. But what if the solution is not a shiny new tool to fix what we have, but rather a different approach to conducting survey research?

Ok, so let’s dig into this now. What makes you think that we’ve been treating sample as a commodity up until now?

Over the years, automation and programmatic technologies have led to efficiencies that have commoditized sample and, in turn, commoditized people.

The race to maximize traffic and minimize cost cannot go on forever.

It is becoming painfully clear that this critical part of the research process deserves more attention and respect. Multimillion-dollar decisions hinge on participants' self-reported identities, behaviors, and preferences; in other words, they depend on the quality of sample.

What exactly do you mean by treating sample as a reliable asset?

To truly improve data quality, we need to disrupt the current mindset of treating sample as a commodity. Researchers need reliable sample that they can trust, which begins with faith in the authenticity of the participants. In B2B research, some expert networks have solved the lack-of-faith issue by validating the identity of the research participants.

B2B sample collected from expert networks is highly reliable compared to traditional online panels. In addition to validating the identity of the participants, better targeting and higher incentives encourage people to give more thoughtful answers. As a result, the data quality is exceptional, requiring minimal manual data clean-up.

So, what's the catch? Expert networks cost more and are slower to collect data than traditional online panels, but with good reason.

That sounds great from a B2B perspective, but how do you see this scaling up for consumer research?

With the large sample sizes expected in consumer research, the challenge is to replicate the B2B model on a much larger scale. Social media recruitment and mobile panels are examples of approaches that can help us authenticate the identity of participants.

Social media recruiting developed a negative reputation some years ago because of its association with river sampling, which was deemed an inferior alternative to well-managed, double opt-in panels. However, the reality now is different, as high-quality, double opt-in panels constitute a much smaller portion of the supply.

With appropriate quality measures in place, social media recruiting, in which each participant is linked to a profile, provides an added layer of assurance that participants are who they say they are.

Similarly, mobile panels offer the advantage that each participant is tied to a mobile phone, which enables their behavior to be tracked and provides assurance that they are genuine individuals.

By hyper-targeting based on real-world behavior such as app usage, website visits, and purchases, researchers can be confident that participants are not misrepresenting themselves to qualify for a study.

Let's take a step back here: what impact do you think it would have on research practices if we as an industry adopted this sample-as-an-asset mindset?

To move beyond our sample-as-commodity mindset, we must significantly change our research practices. As we know, we can have fast, cheap, or good, but not all three at the same time. Better quality means that we'll inevitably have to compromise on cost and speed.

Validated sample sources will likely be more expensive and yield lower feasibility, making each participant a truly valuable asset.

For this approach to be viable, researchers will need to drastically change the way we've been doing survey research. We'll adjust our sample sizes downward, ensuring that we get what is statistically needed rather than opting for larger, cheaper, low-quality samples. We'll be extra careful in designing a shorter, better survey experience to minimize participant dropout. We may also contemplate keeping incomplete data.
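To make "what is statistically needed" concrete, here is a minimal sketch of the standard sample-size arithmetic for estimating a proportion. The 95% confidence level, the ±5% and ±3% margins, and the worst-case p = 0.5 are illustrative assumptions, not figures from the episode.

```python
from math import ceil

def required_sample_size(margin_of_error: float, z: float = 1.96, p: float = 0.5) -> int:
    """Minimum completes to estimate a proportion within +/- margin_of_error.

    Standard formula n = z^2 * p * (1 - p) / e^2. Using p = 0.5 assumes the
    worst case (maximum variance), so the result is conservative.
    """
    return ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

# ~385 completes deliver a +/-5% margin at 95% confidence (z = 1.96);
# tightening to +/-3% takes ~1,068, not tens of thousands.
print(required_sample_size(0.05))  # 385
print(required_sample_size(0.03))  # 1068
```

The point of the arithmetic is that a few hundred validated completes already buy ±5% precision at 95% confidence, which is why a smaller, trustworthy sample can stand in for a larger, cheaper, low-quality one.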

Interesting point you made about smaller sample sizes. It made me think of a previous podcast with John Gercaci on the issue with polls, political polls at that, and his argument that the way to turn around the inaccuracies of US polling was to improve representation by increasing sample size.

Given that these are two different types of surveys, with differing suggestions for fixing their issues, you can see why progress has been slow. Does this suggest the need for industry standards?

I listened to the podcast with John, and he makes some great points. He's approaching the issue from a different angle, focusing on the 95% who don't even get into the survey and the non-response bias that results from having only the remaining 5% take surveys.

He also gives a great reminder that it wasn't always that way (going back to CATI). He talks about how the technology is hurting us now, and I agree with him: it makes sample cheaper and faster, but not better.

At the end he says, "Imagine a world where every person invited to a survey agreed to participate and give honest answers; we wouldn't have any data quality problems." I don't know that we can get everyone to agree to take a survey, but giving honest answers shouldn't be so hard.

Yes, I think we need industry standards, but more importantly we need alternatives, so we can make a real choice when we buy sample. There is a need for someone to come in and disrupt the sample industry with an alternative offering that isn't traditional online panels.

Any final words before I close, Karine?

Data cleaning is not a sustainable solution to the data quality crisis we're in. As with the quality of ingredients in a dish, the foundation of good-quality data is in its source.

Of course, we didn't arrive at this crisis overnight; sample has been devalued over the past decade in favor of cost and speed. Disrupting the sample industry also means disrupting our research mindset to accept smaller sample sizes, higher CPIs, more time spent on designing an engaging experience for participants, shorter surveys, and longer fieldwork times. It's a trade-off, and it's worth it.

So, to quickly recap, we talked about…

How market researchers need to rethink how they view their sample, treating it not as a commodity but as an asset. By taking this view, market researchers are better able to improve the quality of the data going into the analysis process, rather than trying to fix what's broken once it's already in play.

The intent is obviously not to dismiss the value and role of data cleaning tactics like fraud detection software and in-survey and post-survey checks, but cleaning up sample before a question has even been asked is far more valuable for all involved.

Karine discussed how being far more intentional about whom we recruit can be a big aspect too, with a potential swing back towards a previously written-off channel: social media.

We talked about how we can start targeting the 95% of people who don't engage in market research, and how we might reach them in order to improve our representation.

Ultimately, taking time to disrupt our ways of working and improve our overall attitude and approach to sample is worth the hassle. We’ll all be better off for it.

I think that covers it, right Karine?

Ultimately, the quality of our research depends on the quality of the data we collect, and at this point, we have no choice but to prioritize sample quality to ensure a sustainable and thriving industry.

A big thanks for joining us on today’s episode, Karine.

Thanks for having me.

For you listeners out there, we'd really appreciate it if you subscribed to the podcast, shared this episode with others, or left a review. Thanks for listening to Now That's Significant.
