
Podcast: Lynn Pellicano on Market Research Sample Quality

Simon-Kucher & Partners' Lynn Pellicano joins the podcast to talk about practical ways to improve sample quality. From pre-survey tactics and better survey design to in-field monitoring, learn how to get better market research outcomes.


Lynn Pellicano, Senior Market Research Manager at Simon-Kucher & Partners, joined us on our podcast to talk about why it is "time we stopped cleaning our sample data." Simon-Kucher & Partners is regarded as the world's leading pricing advisor and thought leader, with over 1,600 consulting professionals worldwide. During the interview, we talk with Lynn about a persistent issue in the market research industry: sample and data quality. She says this significant, endemic challenge is worth revisiting because we need to keep addressing it with a variety of techniques and methods that deliver better outcomes.
Lynn says that sample quality issues are especially prevalent in B2B sample, where she conducts much of her work, though B2C suffers as well. She talks specifically about the fact that opt-in panels are no longer sufficient for niche audiences. Approaches like custom recruitment (phone is back!), verified online panels, expert networks, and more are better sources for highly nuanced audiences. Paying more up front for higher-quality sample that fits the needed audience profile saves data-cleaning time on the back end, leaving more time for uncovering the insights that matter. She touches on specific examples where data cleaning could affect client delivery timelines, reiterating the need for clean, quality sample from the get-go.

We talk with her about practical things that can be done to improve quality, such as shorter surveys, device-agnostic survey design, and multiple recruitment methods to find audiences that may not be prolific online. Zooming out, she says that to improve sample pre-survey, we should use more verified methods to recruit respondents (to ensure people are who they say they are) so the right people enter the survey in the first place. We should also employ fraud-mitigation technology that screens out "bad" respondents through a variety of advanced, data-driven methods, using signals like geolocation, language, and IP addresses to keep the wrong people out or confirm the right ones before they enter.
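To make the pre-survey screening idea concrete, here is a minimal sketch of the kind of signal-based entry check described above. The field names, thresholds, and rejection reasons are our own illustrative assumptions, not details from the podcast or any specific fraud-mitigation vendor:

```python
def screen_entrant(entrant, seen_ips, allowed_countries, expected_lang="en"):
    """Decide whether a respondent may enter the survey.

    `entrant` is a dict with hypothetical keys: 'ip', 'geo_country',
    'claimed_country', and 'browser_lang'. Real fraud-mitigation
    platforms expose far richer signals than this sketch.
    """
    # Duplicate IP: the same address entering twice suggests a repeat taker.
    if entrant["ip"] in seen_ips:
        return False, "duplicate IP"
    # Geolocation outside the target market.
    if entrant["geo_country"] not in allowed_countries:
        return False, "outside target geography"
    # Geolocation contradicts the country the respondent claimed.
    if entrant["geo_country"] != entrant["claimed_country"]:
        return False, "geo/claim mismatch"
    # Browser language inconsistent with the survey's target language.
    if entrant["browser_lang"].split("-")[0] != expected_lang:
        return False, "language mismatch"
    seen_ips.add(entrant["ip"])
    return True, "ok"
```

In practice these checks run before the screener, so rejected entrants never consume survey completes or cleaning time.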

To expand on a point that Lynn believes is critical to sample quality, we dive into the specifics of the "screener." She helps consultants across the Simon-Kucher ecosystem with this piece quite often, as it is one of the "best defenses against poor quality data." In her opinion, no amount of quality checks is as effective as a screener that can remove respondents who are not a good fit or are not desirable for a survey. She advises starting broad and narrowing down from there, being careful not to make assumptions about the people starting a survey. Confirming screening criteria, on top of the panel profiling that's available, is always a good idea. We discuss some very specific questions to consider, as well as finding the balance between the need for screening and the need for shorter surveys overall.

Consistent, efficient, and accurate data quality checks and field monitoring, led by knowledgeable project managers, are always important. She gives examples of how this works, what it should look like, and how it can keep surveys on track while they're in field, avoiding data quality issues at the end. This stage of the process can include looking for speeding, straight-lining, bad open-end responses, and red-herring question responses, plus numeric outliers and contradictory or improbable responses.
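The in-field checks above can be sketched in a few lines of code. This is a simplified illustration under our own assumptions (the response fields, the 40% speeding threshold, the z-score cutoff); real fielding platforms tune these rules per study:

```python
from statistics import mean, stdev

def quality_flags(resp, median_duration):
    """Return in-field quality flags for one response.

    `resp` uses illustrative keys: 'duration' (seconds), 'grid'
    (answers to one rating grid), and 'open_end' (free text).
    """
    flags = []
    # Speeding: finishing in well under half the median time is suspect.
    if resp["duration"] < 0.4 * median_duration:
        flags.append("speeding")
    # Straight-lining: identical answers across an entire rating grid.
    if len(resp["grid"]) > 3 and len(set(resp["grid"])) == 1:
        flags.append("straight-lining")
    # Low-effort open ends: empty or near-empty free text.
    if len(resp["open_end"].strip()) < 5:
        flags.append("bad open end")
    return flags

def numeric_outliers(values, z=2.0):
    """Flag values more than `z` standard deviations from the mean."""
    m, s = mean(values), stdev(values)
    return [v for v in values if s and abs(v - m) / s > z]
```

Running checks like these daily while a survey is in field lets a project manager replace flagged completes early, rather than discovering holes in the data after fielding closes.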

Make sure to listen to the full podcast for detailed information on how you can improve quality and make your processes more efficient, as well as our musings on what would happen to quality and insights overall if we were to get everything right in the sample space (as improbable as that may be).
