
Ray Poynter on Leadership in Insights, and AI

We welcomed Ray Poynter back to the podcast to discuss some of the unique challenges facing heads of corporate insights teams. Ray is the Managing Director of the Future Place, founder of NewMR, ESOMAR Council President for 2023-2024, and leader of the ESOMAR AI Task Force. He says that Chief Insights Officers need to be leaders and facilitators, coaching their teams and encouraging best practices. As the order-taker role continues to decline with the rise of super-smart self-serve tools, insights leaders need to move toward coaching and championing best practices. This shift reflects a transition toward classic leadership in the field.



Guest host Geoff Lowe and Ray then turn to a hot topic in the industry: data quality, which remains a critical aspect of the insights landscape. Ray points out that online data has had quality issues for decades, and he cites statistics and specific challenges that researchers continue to face. He also talks about the Global Data Quality initiative, a consortium of industry associations from around the globe that aims to improve data quality across the industry.


Corporate insights leaders need to ensure the quality of the data they use on behalf of their clients and stay aware of the ongoing issues affecting quality, from the way surveys are written and deployed to the impact of AI on the process.

The conversation then turns to AI, and Ray covers a few ways he has seen the technology work well, along with other places where we should exercise caution. Overall, he thinks adoption is too slow. "A lot of organizations have simply banned the use of generative AI internally for all members of staff, including the insight teams, and I can see why they're doing that." At the same time, innovative companies are using AI solutions effectively to enhance what they are doing, and Ray gives some very specific examples of AI handling tasks like improving survey questions and writing.

Returning to the role of insights professionals, Ray says, "We need to be working on focusing on the shift from research questions to business questions. So traditionally, in insights, we've focused on research questions, but actually we're trying to help companies make better decisions. Almost no corporation has a 'market research project'; they have a new drink project, they have a new movie project, they have a new railway building project, and they need to answer some questions, and that's when we bring in insights to help make those decisions. So really focusing on how we help the organization make better decisions, what are the problems they have?"

Geoff and Ray discuss the future of the space, and how roles will change. As the industry progresses, insights professionals must concentrate on understanding their organizations better and determining how insights can enhance decision-making. This shift toward more human-centric thinking is expected to remain relevant for the next five to ten years, even as discussions about more advanced AI persist. 

The "AI revolution" can be compared to other massive changes in history, like the Industrial Revolution, which had both negative and positive impacts on society and the economy. There is potential for disruption and for people to lose their jobs. The percentage of people affected may be small in the broader context, but it is significant for those directly impacted. In the market research industry, for instance, automation has already displaced many jobs, and AI could do the same for insights professionals. Screenwriters in the advertising industry have already seen AI-generated content replacing their work. While AI empowers creativity in some areas, it eliminates jobs in others.

Shifting the focus back to AI in our industry, the podcast discussion explores ethical concerns surrounding synthetic responses and AI-generated data, both of which are on the rise. Synthetic data is not entirely new; it has been used in various forms before, but its adoption is now more widespread. The challenge lies in assessing the reliability of synthetic data, as it can be hard to verify. Insights professionals also need to be mindful of transparency, communication of confidence levels, fair use, and more. For example, a corporation using external data to build reusable AI tools raises ethical questions. This parallels the issue of AI using copyrighted content, such as books, for commercial purposes without permission.

In conclusion, there is much to explore and navigate in the realm of AI and synthetic data. As we move forward, it is important to clarify the ethical and legal boundaries, even though these complex topics can be challenging to understand.
