
David Boyle on ChatGPT and using it in our market research roles

Written by Infotools | 07 Mar 2023

We welcomed David Boyle, co-author of the book PROMPT: A practical guide to brand growth using ChatGPT, to our podcast. In the book, Boyle and his co-author draw on their combined 35 years of brand-building expertise to explain how ChatGPT can help with each step of the brand-building process. Boyle said, “We took a few days and went through everything we did for a typical brand building process – from understanding audience needs, segmenting, targeting and positioning and building innovations”, and it is some of this work on which he based the book. He calls the technology “remarkable”, while also explaining some of its limitations and why humans will always have a primary role in marketing and research.


He talks more with us about the power of generative AI, giving several examples of how he has put it to work on real projects. Boyle shared a recent project in which he helped a client plan marketing activities for a celebrity tour. The AI technology helped the team create rich audience segments, build an understanding of those segments, and identify opportunities to position the tour events differently for each one. Boyle said he has seen multiple ways in which this technology can fast-forward research and analysis that would have taken weeks in the “old world” and provide almost instant results.

While we often call generative AI “ChatGPT” in conversation (much as we call tissues “Kleenex” or bandages “Band-Aids”), Boyle points out that more sufficiently advanced AI chatbots will be coming to the mainstream. We need to learn to make this technology work for us, and it will get better the more people use it. He says we need to ask ourselves, “how do we make it work and use it to be as useful as possible to us? The whole idea of training is absolutely key because the more data sets it has, the better and more concise, representative answers it will give.” This ties into the fact that humans are, in effect, “training” this technology.

Boyle said that striking the right balance between computation, automation, and humanity is still important. Obviously, for ChatGPT to start a task, it needs human input. The person using the tool needs the skill and expertise to frame the task and, once they receive the output, must decide what to use, remove, edit, or re-word – and that level of human intervention is critical. “This is all human judgment and expertise. That will never go away.” He says the output you end up with is driven entirely by the user: starting, editing, iterating, and judging whether it is publishable or usable. The work is simply aided by ChatGPT, just as other, more mainstream technology or our team members aid us in doing our jobs every day.

As researchers, we generally start a qualitative or quantitative project, follow respondents through the process, analyze the data, interpret it, write a report, and conduct quality checks – all so we can attest to, and follow, the evidence chain showing that the data is accurate. ChatGPT itself doesn’t allow this level of auditability. Right now, it can come up with audience segments, trends, and even insights, but researchers must treat its output as a hypothesis to be tested in the real world and “validated using your industry expertise.” He maintains that human skills and expertise are not in danger, but people do need to work out how to make generative AI as helpful as possible in their jobs.