

Creating AI moderators: are we really turkeys voting for Christmas?

Critics accuse agencies developing AI solutions of stuffing the industry from within. Pui-Tien Man suggests establishing the right rules to ensure we don’t replace the researcher but instead unleash their potential.

When Simpson Carpenter launched its humanoid avatar moderator, Quasai, earlier this year, the reaction was swift. The goal was to create new value and expand research capabilities, but some commenters saw it as an act of self-sabotage. We were, one wrote, “turkeys voting for Christmas.”

This existential fear is understandable, especially when headlines keep attributing corporate job cuts to AI. But the “turkey” argument does researchers a gross disservice by presuming they are mere data collectors rather than critical sense-makers. It also overlooks the most important people in any market research study: the participants.

Change is scary, and progress uncomfortable. But people forecast the death of face-to-face research when telephone interviews arrived, and online panels were predicted to automate us all out of a job. Using AI for data collection is inevitable, because its efficiencies are unparalleled.

The critics do have a point: reckless adoption is a death sentence, and it is dangerous to simply hand the driving seat to the algorithms. That’s why, when building Quasai, we refused to put efficiency first. Simpson Carpenter is a trusted traditional agency, not a tech start-up: we have a reputation to protect. We established four governing principles to ensure we enhance our craft rather than dismantle it:

1. Redefine, rather than replace, the researcher.

First, we must accept that AI does change the job description. AI agents can automate processes, but they cannot yet replicate human vision or instinct. Our value shifts upwards: from gathering data to acting as insight architects and engineers. We must judge what matters, shape hypotheses, and act as the arbiters of rigour, even when the goalposts shift.

As possibly the last cohort to manage an all-human workforce, we have a responsibility to the next generation to teach director-level skills as early as possible, because tomorrow’s researchers won’t just be running projects: they will be orchestrating a suite of human and AI tools to meet strategic goals.

2. Use AI with intention.

Second, AI cannot be a cheap and cheerful shortcut to achieve the same ends. It should be deployed selectively. We look for gaps where scale, scope, cost, or timing constraints would have rendered a project unfeasible – like running over 500 20-minute deep-dives in five languages with a consistent moderation style. If AI allows us to answer questions we previously couldn’t afford to ask, it’s an innovation. But if it just churns out generic data for less money, it is a race to the bottom.

For example, we developed a hyperreal video avatar not for hype, but because faces command attention. We’re not using AI to mimic humans – we’re using it to unlock behaviours that human moderators cannot at this speed and scale. We are transparent upfront that the moderator is AI, and paradoxically this transparency, combined with our psychological tendency to trust what looks like us, creates a safe space where people don’t feel judged for their opinions. Every design choice must serve a clear research purpose like this, with integrity as a core tenet.

3. Adopt a developer mindset.

Third, we need to reassess our predilection for perfection. Researchers are trained to be risk-averse and precise, but the speed of tech development means we cannot wait for a finished product. With new models being released at speed, there is no finish line. At Simpson Carpenter, we’ve had to get comfortable with iteration and to factor in the constant updating of our tools as foundation models improve.

4. Prioritise the participant experience.

Finally, we need to meet people where they are. Our industry has long battled declining data quality, attention spans, and response rates. We need to design research that reflects modern culture and work to include underrepresented or hard-to-reach groups – whether that’s night-shift workers or the growing number of neurodivergent people who find interacting with AI easier than with a human interviewer.

Study after study shows that people are often more willing to be emotionally vulnerable with AI. An analysis of 100 trillion tokens by OpenRouter found that 52% of all open model usage is roleplay. Similarly, a National Bureau of Economic Research paper showed that “self-expression” – including chitchat, personal reflection, and roleplay – is a growing part of how people use AI tools for non-work purposes.

Our own studies support this. Of almost 900 participants, 83% said they felt more comfortable sharing their opinions with our avatar than with a human. For topics where controversial views might shape decisions, an impartial AI moderator can be a gateway to uncomfortable truths. It meets people where they are: on a screen, anonymous, and in control.

Does that make us turkeys voting for Christmas?

We don’t build tools to make ourselves obsolete, nor to jump on the latest bandwagon. We build them to ensure research thrives, by pushing our craft into new territories. And for all this talk of tech replacing us, here we are, creating new ways to harness it to our advantage. After all, Christmas is coming anyway – surely the greatest risk is making like ostriches and pretending otherwise.