419 Researchers Tried to Ban AI from Qual. A CEO Responded. Here's What Everyone Is Missing.
Four hundred and nineteen qualitative researchers from 32 countries just signed an open letter declaring that generative AI is "inappropriate in all phases of reflexive qualitative analysis." Not some phases. Not with guardrails. All phases. Always. The conversation, apparently, is over.
Let me say that again. Four hundred and nineteen researchers whose entire professional identity is built on questioning assumptions, examining positionality, and sitting with complexity responded to AI by issuing a categorical statement that refuses to question its own assumptions.
The people who teach reflexivity produced the least reflexive document I've read this year.
And I am a regular on LinkedIn.
The Letter
Before we get into what's wrong with it, let's be fair about what it actually says, because roughly 90% of the people arguing about it online haven't read past the headline.
The letter targets a specific category of qualitative work: "Big Q" reflexive approaches like reflexive thematic analysis, phenomenological methods, ethnographic analysis, and discourse analysis. It is not a blanket ban on AI in all research. It's not about quantitative methods. It's not even about all qualitative methods. It's about reflexive approaches that depend on the researcher's subjectivity and positionality as a feature, not a bug.
It makes three claims.
One: GenAI is simulated intelligence. It predicts tokens, not meaning. Therefore it cannot do reflexive qualitative analysis, which requires genuine meaning-making. The letter warns that AI's algorithmic patterns predispose it to "identify, replicate, and reinforce dominant language," risking "the further quieting of marginal voices."
Two: Qualitative research is a distinctly human practice, done by humans, with or about humans, for the benefit of humans. Only a human can do reflexive analytical work, and therefore AI is inappropriate at every stage, including initial coding.
Three: GenAI carries serious ethical harms. Data centers consume staggering amounts of energy and water. Workers in the Global South who train and moderate these models face exploitation and psychological harm. As qualitative researchers concerned with social justice, the signatories argue, we should reject tools built on these practices.
The signatories include Virginia Braun and Victoria Clarke, whose reflexive thematic analysis framework has been cited nearly 300,000 times. When they put their names on something, people don't just listen. They change how they work. That kind of reach matters.
Now. The ethical concerns in point three are real. The environmental costs of AI are real. The labor exploitation is real. Any researcher with a functioning conscience should grapple with that.
But the letter doesn't ultimately rest its case there. It rests on an ontological claim: meaning-making is human, AI is not human, therefore AI has no role. Full stop. Roll credits.
But that's not an argument you can engage with. It's a boundary, and boundaries aren't meant to be debated. They're meant to be enforced. Which is exactly how the letter reads. And there's something deeply ironic about researchers who built their careers on interrogating assumptions choosing to plant a flag and refuse to interrogate this one.
The Rebuttal
Times Higher Education published a response by James Goh arguing the letter is absolutist, rooted in identity defense rather than evidence, and disconnected from what researchers actually find when they use AI carefully.
Goh makes three points worth taking seriously, even if you have to hold your nose while reading them.
Absolutism is a weak move when the world is messy. "Never" statements shut down legitimate nuance. A categorical stance also conveniently avoids the harder work of defining boundaries, disclosure norms, and responsible practice. It's much easier to say "no" to everything than to do the intellectual labor of figuring out where the line actually is.
The scale argument is real. Goh cites an Arizona State study involving 371 transcripts. The researchers themselves noted it would have taken two research assistants the better part of an academic year to analyze without AI. If the only acceptable qualitative analysis is the version that requires massive labor, then only well-funded institutions get to do it. That pushes out under-resourced teams and communities, including much of the Global South the letter claims to defend. The irony is thick enough to spread on toast.
The best use case for AI is friction, not efficiency. This is his strongest claim. In the same study, the interesting finding wasn't the 96% agreement rate between AI and human analysts. It was the 4% where they disagreed. In one instance, the AI classified a student's personal anecdote as legitimate evidence, something the human analysts had dismissed. That disagreement forced the researchers to re-examine their own assumptions about what counts as reasoning. The AI didn't make meaning. It created a moment where the humans had to be more explicit about their meaning-making. That's not replacement. That's a stress test.
But Goh's piece has problems too, and they're significant.
His framing of the letter as "identity defense" is too simple. Yes, there's identity at play. But the dynamic is also institutional. People choose blunt rules when they can't enforce nuance. If you're a department chair and you know that half your grad students will use ChatGPT to "analyze" their interview data and then dress it up with reflexivity language, a blanket ban starts looking less like ideology and more like risk management. It's governance by exhaustion. Not admirable, but understandable.
His examples don't settle the core conceptual objection. Agreement rates and large-scale theme identification are not the same thing as interpretive responsibility. The Arizona State study demonstrates something specific: that AI can code transcripts with high reliability. That's useful. It's also not the same as claiming AI can participate meaningfully in the interpretive process. Goh's piece risks swapping the actual debate (interpretation and accountability) for an easier one (accuracy and efficiency). Those are not the same conversation.
And then there's the thing that everyone who read the piece noticed immediately. Goh is the CEO of AILYZE, an AI platform for qualitative research. He discloses this. He even acknowledges it. But disclosure isn't neutralization. Every person who reads that piece will discount some percentage of the argument because the author is selling the thing he's defending. That doesn't invalidate his points. But it explains why the letter's signatories aren't exactly rushing to reconsider. The conflict of interest is doing a lot of heavy lifting.
My Take
I run both qualitative and quantitative research at one of the largest tech companies on the planet, on timelines that would make most academics break out in hives. I take AI risks seriously. And I think this letter is the wrong response.
Here's why.
The real problem is not AI. The real problem is that most qualitative research is already a black box.
AI didn't create the accountability gap in qualitative work. AI just made it embarrassing.
The dirty secret of this field, and I say this as someone who loves qual and has done it for over a decade, is that "reflexivity" in practice often functions as a magic word that stops scrutiny rather than inviting it. Researchers invoke reflexivity like a talisman. "I was reflexive." Great. Show me. Show me the moment where you changed your interpretation because the data pushed back. Show me the counterevidence you wrestled with. Show me the alternative framing you considered and rejected, and tell me why you rejected it.
Most can't. Most won't. Not because they're bad researchers, but because the field has never built the infrastructure for making interpretive reasoning visible and inspectable. We have methods for doing the analysis. We don't have methods for showing the analysis in a way that anyone else can meaningfully evaluate.
And now a tool shows up that gives you a different answer and forces you to explain why yours is better. A tool that surfaces patterns you missed and makes you document whether they change your interpretation. A tool that confronts you with counterexamples and forces you to write down your reasoning for dismissing them.
That sounds a lot like reflexivity to me... just with receipts.
The thing the letter's signatories fear most, AI replacing human judgment, is actually less dangerous than the thing they've normalized: human judgment that can't be inspected.
A categorical ban protects comfort, not rigor.
The letter frames its position as protecting the integrity of qualitative research. But integrity isn't preserved by refusing to engage with difficult questions. Integrity is what you get when you engage with them and come out the other side with your reasoning intact and visible.
Four hundred and nineteen qualitative researchers, people who built careers on sitting with discomfort, interrogating assumptions, and resisting easy certainty, looked at the most complex methodological question of their generation and said: No. Absolutely not. We're done here.
The letter claims to protect marginalized voices while making decisions about those voices' research methods without including them in the conversation. That contradiction is worth sitting with.
The only question that matters is not about AI.
Stop arguing about whether AI can "make meaning." That question is a philosophical rabbit hole that will keep epistemologists employed for decades and produce exactly zero useful guidance for anyone trying to do research next Tuesday.
The question that actually matters: who owns the interpretive call, and can you show your work?
That's it. That's the whole thing.
Translate it into plain language and you get something any team can use. What was done by a person? What was assisted by a tool? What decisions were made, and why? What evidence supports the claim? What evidence contradicts it? What uncertainty remains?
If you can answer those questions for every insight in your work, you've achieved more accountability than 90% of qualitative research currently demonstrates, with or without AI.
If you can't answer those questions, then it doesn't matter whether you used AI, NVivo, a whiteboard, or tea leaves. Your analysis is a black box, and the fact that a human sat inside it doesn't automatically make it rigorous.
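To make that concrete, here is a minimal sketch of what an "insight record" answering those questions could look like. The structure, the field names, and the example content are my own illustration, not a standard proposed by the letter, the rebuttal, or any existing tool.

```python
# A minimal sketch of an "insight record" that answers the accountability
# questions above. Field names and example content are invented for
# illustration; nothing here is a required or standard format.
from dataclasses import dataclass, field
from typing import List


@dataclass
class InsightRecord:
    claim: str                       # what you are actually asserting
    done_by_person: str              # which analytic steps a human performed
    assisted_by_tool: str            # which steps a tool touched, and which tool
    decisions_and_rationale: str     # the interpretive calls made, and why
    supporting_evidence: List[str]   # quotes, codes, observations behind the claim
    contradicting_evidence: List[str] = field(default_factory=list)  # what pushes back
    remaining_uncertainty: str = ""  # what you still don't know, and how you'll check


record = InsightRecord(
    claim="Participants treat affordability as a trade-off against trust, not a standalone barrier.",
    done_by_person="Initial coding, theme development, and this interpretation.",
    assisted_by_tool="A language model re-coded the same transcripts as a consistency check.",
    decisions_and_rationale="Kept 'trust trade-off' over 'price sensitivity' because most participants tied cost to reliability concerns.",
    supporting_evidence=["P3: 'I'd pay more if I knew it wouldn't break.'"],
    contradicting_evidence=["P7 described cost alone as the deciding factor."],
    remaining_uncertainty="Sample skews toward long-tenured users; re-check with newer users.",
)
```

If every finding in a report came with a record like that, the question of whether a model touched the transcripts would be answerable in one line instead of one open letter.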
Where the Line Actually Is
Reflexive interpretation stays human. The moment you let a tool author the story of what the data "means," you've crossed the line. Not because the tool is stupid. It might not be. But because interpretive claims carry responsibility, and responsibility requires a person who can be questioned, who can explain their reasoning, and who can be wrong in a way that matters.
But tools can help with navigation, recall, consistency checks, and surfacing counterexamples. Used well, they create friction, not shortcuts. They surface the inconsistencies you'd rather not deal with. They ask the annoying question you forgot to ask yourself.
The line is not "allowed versus not allowed." The line is bounded use in a bounded loop.
You scope a narrow question. You collect a small dataset. You do your interpretive work, the human kind, the slow kind, the kind Braun and Clarke are right to protect. Then you use a tool to pressure-test it. Does it find patterns you missed? Does it flag contradictions in your coding? Does it surface an alternative framing that makes you uncomfortable? Good. That discomfort is the point.
Then you go back, revise if needed, document what changed and why, and move on.
Small loops. Explicit claims. Auditable reasoning. You can't write "participants expressed a nuanced relationship with affordability" and call it a finding. You have to say what you found, what confidence you have, what could be wrong, and what you're doing next to check.
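Here is one way the pressure-test step could look in practice: compare your own codes against a tool's codes for the same excerpts and surface only the disagreements for a human decision. A minimal sketch with invented data; the data structures are an assumption, not a prescribed workflow.

```python
# Minimal sketch of the "pressure test": diff human coding against tool coding
# and surface only the disagreements for human review. Example codes are invented.
from typing import Dict, List, Tuple


def disagreements(human: Dict[str, str], tool: Dict[str, str]) -> List[Tuple[str, str, str]]:
    """Return (excerpt_id, human_code, tool_code) wherever the two differ."""
    return [
        (excerpt_id, human[excerpt_id], tool.get(excerpt_id, "<uncoded>"))
        for excerpt_id in human
        if human[excerpt_id] != tool.get(excerpt_id)
    ]


human_codes = {"p3_q2": "trust trade-off", "p7_q2": "price sensitivity", "p9_q1": "habit"}
tool_codes = {"p3_q2": "trust trade-off", "p7_q2": "switching cost", "p9_q1": "habit"}

for excerpt_id, mine, theirs in disagreements(human_codes, tool_codes):
    # The human still owns the call: keep, revise, or reject, and write down why.
    print(f"{excerpt_id}: I coded '{mine}', the tool coded '{theirs}'. Revisit and document the decision.")
```

The point is not the diff. The point is that every line it prints forces you to write a sentence you would otherwise never have written.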
Stop Writing Manifestos. Do This Instead.
Enough with the open letters.
Stop issuing categorical bans and stop making vague promises. Both are lazy. Both avoid the real work.
Require disclosure of tool use in the methods section. Not buried in a footnote. Not hand-waved as "AI-assisted." What tool. What stage. What decisions it informed. What decisions it didn't. (I've sketched what that could look like below, after these recommendations.)
Reward teams that show uncertainty and counterevidence. The researcher who says "here's what I found, here's what contradicts it, and here's what I'm still unsure about" is doing better work than the one who delivers a tidy narrative with no loose ends.
Build audit infrastructure, not ideology. Train researchers to make their reasoning visible. Not everyone gets an academic year to sit with their data. That's not a moral failing. That's a resource reality.
And read the things you argue about. The letter targets reflexive qualitative approaches specifically. Half the discourse is fighting a ban that's broader than what the letter actually proposes. If the field can't accurately represent a one-page document, maybe we have bigger problems than AI.
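Here is the disclosure sketch I promised above. The structure and wording are my own suggestion, not a reporting standard, and the example values are invented.

```python
# Minimal sketch of a tool-use disclosure for a methods section: what tool,
# what stage, which decisions it informed and which it did not. Example values
# are invented; this is a suggestion, not an established reporting standard.
from dataclasses import dataclass
from typing import List


@dataclass
class ToolDisclosure:
    tool: str
    stage: str
    decisions_informed: List[str]
    decisions_not_informed: List[str]

    def to_methods_text(self) -> str:
        return (
            f"{self.tool} was used during {self.stage}. "
            f"It informed: {'; '.join(self.decisions_informed)}. "
            f"It did not inform: {'; '.join(self.decisions_not_informed)}."
        )


disclosure = ToolDisclosure(
    tool="A large language model (institutional deployment)",
    stage="a consistency check after human coding was complete",
    decisions_informed=["re-examination of three codes where the model disagreed"],
    decisions_not_informed=["theme development", "the final interpretation"],
)
print(disclosure.to_methods_text())
```

Four fields. That is the entire cost of doing better than "AI-assisted."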
Your Call
The open letter is a signal that people feel their craft is under threat. That feeling deserves respect.
But here's what I keep coming back to. The people who taught an entire generation of researchers to sit with discomfort, to question their own certainty, to resist the clean and easy answer, looked at the hardest methodological question of their careers and flinched.
Flinching is human. But so is catching yourself, taking a breath, and doing the harder thing: figuring out the rules instead of refusing to play.
꒰ ✉︎ ꒱ If you made it this far without signing an open letter against me, you might as well subscribe. One essay a week at thevoiceofuser.com. No signatures required.