Question: Who Actually Wants/Buys Synthetic Users/AI Personae?
Author's note: I was on Reddit the other day, and somebody asked this question. It occurred to me, hey, that would make a good topic for the blog. I have been thinking about adding a new section: short questions, short responses, in addition to the longer pieces I already publish once or twice a month. Consider this the first one. So. Who the hell buys synthetic users?
Answer:
People who want to feel like they did research without actually doing research. That is the whole market. That is the entire customer base. Everyone buying these tools is purchasing the same thing: permission to skip the hard part.
Let me be more specific, because I am in a mood.
Product managers in feature factories buy this. They need something to paste into Jira before standup, and they need it fast. They are not rewarded for being right. They are rewarded for shipping. A tool that produces confident-sounding garbage in four minutes lets them check a box and move on. The garbage does not matter. The box matters. These tools exist to check boxes.
Founders buy this. Founders who think recruiting is the bottleneck, as if the problem with their product decisions is that they cannot get five people on a Zoom call fast enough. The actual problem is they do not want to hear what those five people would say. An AI persona will not tell you your baby is ugly. It will produce a severity rating and some bullet points and let you sleep at night. That is the product. Comfortable lies, formatted professionally.
Executives buy this. Executives who need "data-driven" on the slide but do not actually want data, because data is messy and uncertain and sometimes says the strategy is wrong. An LLM will never tell you the strategy is wrong. It will identify "potential friction points" and "opportunities for improvement" and other phrases designed to sound rigorous while committing to nothing. You can take that into a board meeting. You cannot take "we don't actually know yet" into a board meeting, even when that is the truth.
Organizations with no research infrastructure buy this, and honestly, these are the ones I feel for. No ops, no panel, no budget, no time. They are not choosing synthetic users over real research. They are choosing synthetic users over nothing. And nothing feels worse than something, even when the something is a language model hallucinating about user behavior. I get it. I do not respect it, but I get it.
Procurement departments buy this. "Standardized and scalable insights" fits beautifully in a vendor rubric. "Messy, inconsistent, expensive humans" does not. Procurement optimizes for checkboxes, not truth. These tools are engineered for procurement.
Here is what all these buyers have in common: they are not paying for insight. They are paying for the feeling of insight. They are paying for artifact production. They are paying to never have to sit across from a real person who is confused by their product, because that experience is uncomfortable and slow and sometimes politically dangerous.
And the tools deliver exactly what they are paying for. The output is confident: language models do not hedge the way uncertain humans hedge. The output looks like rigor: bullet points, severity ratings, task success percentages, all the aesthetic markers of systematic analysis. The output is emotionally convenient: no scheduling, no cancellations, no participant who tells you the thing your VP loves is incomprehensible. Just you and your laptop, getting validated by a machine that cannot disagree in ways that hurt.
The problem, of course, is that none of it is real.
The AI does not know your user is on a bus with a screaming toddler. It does not know they distrust your company because of that thing from three years ago. It does not know they have seventeen tabs open and a meeting in four minutes and a vague sense of dread about the economy. It knows what is statistically common in its training data. Your users are not statistical averages. They are specific people in specific contexts, and the specific context is the whole game.
A language model can describe confusion. It cannot experience confusion. It cannot sit there stuck, genuinely uncertain what to click next. A description of confusion is not confusion. You can write a poem about drowning without getting wet.
And when the AI says "users may struggle with this step," you have no idea if that is true. You cannot estimate the error rate. You cannot check the source. You cannot distinguish signal from hallucination. The output is unfalsifiable, which means it is not research. It is opinion laundering with a subscription fee.
These tools can help with heuristic reviews, preflight checks, and training exercises. Treat them as hypothesis generators, not validators. The moment you treat their output as evidence, you have left the building.
Why does this market exist? Because research is slow, teams are punished for uncertainty, and confident fiction is rewarded over humble truth. Synthetic users are not the solution. They are the symptom.
If you remove the humans, you remove the whole point.