
UXR in the Age of AI — Part 2: Qualitative Research, Now with 0% Human Contact and 100% Synthetic Empathy

"We don’t have time for real users." Cool—so now we simulate them. Welcome to the era of synthetic empathy: AI-generated personas, fake quotes, and insights that sound smart but aren’t. It’s faster, cheaper, and totally hollow. And we’re all pretending it’s research.


Author’s Note: If you missed Part 1 on how AI broke quant research (faster, dumber, shinier), you might want to check it out here before diving into the surreal world of AI qualitative tools. Spoiler: It doesn’t get less horrifying.

AI isn't replacing the function of research—it's replacing the appearance of it.

Welcome to the era of synthetic empathy—AI tools churning out personas, quotes, and fake understanding at scale. Faster. Cheaper. Totally hollow.

"We don't have time for real users."

The executive wasn't being malicious when he said those words in our quarterly planning meeting. Just practical. Deadlines were tight. Research was taking too long. Why interview eight actual customers when an AI could simulate 800 in minutes?

And there it was: the mindset that's quietly killing authentic user understanding across our industry. We're all complicit in letting it happen.

THE ALLURE: WHY THIS IS SO DAMN TEMPTING

Let's acknowledge the seduction first. There's a reason executives and product teams are falling for synthetic research:

  • It solves the impossible triangle. Fast, cheap, and good—AI promises all three when real research forces you to pick two.
  • It scales beautifully in spreadsheets. "We analyzed feedback from 10,000 users" sounds infinitely more impressive than "We spent three hours with eight users."
  • It delivers certainty in a sea of ambiguity. Real research is messy, contradictory, and filled with caveats. AI research delivers clean, actionable bullet points.
  • It never brings unwelcome news. AI insights tend to align conveniently with existing roadmaps. What a coincidence!

I get it. The appeal isn't mysterious. If I could press a button and get instant insights without the awkwardness of watching real humans struggle with my product—I'd be tempted too.

But it's like hiring a Roomba to run your therapy sessions. It'll move around the room efficiently, but it won't understand a damn thing about what makes you human.

THE SYSTEM: DESIGNED TO MAKE FAKE INSIGHTS INEVITABLE

The problem isn't just bad AI—it's an entire ecosystem engineered to prioritize the illusion of insight over actual understanding:

  • OKRs that reward velocity over truth. "Ship 15 features this quarter" will always triumph over "deeply understand 3 user problems."
  • Research teams measured by output quantity. "The research team completed 72 studies last quarter!" sounds better in an all-hands than "We spent three months understanding one critical user journey."
  • Product timelines that don't allow divergence. When the roadmap is locked before research begins, insights become performative decoration, not decision drivers.

I recently watched a senior PM present "user findings" based entirely on AI-processed support tickets. When asked if they'd verified these patterns with actual users, they looked confused. "The sample size was over 10,000," they said, missing the point entirely.

What you're normalizing isn't efficiency. It's organizational self-deception. You're training your company to make million-dollar decisions based on algorithmic astrology while calling it "data-driven."

THE UNCANNY VALLEY OF SYNTHETIC INSIGHTS

Here's what AI qualitative output actually looks like in the wild:

From a widely used research 'insight platform' known for pastel UIs and automatic everything:

"Theme: Seamless Integration. Users consistently express a desire for better integration with existing workflows, with 87% of sentiment positive when discussing potential improvements."

From a ChatGPT-generated persona:

Tech-Savvy Tina, 34
Urban digital native
Product Manager at a mid-size SaaS company
"I love frictionless experiences that respect my time. When an app just works, I feel an emotional connection to the brand."

Real humans don't talk like marketing copy. They say things like:

"This thing is weird. Where's the button that used to be here?" "I hate that I have to click through all this crap just to do the one thing I actually need."

The AI versions sound better in executive summaries. They just bear no relationship to reality.

THE CONSEQUENCES: WHAT WE LOSE

Let me tell you about Maria. Not an AI-generated persona—a real human. My aunt.

Maria is 62, manages a dental office, and has used the same scheduling software for 15 years. When her vendor rolled out a "modernized, AI-enhanced" interface, the company's AI-powered sentiment analysis of customer tickets showed "moderate frustration but overall positive reception to the streamlined design."

The reality? When I visited her last month, I saw her struggling but trying to hide it. She clicked the same non-functional icon six times, growing quieter with each failure. When I gently asked what was wrong, she just shrugged: "Oh, I'm just not good with computers. I'm sure it's fine."

What no AI could detect: Maria had created an entire system of workarounds. Simple tasks now took her twice as long. She blamed herself, not the software. Later, over coffee, she confided something that broke my heart—she was considering early retirement because her job had become too stressful since the "upgrade."

No sentiment analysis would have caught this. No AI would have seen the resignation in her eyes or heard the self-blame in her voice. No algorithm would have connected these dots to realize that a "modernized interface" was effectively ending someone's career.

This is what we lose when we replace human observation with digital simulation. Not just data points—human dignity.

THE STAKES: REAL PRODUCT FAILURES, NOT HYPOTHETICAL FEARS

If we continue down this path, the consequences aren't theoretical. They're already happening:

  1. We build solutions to problems no one has. Remember Google Wave? It solved collaboration problems that existed in PowerPoint decks but not in real life. Modern AI-driven research creates Wave-like products daily—beautiful solutions to fictional problems.
  2. We waste millions on features no one wants. A fintech company I recently consulted for spent $4.2 million developing an AI-powered "financial assistant" their research showed users were "eager to engage with." Real observation later revealed users actively avoided it, seeing it as intrusive and unhelpful. The feature is now on track to be sunset.
  3. We lose competitive advantage. TikTok didn't beat Instagram with better algorithms—they won by deeply understanding user behaviors Instagram missed. While Meta was A/B testing button colors, TikTok was watching how real teenagers actually wanted to express themselves.
  4. We’re gutting UX research teams under the guise of “efficiency.” I’ve seen it firsthand at two major tech companies—teams of four or five reduced to one overworked researcher expected to validate AI-generated fluff with just enough “real” research to make it look credible. The rest? Laid off. What’s left is a performative mess: one person juggling tools they didn’t ask for, trying to turn bullshit into strategy decks.
  5. We design for convenience, not needs. Look at the healthcare industry, where AI-driven UX has created patient portals that score high on usability metrics but fail catastrophically for elderly users, low-literacy users, and anyone in actual distress.

The cost isn't just bad products. It's digital environments optimized for fictional humans while real ones struggle, give up, and blame themselves.

THE ALTERNATIVE: INCONVENIENT, UNSEXY, AND NON-OPTIONAL


Real research doesn’t sparkle in slide decks. It doesn’t scale. It doesn’t fit neatly into roadmaps or OKRs. It’s watching someone struggle in silence, clicking the wrong thing three times before quietly giving up. It’s sitting in awkward pauses and realizing the feature everyone’s proud of is actively making users feel stupid.
It takes time. It makes stakeholders uncomfortable. It tells you the thing you didn’t want to hear. That’s the job.

The insights that actually change product direction? They don’t come from sentiment clustering. They come from noticing what isn’t said, from patterns that only make sense after you’ve watched five different people hack their own way through your design.

You don’t need 10,000 AI-simulated users. You need to watch five real ones closely enough to see the truth you’ve been avoiding.

THE NON-BULLSHIT GUIDE: WHAT ACTUALLY WORKS

If you're already deep in the AI qual tool ecosystem, you don't have to abandon ship entirely—but you do need to steer it differently:

Treat Direct Observation as Non-Negotiable
No product decision should be made without key team members having directly observed actual users. Not recordings. Not transcripts. Actual live humans struggling with your product in real time.

Use AI for Preparation, Not Insight
Let AI handle transcription, organization, and finding examples once YOU'VE identified meaningful patterns. It's a research assistant, not a researcher.

Flip the 80/20 Rule
Spend 80% of your time with actual users. Let AI handle the 20% of grunt work—transcripts, search, initial clustering. That's it.
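
To make that "grunt work" concrete, here is a minimal sketch of the kind of first-pass organization a script can safely do: rough-bucketing verbatim transcript quotes so a researcher can review them faster. The sample quotes, the cluster count, and the TF-IDF-plus-KMeans approach are illustrative assumptions, not any particular vendor's tool.

# Minimal sketch: first-pass clustering of interview transcript snippets.
# It only organizes raw quotes for human review; it does not produce "insights".
# The snippets, cluster count, and TF-IDF/KMeans choices are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Verbatim quotes pulled from session transcripts (hypothetical examples).
snippets = [
    "Where's the button that used to be here?",
    "I hate clicking through all this just to do the one thing I need.",
    "I made my own spreadsheet because the report screen confuses me.",
    "The new layout is fine once you know where everything moved.",
    "I just gave up and called support again.",
]

# Vectorize the quotes and group them into rough buckets.
vectors = TfidfVectorizer(stop_words="english").fit_transform(snippets)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Print each bucket so the researcher reads the actual quotes, not a summary of them.
for cluster in sorted(set(labels)):
    print(f"\nBucket {cluster}:")
    for quote, label in zip(snippets, labels):
        if label == cluster:
            print(f"  - {quote}")

Note what the output is: raw quotes a human still has to read. The clustering saves filing time; the interpretation stays with the researcher.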

Run a Parallel Process
Compare AI-led research with a real, human-led approach. Show stakeholders what's missing. Make the blind spots visible.

Create Organizational Antibodies
The next time someone asks, "Can't we just simulate users?" have ready examples of critical insights that would have been missed. Make the risk real.

THE CLOSING ARGUMENT: CHOOSE YOUR FUTURE

There are now two paths diverging in product development:

One relies on simulations, sentiment scores, and synthetic quotes that sound plausible in board presentations. It's faster. It's cheaper. It scales beautifully. It produces products that look good in demos but fail in real life.

The other insists on the messy, inefficient process of actually watching humans struggle with our creations. It's slower. It's more expensive. It doesn't scale. And it's the only path that leads to products people actually want to use.

Technology should augment human understanding—not impersonate it. The second we pretend otherwise, we stop designing for reality and start hallucinating our users.

The choice isn't about methods or tools. It's about whether we still believe that understanding humans requires actually encountering them.

So here's your choice:

You can build products for synthetic humans who express their emotions in sentiment scores, navigate interfaces with perfect rationality, and speak in marketing copy.

Or you can build for messy, contradictory, wonderfully unpredictable real humans who will always surprise you.

One path leads to products that look fantastic in case studies and fail spectacularly in the real world.

The other leads to products people actually use, love, and remember.

Your AI tools promised you could have both. They lied.

P.S. I'm working on a third piece that pulls together how we actually use AI smartly across the entire research lifecycle without losing our minds (or our users). Coming soon!

🎯 Still think this doesn't apply to you?

The next time your team suggests skipping user sessions to save time, remember: you're not saving time. You're borrowing it—at predatory interest rates. You'll pay it back tenfold in failed features, confused customers, and lost market share.

I write anywhere from one to three longform UX essays a week — equal parts strategy, sarcasm, and coping mechanism.

👉 Subscribe now if you believe that understanding humans requires actually talking to them.